Jul 12 00:05:40.425715 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:05:40.425741 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:05:40.425749 kernel: KASLR enabled
Jul 12 00:05:40.425755 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 12 00:05:40.425763 kernel: printk: bootconsole [pl11] enabled
Jul 12 00:05:40.425769 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:05:40.425777 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jul 12 00:05:40.425783 kernel: random: crng init done
Jul 12 00:05:40.425789 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:05:40.425795 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 12 00:05:40.425802 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:40.425808 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:40.425816 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 12 00:05:40.425822 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:40.425830 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:40.425836 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:40.425842 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:40.425850 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:40.425857 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:40.425863 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 12 00:05:40.425870 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:40.425876 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 12 00:05:40.425883 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 12 00:05:40.425908 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 12 00:05:40.425915 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 12 00:05:40.425922 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 12 00:05:40.425928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 12 00:05:40.425934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 12 00:05:40.425943 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 12 00:05:40.425949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 12 00:05:40.425955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 12 00:05:40.425962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 12 00:05:40.425968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 12 00:05:40.425974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 12 00:05:40.425981 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jul 12 00:05:40.425987 kernel: Zone ranges:
Jul 12 00:05:40.425993 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 12 00:05:40.426000 kernel: DMA32 empty
Jul 12 00:05:40.426006 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 12 00:05:40.426012 kernel: Movable zone start for each node
Jul 12 00:05:40.426023 kernel: Early memory node ranges
Jul 12 00:05:40.426030 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 12 00:05:40.426037 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jul 12 00:05:40.426043 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 12 00:05:40.426050 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 12 00:05:40.426058 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 12 00:05:40.426065 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 12 00:05:40.426071 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 12 00:05:40.426078 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 12 00:05:40.426085 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 12 00:05:40.426092 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:05:40.426098 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:05:40.426105 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:05:40.426111 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 12 00:05:40.426118 kernel: psci: SMC Calling Convention v1.4
Jul 12 00:05:40.426125 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 12 00:05:40.426131 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 12 00:05:40.426140 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:05:40.426146 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:05:40.426153 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:05:40.426160 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:05:40.426167 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:05:40.426173 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:05:40.426180 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:05:40.426187 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:05:40.426194 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:05:40.426200 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:05:40.426207 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 12 00:05:40.426215 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:05:40.426222 kernel: alternatives: applying boot alternatives
Jul 12 00:05:40.426230 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:05:40.426237 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:05:40.426244 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:05:40.426251 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:05:40.426258 kernel: Fallback order for Node 0: 0
Jul 12 00:05:40.426264 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 12 00:05:40.426271 kernel: Policy zone: Normal
Jul 12 00:05:40.426278 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:05:40.426285 kernel: software IO TLB: area num 2.
Jul 12 00:05:40.426293 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jul 12 00:05:40.426300 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved)
Jul 12 00:05:40.426307 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:05:40.426314 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:05:40.426321 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:05:40.426328 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:05:40.426335 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:05:40.426342 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:05:40.426349 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:05:40.426356 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:05:40.426362 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:05:40.426370 kernel: GICv3: 960 SPIs implemented
Jul 12 00:05:40.426377 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:05:40.426384 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:05:40.426390 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:05:40.426397 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 12 00:05:40.426404 kernel: ITS: No ITS available, not enabling LPIs
Jul 12 00:05:40.426411 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:05:40.426417 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:05:40.426424 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:05:40.426431 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:05:40.426438 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:05:40.426446 kernel: Console: colour dummy device 80x25
Jul 12 00:05:40.426454 kernel: printk: console [tty1] enabled
Jul 12 00:05:40.426461 kernel: ACPI: Core revision 20230628
Jul 12 00:05:40.426468 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:05:40.426474 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:05:40.426481 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:05:40.426488 kernel: landlock: Up and running.
Jul 12 00:05:40.426495 kernel: SELinux: Initializing.
Jul 12 00:05:40.426502 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:05:40.426509 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:05:40.426518 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:05:40.426525 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:05:40.426532 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 12 00:05:40.426539 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Jul 12 00:05:40.426546 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 12 00:05:40.426553 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:05:40.426560 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:05:40.426574 kernel: Remapping and enabling EFI services.
Jul 12 00:05:40.426581 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:05:40.426588 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:05:40.426595 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 12 00:05:40.426604 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:05:40.426611 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:05:40.426618 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:05:40.426626 kernel: SMP: Total of 2 processors activated.
Jul 12 00:05:40.426633 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:05:40.426642 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 12 00:05:40.426649 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:05:40.426656 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:05:40.426664 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:05:40.426671 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:05:40.426678 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:05:40.426691 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:05:40.426698 kernel: alternatives: applying system-wide alternatives
Jul 12 00:05:40.426705 kernel: devtmpfs: initialized
Jul 12 00:05:40.426714 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:05:40.426721 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:05:40.426728 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:05:40.426735 kernel: SMBIOS 3.1.0 present.
Jul 12 00:05:40.426743 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 12 00:05:40.426750 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:05:40.426757 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:05:40.426765 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:05:40.426772 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:05:40.426781 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:05:40.426789 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jul 12 00:05:40.426796 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:05:40.426803 kernel: cpuidle: using governor menu
Jul 12 00:05:40.426811 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:05:40.426818 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:05:40.426826 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:05:40.426833 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:05:40.426840 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:05:40.426849 kernel: Modules: 0 pages in range for non-PLT usage
Jul 12 00:05:40.426857 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:05:40.426864 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:05:40.426871 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:05:40.426878 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:05:40.426900 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:05:40.426908 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:05:40.426915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:05:40.426922 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:05:40.426932 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:05:40.426939 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:05:40.426946 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:05:40.426953 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:05:40.426961 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:05:40.426968 kernel: ACPI: Interpreter enabled
Jul 12 00:05:40.426976 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:05:40.426983 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:05:40.426990 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:05:40.426999 kernel: printk: bootconsole [pl11] disabled
Jul 12 00:05:40.427006 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 12 00:05:40.427013 kernel: iommu: Default domain type: Translated
Jul 12 00:05:40.427021 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:05:40.427028 kernel: efivars: Registered efivars operations
Jul 12 00:05:40.427035 kernel: vgaarb: loaded
Jul 12 00:05:40.427042 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:05:40.427050 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:05:40.427057 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:05:40.427066 kernel: pnp: PnP ACPI init
Jul 12 00:05:40.427073 kernel: pnp: PnP ACPI: found 0 devices
Jul 12 00:05:40.427080 kernel: NET: Registered PF_INET protocol family
Jul 12 00:05:40.427088 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:05:40.427095 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:05:40.427102 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:05:40.427109 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:05:40.427117 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:05:40.427124 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:05:40.427133 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:05:40.427140 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:05:40.427148 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:05:40.427155 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:05:40.427162 kernel: kvm [1]: HYP mode not available
Jul 12 00:05:40.427169 kernel: Initialise system trusted keyrings
Jul 12 00:05:40.427176 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:05:40.427183 kernel: Key type asymmetric registered
Jul 12 00:05:40.427190 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:05:40.427199 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:05:40.427206 kernel: io scheduler mq-deadline registered
Jul 12 00:05:40.427213 kernel: io scheduler kyber registered
Jul 12 00:05:40.427220 kernel: io scheduler bfq registered
Jul 12 00:05:40.427228 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:05:40.427235 kernel: thunder_xcv, ver 1.0
Jul 12 00:05:40.427242 kernel: thunder_bgx, ver 1.0
Jul 12 00:05:40.427249 kernel: nicpf, ver 1.0
Jul 12 00:05:40.427256 kernel: nicvf, ver 1.0
Jul 12 00:05:40.427419 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:05:40.427495 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:05:39 UTC (1752278739)
Jul 12 00:05:40.427506 kernel: efifb: probing for efifb
Jul 12 00:05:40.427513 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 12 00:05:40.427520 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 12 00:05:40.427528 kernel: efifb: scrolling: redraw
Jul 12 00:05:40.427535 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 12 00:05:40.427542 kernel: Console: switching to colour frame buffer device 128x48
Jul 12 00:05:40.427551 kernel: fb0: EFI VGA frame buffer device
Jul 12 00:05:40.427559 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 12 00:05:40.427566 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:05:40.427573 kernel: No ACPI PMU IRQ for CPU0
Jul 12 00:05:40.427580 kernel: No ACPI PMU IRQ for CPU1
Jul 12 00:05:40.427587 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 12 00:05:40.427595 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 12 00:05:40.427602 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:05:40.427609 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:05:40.427618 kernel: Segment Routing with IPv6
Jul 12 00:05:40.427625 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:05:40.427633 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:05:40.427640 kernel: Key type dns_resolver registered
Jul 12 00:05:40.427647 kernel: registered taskstats version 1
Jul 12 00:05:40.427654 kernel: Loading compiled-in X.509 certificates
Jul 12 00:05:40.427661 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15'
Jul 12 00:05:40.427668 kernel: Key type .fscrypt registered
Jul 12 00:05:40.427675 kernel: Key type fscrypt-provisioning registered
Jul 12 00:05:40.427684 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:05:40.427692 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:05:40.427699 kernel: ima: No architecture policies found
Jul 12 00:05:40.427706 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:05:40.427713 kernel: clk: Disabling unused clocks
Jul 12 00:05:40.427720 kernel: Freeing unused kernel memory: 39424K
Jul 12 00:05:40.427728 kernel: Run /init as init process
Jul 12 00:05:40.427735 kernel: with arguments:
Jul 12 00:05:40.427742 kernel: /init
Jul 12 00:05:40.427751 kernel: with environment:
Jul 12 00:05:40.427758 kernel: HOME=/
Jul 12 00:05:40.427765 kernel: TERM=linux
Jul 12 00:05:40.427773 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:05:40.427782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:05:40.427791 systemd[1]: Detected virtualization microsoft.
Jul 12 00:05:40.427799 systemd[1]: Detected architecture arm64.
Jul 12 00:05:40.427807 systemd[1]: Running in initrd.
Jul 12 00:05:40.427816 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:05:40.427824 systemd[1]: Hostname set to <localhost>.
Jul 12 00:05:40.427832 systemd[1]: Initializing machine ID from random generator.
Jul 12 00:05:40.427840 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:05:40.427848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:05:40.427855 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:05:40.427864 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:05:40.427872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:05:40.427882 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:05:40.427931 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:05:40.427941 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:05:40.427949 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:05:40.427957 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:05:40.427965 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:05:40.427975 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:05:40.427983 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:05:40.427991 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:05:40.427999 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:05:40.428006 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:05:40.428015 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:05:40.428022 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:05:40.428030 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:05:40.428038 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:05:40.428048 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:05:40.428056 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:05:40.428064 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:05:40.428072 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:05:40.428080 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:05:40.428088 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:05:40.428096 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:05:40.428104 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:05:40.428111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:05:40.428139 systemd-journald[217]: Collecting audit messages is disabled.
Jul 12 00:05:40.428159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:05:40.428168 systemd-journald[217]: Journal started
Jul 12 00:05:40.428189 systemd-journald[217]: Runtime Journal (/run/log/journal/a4ffaf8eae284d45ae89a1808b98c7f6) is 8.0M, max 78.5M, 70.5M free.
Jul 12 00:05:40.434809 systemd-modules-load[218]: Inserted module 'overlay'
Jul 12 00:05:40.473738 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:05:40.473806 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:05:40.457608 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:05:40.496820 kernel: Bridge firewalling registered
Jul 12 00:05:40.481477 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:05:40.504019 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jul 12 00:05:40.513210 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:05:40.524334 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:05:40.537177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:40.565464 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:05:40.575098 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:05:40.597105 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:05:40.637803 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:05:40.646932 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:05:40.663301 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:05:40.680696 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:05:40.695668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:05:40.724161 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:05:40.742094 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:05:40.769954 dracut-cmdline[250]: dracut-dracut-053
Jul 12 00:05:40.769954 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:05:40.777502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:05:40.849183 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:05:40.878080 kernel: SCSI subsystem initialized
Jul 12 00:05:40.878108 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:05:40.859538 systemd-resolved[255]: Positive Trust Anchors:
Jul 12 00:05:40.892830 kernel: iscsi: registered transport (tcp)
Jul 12 00:05:40.859548 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:05:40.859580 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:05:40.861879 systemd-resolved[255]: Defaulting to hostname 'linux'.
Jul 12 00:05:40.962642 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:05:40.962667 kernel: QLogic iSCSI HBA Driver
Jul 12 00:05:40.864452 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:05:40.888621 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:05:41.006343 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:05:41.024235 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:05:41.056909 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:05:41.056967 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:05:41.056979 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:05:41.113924 kernel: raid6: neonx8 gen() 15782 MB/s
Jul 12 00:05:41.131904 kernel: raid6: neonx4 gen() 15675 MB/s
Jul 12 00:05:41.151899 kernel: raid6: neonx2 gen() 13227 MB/s
Jul 12 00:05:41.173905 kernel: raid6: neonx1 gen() 10480 MB/s
Jul 12 00:05:41.193897 kernel: raid6: int64x8 gen() 6966 MB/s
Jul 12 00:05:41.213902 kernel: raid6: int64x4 gen() 7350 MB/s
Jul 12 00:05:41.234907 kernel: raid6: int64x2 gen() 6131 MB/s
Jul 12 00:05:41.259177 kernel: raid6: int64x1 gen() 5061 MB/s
Jul 12 00:05:41.259195 kernel: raid6: using algorithm neonx8 gen() 15782 MB/s
Jul 12 00:05:41.285308 kernel: raid6: .... xor() 11939 MB/s, rmw enabled
Jul 12 00:05:41.285336 kernel: raid6: using neon recovery algorithm
Jul 12 00:05:41.294902 kernel: xor: measuring software checksum speed
Jul 12 00:05:41.302267 kernel: 8regs : 18675 MB/sec
Jul 12 00:05:41.302279 kernel: 32regs : 19585 MB/sec
Jul 12 00:05:41.306068 kernel: arm64_neon : 27052 MB/sec
Jul 12 00:05:41.310056 kernel: xor: using function: arm64_neon (27052 MB/sec)
Jul 12 00:05:41.359906 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:05:41.370515 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:05:41.385061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:05:41.420991 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jul 12 00:05:41.426450 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:05:41.444015 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:05:41.461914 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Jul 12 00:05:41.490748 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:05:41.506158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:05:41.548245 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:05:41.571430 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:05:41.601089 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:05:41.611744 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:05:41.656468 kernel: hv_vmbus: Vmbus version:5.3
Jul 12 00:05:41.625915 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:05:41.645980 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:05:41.695991 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 12 00:05:41.696019 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 12 00:05:41.696031 kernel: hv_vmbus: registering driver hv_storvsc
Jul 12 00:05:41.696042 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jul 12 00:05:41.689117 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:05:41.709337 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 12 00:05:41.725617 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:05:41.755961 kernel: scsi host1: storvsc_host_t
Jul 12 00:05:41.756172 kernel: hv_vmbus: registering driver hid_hyperv
Jul 12 00:05:41.756185 kernel: scsi host0: storvsc_host_t
Jul 12 00:05:41.756207 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 12 00:05:41.725797 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:05:41.792570 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 12 00:05:41.792615 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 12 00:05:41.792769 kernel: hv_vmbus: registering driver hv_netvsc
Jul 12 00:05:41.748828 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:05:41.815594 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 12 00:05:41.785235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:05:41.785555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:41.807404 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:05:41.845055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:05:41.875224 kernel: PTP clock support registered
Jul 12 00:05:41.875252 kernel: hv_utils: Registering HyperV Utility Driver
Jul 12 00:05:41.875262 kernel: hv_vmbus: registering driver hv_utils
Jul 12 00:05:41.875271 kernel: hv_utils: Heartbeat IC version 3.0
Jul 12 00:05:41.868648 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:05:41.824963 kernel: hv_utils: Shutdown IC version 3.2
Jul 12 00:05:41.841004 kernel: hv_utils: TimeSync IC version 4.0
Jul 12 00:05:41.841021 kernel: hv_netvsc 00224879-84d6-0022-4879-84d600224879 eth0: VF slot 1 added
Jul 12 00:05:41.841164 systemd-journald[217]: Time jumped backwards, rotating.
Jul 12 00:05:41.798749 systemd-resolved[255]: Clock change detected. Flushing caches.
Jul 12 00:05:41.885280 kernel: hv_vmbus: registering driver hv_pci
Jul 12 00:05:41.885302 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 12 00:05:41.885503 kernel: hv_pci e1709129-ede0-4bc0-9626-b1f967dcc4c3: PCI VMBus probing: Using version 0x10004
Jul 12 00:05:41.885613 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 12 00:05:41.885626 kernel: hv_pci e1709129-ede0-4bc0-9626-b1f967dcc4c3: PCI host bridge to bus ede0:00
Jul 12 00:05:41.885706 kernel: pci_bus ede0:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 12 00:05:41.816738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:05:41.899020 kernel: pci_bus ede0:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 12 00:05:41.816828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:41.907609 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 12 00:05:41.834505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:05:41.919238 kernel: pci ede0:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 12 00:05:41.922135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:41.966319 kernel: pci ede0:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 12 00:05:41.966370 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 12 00:05:41.966540 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 12 00:05:41.966626 kernel: pci ede0:00:02.0: enabling Extended Tags
Jul 12 00:05:41.966644 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 12 00:05:41.960517 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:05:41.989428 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 12 00:05:41.989771 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 12 00:05:42.009861 kernel: pci ede0:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ede0:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 12 00:05:42.010048 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:05:42.010059 kernel: pci_bus ede0:00: busn_res: [bus 00-ff] end is updated to 00
Jul 12 00:05:42.019302 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 12 00:05:42.019481 kernel: pci ede0:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 12 00:05:42.037549 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:05:42.082452 kernel: mlx5_core ede0:00:02.0: enabling device (0000 -> 0002)
Jul 12 00:05:42.089224 kernel: mlx5_core ede0:00:02.0: firmware version: 16.30.1284
Jul 12 00:05:42.296985 kernel: hv_netvsc 00224879-84d6-0022-4879-84d600224879 eth0: VF registering: eth1
Jul 12 00:05:42.297194 kernel: mlx5_core ede0:00:02.0 eth1: joined to eth0
Jul 12 00:05:42.305054 kernel: mlx5_core ede0:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 12 00:05:42.316257 kernel: mlx5_core ede0:00:02.0 enP60896s1: renamed from eth1
Jul 12 00:05:42.572264 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (488)
Jul 12 00:05:42.586267 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (497)
Jul 12 00:05:42.587516 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 12 00:05:42.620057 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 12 00:05:42.642963 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 12 00:05:42.650396 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 12 00:05:42.676062 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 12 00:05:42.696465 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:05:42.721708 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:05:42.729226 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:05:43.740716 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 12 00:05:43.740787 disk-uuid[607]: The operation has completed successfully.
Jul 12 00:05:43.803111 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:05:43.803240 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:05:43.835344 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:05:43.848778 sh[693]: Success
Jul 12 00:05:43.880281 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:05:44.059658 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:05:44.087361 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:05:44.098265 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:05:44.132987 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:05:44.133042 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:05:44.140283 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:05:44.145741 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:05:44.150421 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:05:44.468620 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:05:44.475300 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:05:44.499508 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:05:44.511787 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:05:44.548884 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:05:44.548949 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:05:44.554414 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:05:44.607127 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:05:44.614342 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:05:44.627718 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:05:44.633077 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:05:44.649846 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:05:44.674713 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:05:44.692375 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:05:44.726344 systemd-networkd[877]: lo: Link UP
Jul 12 00:05:44.726354 systemd-networkd[877]: lo: Gained carrier
Jul 12 00:05:44.728065 systemd-networkd[877]: Enumeration completed
Jul 12 00:05:44.728181 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:05:44.737823 systemd[1]: Reached target network.target - Network.
Jul 12 00:05:44.742086 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:05:44.742091 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:05:44.830234 kernel: mlx5_core ede0:00:02.0 enP60896s1: Link up
Jul 12 00:05:44.871410 kernel: hv_netvsc 00224879-84d6-0022-4879-84d600224879 eth0: Data path switched to VF: enP60896s1
Jul 12 00:05:44.871686 systemd-networkd[877]: enP60896s1: Link UP
Jul 12 00:05:44.872362 systemd-networkd[877]: eth0: Link UP
Jul 12 00:05:44.872504 systemd-networkd[877]: eth0: Gained carrier
Jul 12 00:05:44.872515 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:05:44.899787 systemd-networkd[877]: enP60896s1: Gained carrier
Jul 12 00:05:44.914257 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 12 00:05:46.004217 ignition[852]: Ignition 2.19.0
Jul 12 00:05:46.004229 ignition[852]: Stage: fetch-offline
Jul 12 00:05:46.006263 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:05:46.004265 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:46.025529 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 12 00:05:46.004273 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:46.037365 systemd-networkd[877]: enP60896s1: Gained IPv6LL
Jul 12 00:05:46.004384 ignition[852]: parsed url from cmdline: ""
Jul 12 00:05:46.037538 systemd-networkd[877]: eth0: Gained IPv6LL
Jul 12 00:05:46.004387 ignition[852]: no config URL provided
Jul 12 00:05:46.004392 ignition[852]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:05:46.004399 ignition[852]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:05:46.004404 ignition[852]: failed to fetch config: resource requires networking
Jul 12 00:05:46.004573 ignition[852]: Ignition finished successfully
Jul 12 00:05:46.052692 ignition[885]: Ignition 2.19.0
Jul 12 00:05:46.052699 ignition[885]: Stage: fetch
Jul 12 00:05:46.052898 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:46.052916 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:46.053026 ignition[885]: parsed url from cmdline: ""
Jul 12 00:05:46.053029 ignition[885]: no config URL provided
Jul 12 00:05:46.053034 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:05:46.053040 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:05:46.053069 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 12 00:05:46.160455 ignition[885]: GET result: OK
Jul 12 00:05:46.160545 ignition[885]: config has been read from IMDS userdata
Jul 12 00:05:46.160588 ignition[885]: parsing config with SHA512: 5a1dd3cb007a0ba5567be0e44f73567d9d659f77675f8b233eaac17bdbc124e77ab420c091f6159d84eb61f7345306d9743977e39e6548f189193694b67232cd
Jul 12 00:05:46.168114 unknown[885]: fetched base config from "system"
Jul 12 00:05:46.168878 ignition[885]: fetch: fetch complete
Jul 12 00:05:46.168453 unknown[885]: fetched base config from "system"
Jul 12 00:05:46.168883 ignition[885]: fetch: fetch passed
Jul 12 00:05:46.168477 unknown[885]: fetched user config from "azure"
Jul 12 00:05:46.168936 ignition[885]: Ignition finished successfully
Jul 12 00:05:46.174034 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 12 00:05:46.196542 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:05:46.217406 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:05:46.214291 ignition[892]: Ignition 2.19.0
Jul 12 00:05:46.214298 ignition[892]: Stage: kargs
Jul 12 00:05:46.214462 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:46.214472 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:46.215421 ignition[892]: kargs: kargs passed
Jul 12 00:05:46.241508 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:05:46.215478 ignition[892]: Ignition finished successfully
Jul 12 00:05:46.268631 ignition[899]: Ignition 2.19.0
Jul 12 00:05:46.268649 ignition[899]: Stage: disks
Jul 12 00:05:46.273005 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:05:46.268835 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:46.280749 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:05:46.268844 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:46.291661 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:05:46.269796 ignition[899]: disks: disks passed
Jul 12 00:05:46.304270 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:05:46.269855 ignition[899]: Ignition finished successfully
Jul 12 00:05:46.316286 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:05:46.327815 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:05:46.352501 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:05:46.514352 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 12 00:05:46.522639 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:05:46.540342 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:05:46.597227 kernel: EXT4-fs (sda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:05:46.597448 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:05:46.602752 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:05:46.705288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:05:46.717114 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:05:46.725390 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 12 00:05:46.739762 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:05:46.739801 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:05:46.756248 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:05:46.802283 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (918)
Jul 12 00:05:46.802331 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:05:46.802898 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:05:46.816765 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:05:46.828532 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:05:46.836224 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:05:46.837570 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:05:48.412748 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:05:48.472627 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:05:48.481346 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:05:48.495432 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:05:48.504900 coreos-metadata[920]: Jul 12 00:05:48.504 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 12 00:05:48.515586 coreos-metadata[920]: Jul 12 00:05:48.515 INFO Fetch successful
Jul 12 00:05:48.522275 coreos-metadata[920]: Jul 12 00:05:48.520 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 12 00:05:48.534924 coreos-metadata[920]: Jul 12 00:05:48.534 INFO Fetch successful
Jul 12 00:05:48.541072 coreos-metadata[920]: Jul 12 00:05:48.536 INFO wrote hostname ci-4081.3.4-n-ddca76aad7 to /sysroot/etc/hostname
Jul 12 00:05:48.541624 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:05:52.050376 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:05:52.068452 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:05:52.077395 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:05:52.097252 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:05:52.098354 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:05:52.121398 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:05:52.128770 ignition[1037]: INFO : Ignition 2.19.0
Jul 12 00:05:52.128770 ignition[1037]: INFO : Stage: mount
Jul 12 00:05:52.128770 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:52.128770 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:52.128770 ignition[1037]: INFO : mount: mount passed
Jul 12 00:05:52.128770 ignition[1037]: INFO : Ignition finished successfully
Jul 12 00:05:52.132755 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:05:52.154320 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:05:52.170585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:05:52.208226 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1050)
Jul 12 00:05:52.225174 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:05:52.225223 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:05:52.225235 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:05:52.231221 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:05:52.232893 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:05:52.258619 ignition[1067]: INFO : Ignition 2.19.0
Jul 12 00:05:52.258619 ignition[1067]: INFO : Stage: files
Jul 12 00:05:52.266918 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:52.266918 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:52.266918 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:05:52.285703 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:05:52.285703 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:05:52.349804 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:05:52.357324 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:05:52.357324 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:05:52.357324 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:05:52.357324 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 12 00:05:52.350182 unknown[1067]: wrote ssh authorized keys file for user: core
Jul 12 00:05:52.404524 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:05:52.545139 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 12 00:05:53.183852 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 12 00:05:54.047802 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:05:54.047802 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 12 00:05:54.096449 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: files passed
Jul 12 00:05:54.109680 ignition[1067]: INFO : Ignition finished successfully
Jul 12 00:05:54.109988 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:05:54.144541 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:05:54.159425 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:05:54.213523 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:05:54.213629 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:05:54.318475 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:05:54.318475 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:05:54.337902 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:05:54.329872 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:05:54.345636 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:05:54.376493 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:05:54.412004 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:05:54.412116 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:05:54.420793 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:05:54.433444 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:05:54.447805 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:05:54.466474 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:05:54.499146 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:05:54.517553 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:05:54.536548 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:05:54.550248 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:05:54.557472 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:05:54.569337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:05:54.569512 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:05:54.587058 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:05:54.600075 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:05:54.611273 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:05:54.622831 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:05:54.636349 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:05:54.649415 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:05:54.662291 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:05:54.676125 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:05:54.689801 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:05:54.702416 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:05:54.713050 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:05:54.713240 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:05:54.730342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:05:54.743168 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:05:54.756706 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:05:54.756818 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:05:54.773355 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:05:54.773532 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:05:54.793993 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:05:54.794176 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:05:54.807828 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:05:54.807972 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:05:54.819762 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 12 00:05:54.819908 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:05:54.854375 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:05:54.874838 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:05:54.894573 ignition[1119]: INFO : Ignition 2.19.0
Jul 12 00:05:54.894573 ignition[1119]: INFO : Stage: umount
Jul 12 00:05:54.894573 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:54.894573 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:54.894573 ignition[1119]: INFO : umount: umount passed
Jul 12 00:05:54.894573 ignition[1119]: INFO : Ignition finished successfully
Jul 12 00:05:54.875138 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:05:54.906067 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:05:54.915571 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:05:54.915824 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:05:54.923348 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:05:54.923521 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:05:54.943351 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:05:54.944248 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:05:54.944361 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:05:54.963385 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:05:54.963513 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:05:54.971834 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:05:54.971883 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:05:54.978428 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:05:54.978492 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:05:54.988954 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 12 00:05:54.989005 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 12 00:05:55.001434 systemd[1]: Stopped target network.target - Network.
Jul 12 00:05:55.011629 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:05:55.011690 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:05:55.027394 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:05:55.039612 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:05:55.045243 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:05:55.052889 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:05:55.065680 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:05:55.076583 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:05:55.076646 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:05:55.087653 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:05:55.087705 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:05:55.099065 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:05:55.099135 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:05:55.110749 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:05:55.110837 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
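The teardown records above use systemd's fixed phrasing ("<unit>: Deactivated successfully." followed by "Stopped <unit> - <description>."), so the order in which initrd units were shut down can be recovered mechanically. A small sketch under that assumption, written against the phrasing visible in this capture; it deliberately skips "Stopped target ..." records and only counts service, socket, path, and mount units:

```python
# Sketch: recover the initrd teardown order from "Stopped ..."/"Closed ..."
# journal records like those above. Assumes one record per line and the
# systemd message wording seen in this capture.
import re
import sys

STOP_RE = re.compile(
    r'systemd\[1\]: (?:Stopped|Closed) '
    r'(\S+\.(?:service|socket|path|mount)) - '
)

order = [m.group(1) for line in sys.stdin if (m := STOP_RE.search(line))]
for idx, unit in enumerate(order, start=1):
    print(f"{idx:3d}. {unit}")
```

Applied to the records above it would list dracut-pre-pivot.service first and ignition-setup-pre.service near the end, matching the reverse of the startup dependency order.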
Jul 12 00:05:55.123887 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:05:55.135958 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:05:55.395418 kernel: hv_netvsc 00224879-84d6-0022-4879-84d600224879 eth0: Data path switched from VF: enP60896s1
Jul 12 00:05:55.148259 systemd-networkd[877]: eth0: DHCPv6 lease lost
Jul 12 00:05:55.149605 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:05:55.149694 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:05:55.166657 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:05:55.166756 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:05:55.182682 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:05:55.182948 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:05:55.195948 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:05:55.196014 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:05:55.208161 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:05:55.208250 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:05:55.234425 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:05:55.245087 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:05:55.245168 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:05:55.261280 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:05:55.261334 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:05:55.272930 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:05:55.272983 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:05:55.284374 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:05:55.284425 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:05:55.296866 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:05:55.328118 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:05:55.328396 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:05:55.341221 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:05:55.341274 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:05:55.354156 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:05:55.354190 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:05:55.375599 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:05:55.375666 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:05:55.395500 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:05:55.395565 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:05:55.406437 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:05:55.406501 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:05:55.444515 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:05:55.691264 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:05:55.460725 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:05:55.460817 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:05:55.474589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:05:55.474662 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:55.490070 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:05:55.490229 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:05:55.504464 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:05:55.504635 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:05:55.521537 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:05:55.548570 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:05:55.568424 systemd[1]: Switching root.
Jul 12 00:05:55.758631 systemd-journald[217]: Journal stopped
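"Switching root." and the final "Journal stopped" mark the hand-off from the initrd to the real root filesystem. In console captures of this kind, records after that boundary typically come from the main system's journal, which can re-emit earlier boot messages (kernel ring buffer and flushed /run journal), so the same 00:05:40 entries may appear twice. A hedged sketch, assuming the journald marker wording seen above, that splits a capture at this boundary:

```python
# Sketch: split a captured boot log at the initrd-to-real-root hand-off.
# Assumes the marker seen above ("systemd-journald[...]: Journal stopped");
# other captures may phrase or order this differently.
import sys

def split_at_switch_root(lines):
    """Return (initrd_records, post_switch_root_records)."""
    initrd, main = [], []
    bucket = initrd
    for line in lines:
        bucket.append(line)
        if "systemd-journald" in line and line.rstrip().endswith("Journal stopped"):
            bucket = main  # subsequent records belong to the real root's journal
    return initrd, main

if __name__ == "__main__":
    initrd, main = split_at_switch_root(sys.stdin.readlines())
    print(f"initrd records: {len(initrd)}, post-switch-root records: {len(main)}")
```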
ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jul 12 00:05:40.425915 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jul 12 00:05:40.425922 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jul 12 00:05:40.425928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jul 12 00:05:40.425934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jul 12 00:05:40.425943 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jul 12 00:05:40.425949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jul 12 00:05:40.425955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jul 12 00:05:40.425962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jul 12 00:05:40.425968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jul 12 00:05:40.425974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jul 12 00:05:40.425981 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jul 12 00:05:40.425987 kernel: Zone ranges: Jul 12 00:05:40.425993 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 12 00:05:40.426000 kernel: DMA32 empty Jul 12 00:05:40.426006 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 12 00:05:40.426012 kernel: Movable zone start for each node Jul 12 00:05:40.426023 kernel: Early memory node ranges Jul 12 00:05:40.426030 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 12 00:05:40.426037 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jul 12 00:05:40.426043 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 12 00:05:40.426050 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 12 00:05:40.426058 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 12 00:05:40.426065 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 12 00:05:40.426071 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 12 00:05:40.426078 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 12 00:05:40.426085 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 12 00:05:40.426092 kernel: psci: probing for conduit method from ACPI. Jul 12 00:05:40.426098 kernel: psci: PSCIv1.1 detected in firmware. Jul 12 00:05:40.426105 kernel: psci: Using standard PSCI v0.2 function IDs Jul 12 00:05:40.426111 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jul 12 00:05:40.426118 kernel: psci: SMC Calling Convention v1.4 Jul 12 00:05:40.426125 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 12 00:05:40.426131 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 12 00:05:40.426140 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 12 00:05:40.426146 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 12 00:05:40.426153 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 12 00:05:40.426160 kernel: Detected PIPT I-cache on CPU0 Jul 12 00:05:40.426167 kernel: CPU features: detected: GIC system register CPU interface Jul 12 00:05:40.426173 kernel: CPU features: detected: Hardware dirty bit management Jul 12 00:05:40.426180 kernel: CPU features: detected: Spectre-BHB Jul 12 00:05:40.426187 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 12 00:05:40.426194 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 12 00:05:40.426200 kernel: CPU features: detected: ARM erratum 1418040 Jul 12 00:05:40.426207 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jul 12 00:05:40.426215 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 12 00:05:40.426222 kernel: alternatives: applying boot alternatives Jul 12 00:05:40.426230 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:05:40.426237 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 12 00:05:40.426244 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 00:05:40.426251 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 00:05:40.426258 kernel: Fallback order for Node 0: 0 Jul 12 00:05:40.426264 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jul 12 00:05:40.426271 kernel: Policy zone: Normal Jul 12 00:05:40.426278 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 00:05:40.426285 kernel: software IO TLB: area num 2. Jul 12 00:05:40.426293 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jul 12 00:05:40.426300 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved) Jul 12 00:05:40.426307 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 12 00:05:40.426314 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 00:05:40.426321 kernel: rcu: RCU event tracing is enabled. Jul 12 00:05:40.426328 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 12 00:05:40.426335 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 00:05:40.426342 kernel: Tracing variant of Tasks RCU enabled. Jul 12 00:05:40.426349 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 12 00:05:40.426356 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 12 00:05:40.426362 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 12 00:05:40.426370 kernel: GICv3: 960 SPIs implemented Jul 12 00:05:40.426377 kernel: GICv3: 0 Extended SPIs implemented Jul 12 00:05:40.426384 kernel: Root IRQ handler: gic_handle_irq Jul 12 00:05:40.426390 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 12 00:05:40.426397 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 12 00:05:40.426404 kernel: ITS: No ITS available, not enabling LPIs Jul 12 00:05:40.426411 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 12 00:05:40.426417 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:05:40.426424 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 12 00:05:40.426431 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 12 00:05:40.426438 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 12 00:05:40.426446 kernel: Console: colour dummy device 80x25 Jul 12 00:05:40.426454 kernel: printk: console [tty1] enabled Jul 12 00:05:40.426461 kernel: ACPI: Core revision 20230628 Jul 12 00:05:40.426468 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 12 00:05:40.426474 kernel: pid_max: default: 32768 minimum: 301 Jul 12 00:05:40.426481 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 12 00:05:40.426488 kernel: landlock: Up and running. Jul 12 00:05:40.426495 kernel: SELinux: Initializing. Jul 12 00:05:40.426502 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:05:40.426509 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:05:40.426518 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 12 00:05:40.426525 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 12 00:05:40.426532 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jul 12 00:05:40.426539 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Jul 12 00:05:40.426546 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 12 00:05:40.426553 kernel: rcu: Hierarchical SRCU implementation. Jul 12 00:05:40.426560 kernel: rcu: Max phase no-delay instances is 400. Jul 12 00:05:40.426574 kernel: Remapping and enabling EFI services. Jul 12 00:05:40.426581 kernel: smp: Bringing up secondary CPUs ... Jul 12 00:05:40.426588 kernel: Detected PIPT I-cache on CPU1 Jul 12 00:05:40.426595 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 12 00:05:40.426604 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:05:40.426611 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 12 00:05:40.426618 kernel: smp: Brought up 1 node, 2 CPUs Jul 12 00:05:40.426626 kernel: SMP: Total of 2 processors activated. 
Jul 12 00:05:40.426633 kernel: CPU features: detected: 32-bit EL0 Support Jul 12 00:05:40.426642 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 12 00:05:40.426649 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 12 00:05:40.426656 kernel: CPU features: detected: CRC32 instructions Jul 12 00:05:40.426664 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 12 00:05:40.426671 kernel: CPU features: detected: LSE atomic instructions Jul 12 00:05:40.426678 kernel: CPU features: detected: Privileged Access Never Jul 12 00:05:40.426691 kernel: CPU: All CPU(s) started at EL1 Jul 12 00:05:40.426698 kernel: alternatives: applying system-wide alternatives Jul 12 00:05:40.426705 kernel: devtmpfs: initialized Jul 12 00:05:40.426714 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 00:05:40.426721 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 12 00:05:40.426728 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 00:05:40.426735 kernel: SMBIOS 3.1.0 present. Jul 12 00:05:40.426743 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 12 00:05:40.426750 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 00:05:40.426757 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 12 00:05:40.426765 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 12 00:05:40.426772 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 12 00:05:40.426781 kernel: audit: initializing netlink subsys (disabled) Jul 12 00:05:40.426789 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jul 12 00:05:40.426796 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 00:05:40.426803 kernel: cpuidle: using governor menu Jul 12 00:05:40.426811 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 12 00:05:40.426818 kernel: ASID allocator initialised with 32768 entries Jul 12 00:05:40.426826 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 00:05:40.426833 kernel: Serial: AMBA PL011 UART driver Jul 12 00:05:40.426840 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 12 00:05:40.426849 kernel: Modules: 0 pages in range for non-PLT usage Jul 12 00:05:40.426857 kernel: Modules: 509008 pages in range for PLT usage Jul 12 00:05:40.426864 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 00:05:40.426871 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 12 00:05:40.426878 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 12 00:05:40.426900 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 12 00:05:40.426908 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 00:05:40.426915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 12 00:05:40.426922 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 12 00:05:40.426932 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 12 00:05:40.426939 kernel: ACPI: Added _OSI(Module Device) Jul 12 00:05:40.426946 kernel: ACPI: Added _OSI(Processor Device) Jul 12 00:05:40.426953 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 00:05:40.426961 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 00:05:40.426968 kernel: ACPI: Interpreter enabled Jul 12 00:05:40.426976 kernel: ACPI: Using GIC for interrupt routing Jul 12 00:05:40.426983 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 12 00:05:40.426990 kernel: printk: console [ttyAMA0] enabled Jul 12 00:05:40.426999 kernel: printk: bootconsole [pl11] disabled Jul 12 00:05:40.427006 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 12 00:05:40.427013 kernel: iommu: Default domain type: Translated Jul 12 00:05:40.427021 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 12 00:05:40.427028 kernel: efivars: Registered efivars operations Jul 12 00:05:40.427035 kernel: vgaarb: loaded Jul 12 00:05:40.427042 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 12 00:05:40.427050 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 00:05:40.427057 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 00:05:40.427066 kernel: pnp: PnP ACPI init Jul 12 00:05:40.427073 kernel: pnp: PnP ACPI: found 0 devices Jul 12 00:05:40.427080 kernel: NET: Registered PF_INET protocol family Jul 12 00:05:40.427088 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 00:05:40.427095 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 00:05:40.427102 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 00:05:40.427109 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 12 00:05:40.427117 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 12 00:05:40.427124 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 00:05:40.427133 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:05:40.427140 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:05:40.427148 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 
00:05:40.427155 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:05:40.427162 kernel: kvm [1]: HYP mode not available Jul 12 00:05:40.427169 kernel: Initialise system trusted keyrings Jul 12 00:05:40.427176 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:05:40.427183 kernel: Key type asymmetric registered Jul 12 00:05:40.427190 kernel: Asymmetric key parser 'x509' registered Jul 12 00:05:40.427199 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 12 00:05:40.427206 kernel: io scheduler mq-deadline registered Jul 12 00:05:40.427213 kernel: io scheduler kyber registered Jul 12 00:05:40.427220 kernel: io scheduler bfq registered Jul 12 00:05:40.427228 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:05:40.427235 kernel: thunder_xcv, ver 1.0 Jul 12 00:05:40.427242 kernel: thunder_bgx, ver 1.0 Jul 12 00:05:40.427249 kernel: nicpf, ver 1.0 Jul 12 00:05:40.427256 kernel: nicvf, ver 1.0 Jul 12 00:05:40.427419 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:05:40.427495 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:05:39 UTC (1752278739) Jul 12 00:05:40.427506 kernel: efifb: probing for efifb Jul 12 00:05:40.427513 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 12 00:05:40.427520 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 12 00:05:40.427528 kernel: efifb: scrolling: redraw Jul 12 00:05:40.427535 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 12 00:05:40.427542 kernel: Console: switching to colour frame buffer device 128x48 Jul 12 00:05:40.427551 kernel: fb0: EFI VGA frame buffer device Jul 12 00:05:40.427559 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 12 00:05:40.427566 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:05:40.427573 kernel: No ACPI PMU IRQ for CPU0 Jul 12 00:05:40.427580 kernel: No ACPI PMU IRQ for CPU1 Jul 12 00:05:40.427587 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jul 12 00:05:40.427595 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 12 00:05:40.427602 kernel: watchdog: Hard watchdog permanently disabled Jul 12 00:05:40.427609 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:05:40.427618 kernel: Segment Routing with IPv6 Jul 12 00:05:40.427625 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:05:40.427633 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:05:40.427640 kernel: Key type dns_resolver registered Jul 12 00:05:40.427647 kernel: registered taskstats version 1 Jul 12 00:05:40.427654 kernel: Loading compiled-in X.509 certificates Jul 12 00:05:40.427661 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15' Jul 12 00:05:40.427668 kernel: Key type .fscrypt registered Jul 12 00:05:40.427675 kernel: Key type fscrypt-provisioning registered Jul 12 00:05:40.427684 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 12 00:05:40.427692 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:05:40.427699 kernel: ima: No architecture policies found Jul 12 00:05:40.427706 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:05:40.427713 kernel: clk: Disabling unused clocks Jul 12 00:05:40.427720 kernel: Freeing unused kernel memory: 39424K Jul 12 00:05:40.427728 kernel: Run /init as init process Jul 12 00:05:40.427735 kernel: with arguments: Jul 12 00:05:40.427742 kernel: /init Jul 12 00:05:40.427751 kernel: with environment: Jul 12 00:05:40.427758 kernel: HOME=/ Jul 12 00:05:40.427765 kernel: TERM=linux Jul 12 00:05:40.427773 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:05:40.427782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:05:40.427791 systemd[1]: Detected virtualization microsoft. Jul 12 00:05:40.427799 systemd[1]: Detected architecture arm64. Jul 12 00:05:40.427807 systemd[1]: Running in initrd. Jul 12 00:05:40.427816 systemd[1]: No hostname configured, using default hostname. Jul 12 00:05:40.427824 systemd[1]: Hostname set to . Jul 12 00:05:40.427832 systemd[1]: Initializing machine ID from random generator. Jul 12 00:05:40.427840 systemd[1]: Queued start job for default target initrd.target. Jul 12 00:05:40.427848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:05:40.427855 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:05:40.427864 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 12 00:05:40.427872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:05:40.427882 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 00:05:40.427931 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 00:05:40.427941 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 12 00:05:40.427949 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 00:05:40.427957 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:05:40.427965 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:05:40.427975 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:05:40.427983 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:05:40.427991 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:05:40.427999 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:05:40.428006 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:05:40.428015 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:05:40.428022 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 00:05:40.428030 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jul 12 00:05:40.428038 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:05:40.428048 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:05:40.428056 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:05:40.428064 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:05:40.428072 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 00:05:40.428080 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:05:40.428088 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 12 00:05:40.428096 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:05:40.428104 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:05:40.428111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:05:40.428139 systemd-journald[217]: Collecting audit messages is disabled. Jul 12 00:05:40.428159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:05:40.428168 systemd-journald[217]: Journal started Jul 12 00:05:40.428189 systemd-journald[217]: Runtime Journal (/run/log/journal/a4ffaf8eae284d45ae89a1808b98c7f6) is 8.0M, max 78.5M, 70.5M free. Jul 12 00:05:40.434809 systemd-modules-load[218]: Inserted module 'overlay' Jul 12 00:05:40.473738 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:05:40.473806 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:05:40.457608 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 00:05:40.496820 kernel: Bridge firewalling registered Jul 12 00:05:40.481477 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:05:40.504019 systemd-modules-load[218]: Inserted module 'br_netfilter' Jul 12 00:05:40.513210 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:05:40.524334 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:05:40.537177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:05:40.565464 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:05:40.575098 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:05:40.597105 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:05:40.637803 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:05:40.646932 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:05:40.663301 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:05:40.680696 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:05:40.695668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:05:40.724161 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 00:05:40.742094 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 12 00:05:40.769954 dracut-cmdline[250]: dracut-dracut-053 Jul 12 00:05:40.769954 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:05:40.777502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:05:40.849183 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:05:40.878080 kernel: SCSI subsystem initialized Jul 12 00:05:40.878108 kernel: Loading iSCSI transport class v2.0-870. Jul 12 00:05:40.859538 systemd-resolved[255]: Positive Trust Anchors: Jul 12 00:05:40.892830 kernel: iscsi: registered transport (tcp) Jul 12 00:05:40.859548 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:05:40.859580 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:05:40.861879 systemd-resolved[255]: Defaulting to hostname 'linux'. Jul 12 00:05:40.962642 kernel: iscsi: registered transport (qla4xxx) Jul 12 00:05:40.962667 kernel: QLogic iSCSI HBA Driver Jul 12 00:05:40.864452 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:05:40.888621 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:05:41.006343 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 12 00:05:41.024235 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 12 00:05:41.056909 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 12 00:05:41.056967 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:05:41.056979 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 12 00:05:41.113924 kernel: raid6: neonx8 gen() 15782 MB/s Jul 12 00:05:41.131904 kernel: raid6: neonx4 gen() 15675 MB/s Jul 12 00:05:41.151899 kernel: raid6: neonx2 gen() 13227 MB/s Jul 12 00:05:41.173905 kernel: raid6: neonx1 gen() 10480 MB/s Jul 12 00:05:41.193897 kernel: raid6: int64x8 gen() 6966 MB/s Jul 12 00:05:41.213902 kernel: raid6: int64x4 gen() 7350 MB/s Jul 12 00:05:41.234907 kernel: raid6: int64x2 gen() 6131 MB/s Jul 12 00:05:41.259177 kernel: raid6: int64x1 gen() 5061 MB/s Jul 12 00:05:41.259195 kernel: raid6: using algorithm neonx8 gen() 15782 MB/s Jul 12 00:05:41.285308 kernel: raid6: .... 
xor() 11939 MB/s, rmw enabled Jul 12 00:05:41.285336 kernel: raid6: using neon recovery algorithm Jul 12 00:05:41.294902 kernel: xor: measuring software checksum speed Jul 12 00:05:41.302267 kernel: 8regs : 18675 MB/sec Jul 12 00:05:41.302279 kernel: 32regs : 19585 MB/sec Jul 12 00:05:41.306068 kernel: arm64_neon : 27052 MB/sec Jul 12 00:05:41.310056 kernel: xor: using function: arm64_neon (27052 MB/sec) Jul 12 00:05:41.359906 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 00:05:41.370515 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:05:41.385061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:05:41.420991 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jul 12 00:05:41.426450 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:05:41.444015 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 00:05:41.461914 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation Jul 12 00:05:41.490748 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:05:41.506158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:05:41.548245 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:05:41.571430 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 00:05:41.601089 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 00:05:41.611744 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:05:41.656468 kernel: hv_vmbus: Vmbus version:5.3 Jul 12 00:05:41.625915 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:05:41.645980 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:05:41.695991 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 12 00:05:41.696019 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 12 00:05:41.696031 kernel: hv_vmbus: registering driver hv_storvsc Jul 12 00:05:41.696042 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 12 00:05:41.689117 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 00:05:41.709337 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 12 00:05:41.725617 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:05:41.755961 kernel: scsi host1: storvsc_host_t Jul 12 00:05:41.756172 kernel: hv_vmbus: registering driver hid_hyperv Jul 12 00:05:41.756185 kernel: scsi host0: storvsc_host_t Jul 12 00:05:41.756207 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 12 00:05:41.725797 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:05:41.792570 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 12 00:05:41.792615 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 12 00:05:41.792769 kernel: hv_vmbus: registering driver hv_netvsc Jul 12 00:05:41.748828 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 12 00:05:41.815594 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 12 00:05:41.785235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:05:41.785555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:05:41.807404 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:05:41.845055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:05:41.875224 kernel: PTP clock support registered Jul 12 00:05:41.875252 kernel: hv_utils: Registering HyperV Utility Driver Jul 12 00:05:41.875262 kernel: hv_vmbus: registering driver hv_utils Jul 12 00:05:41.875271 kernel: hv_utils: Heartbeat IC version 3.0 Jul 12 00:05:41.868648 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:05:41.824963 kernel: hv_utils: Shutdown IC version 3.2 Jul 12 00:05:41.841004 kernel: hv_utils: TimeSync IC version 4.0 Jul 12 00:05:41.841021 kernel: hv_netvsc 00224879-84d6-0022-4879-84d600224879 eth0: VF slot 1 added Jul 12 00:05:41.841164 systemd-journald[217]: Time jumped backwards, rotating. Jul 12 00:05:41.798749 systemd-resolved[255]: Clock change detected. Flushing caches. Jul 12 00:05:41.885280 kernel: hv_vmbus: registering driver hv_pci Jul 12 00:05:41.885302 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 12 00:05:41.885503 kernel: hv_pci e1709129-ede0-4bc0-9626-b1f967dcc4c3: PCI VMBus probing: Using version 0x10004 Jul 12 00:05:41.885613 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 12 00:05:41.885626 kernel: hv_pci e1709129-ede0-4bc0-9626-b1f967dcc4c3: PCI host bridge to bus ede0:00 Jul 12 00:05:41.885706 kernel: pci_bus ede0:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 12 00:05:41.816738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:05:41.899020 kernel: pci_bus ede0:00: No busn resource found for root bus, will use [bus 00-ff] Jul 12 00:05:41.816828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:05:41.907609 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 12 00:05:41.834505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:05:41.919238 kernel: pci ede0:00:02.0: [15b3:1018] type 00 class 0x020000 Jul 12 00:05:41.922135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:05:41.966319 kernel: pci ede0:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 12 00:05:41.966370 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 12 00:05:41.966540 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 12 00:05:41.966626 kernel: pci ede0:00:02.0: enabling Extended Tags Jul 12 00:05:41.966644 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 12 00:05:41.960517 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 12 00:05:41.989428 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 12 00:05:41.989771 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 12 00:05:42.009861 kernel: pci ede0:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ede0:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jul 12 00:05:42.010048 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:42.010059 kernel: pci_bus ede0:00: busn_res: [bus 00-ff] end is updated to 00 Jul 12 00:05:42.019302 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 12 00:05:42.019481 kernel: pci ede0:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 12 00:05:42.037549 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:05:42.082452 kernel: mlx5_core ede0:00:02.0: enabling device (0000 -> 0002) Jul 12 00:05:42.089224 kernel: mlx5_core ede0:00:02.0: firmware version: 16.30.1284 Jul 12 00:05:42.296985 kernel: hv_netvsc 00224879-84d6-0022-4879-84d600224879 eth0: VF registering: eth1 Jul 12 00:05:42.297194 kernel: mlx5_core ede0:00:02.0 eth1: joined to eth0 Jul 12 00:05:42.305054 kernel: mlx5_core ede0:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 12 00:05:42.316257 kernel: mlx5_core ede0:00:02.0 enP60896s1: renamed from eth1 Jul 12 00:05:42.572264 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (488) Jul 12 00:05:42.586267 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (497) Jul 12 00:05:42.587516 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 12 00:05:42.620057 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 12 00:05:42.642963 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 12 00:05:42.650396 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 12 00:05:42.676062 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 12 00:05:42.696465 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 00:05:42.721708 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:42.729226 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:43.740716 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:43.740787 disk-uuid[607]: The operation has completed successfully. Jul 12 00:05:43.803111 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:05:43.803240 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 12 00:05:43.835344 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 00:05:43.848778 sh[693]: Success Jul 12 00:05:43.880281 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:05:44.059658 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 00:05:44.087361 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 12 00:05:44.098265 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 12 00:05:44.132987 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c Jul 12 00:05:44.133042 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:05:44.140283 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 12 00:05:44.145741 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 12 00:05:44.150421 kernel: BTRFS info (device dm-0): using free space tree Jul 12 00:05:44.468620 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 00:05:44.475300 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 00:05:44.499508 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 00:05:44.511787 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 12 00:05:44.548884 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:44.548949 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:05:44.554414 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:05:44.607127 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:05:44.614342 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:05:44.627718 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:44.633077 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 00:05:44.649846 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 12 00:05:44.674713 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:05:44.692375 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:05:44.726344 systemd-networkd[877]: lo: Link UP Jul 12 00:05:44.726354 systemd-networkd[877]: lo: Gained carrier Jul 12 00:05:44.728065 systemd-networkd[877]: Enumeration completed Jul 12 00:05:44.728181 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:05:44.737823 systemd[1]: Reached target network.target - Network. Jul 12 00:05:44.742086 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:05:44.742091 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:05:44.830234 kernel: mlx5_core ede0:00:02.0 enP60896s1: Link up Jul 12 00:05:44.871410 kernel: hv_netvsc 00224879-84d6-0022-4879-84d600224879 eth0: Data path switched to VF: enP60896s1 Jul 12 00:05:44.871686 systemd-networkd[877]: enP60896s1: Link UP Jul 12 00:05:44.872362 systemd-networkd[877]: eth0: Link UP Jul 12 00:05:44.872504 systemd-networkd[877]: eth0: Gained carrier Jul 12 00:05:44.872515 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 12 00:05:44.899787 systemd-networkd[877]: enP60896s1: Gained carrier
Jul 12 00:05:44.914257 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 12 00:05:46.004217 ignition[852]: Ignition 2.19.0
Jul 12 00:05:46.004229 ignition[852]: Stage: fetch-offline
Jul 12 00:05:46.004265 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:46.004273 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:46.004384 ignition[852]: parsed url from cmdline: ""
Jul 12 00:05:46.004387 ignition[852]: no config URL provided
Jul 12 00:05:46.004392 ignition[852]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:05:46.004399 ignition[852]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:05:46.004404 ignition[852]: failed to fetch config: resource requires networking
Jul 12 00:05:46.004573 ignition[852]: Ignition finished successfully
Jul 12 00:05:46.006263 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:05:46.025529 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 12 00:05:46.037365 systemd-networkd[877]: enP60896s1: Gained IPv6LL
Jul 12 00:05:46.037538 systemd-networkd[877]: eth0: Gained IPv6LL
Jul 12 00:05:46.052692 ignition[885]: Ignition 2.19.0
Jul 12 00:05:46.052699 ignition[885]: Stage: fetch
Jul 12 00:05:46.052898 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:46.052916 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:46.053026 ignition[885]: parsed url from cmdline: ""
Jul 12 00:05:46.053029 ignition[885]: no config URL provided
Jul 12 00:05:46.053034 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:05:46.053040 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:05:46.053069 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 12 00:05:46.160455 ignition[885]: GET result: OK
Jul 12 00:05:46.160545 ignition[885]: config has been read from IMDS userdata
Jul 12 00:05:46.160588 ignition[885]: parsing config with SHA512: 5a1dd3cb007a0ba5567be0e44f73567d9d659f77675f8b233eaac17bdbc124e77ab420c091f6159d84eb61f7345306d9743977e39e6548f189193694b67232cd
Jul 12 00:05:46.168114 unknown[885]: fetched base config from "system"
Jul 12 00:05:46.168453 unknown[885]: fetched base config from "system"
Jul 12 00:05:46.168477 unknown[885]: fetched user config from "azure"
Jul 12 00:05:46.168878 ignition[885]: fetch: fetch complete
Jul 12 00:05:46.168883 ignition[885]: fetch: fetch passed
Jul 12 00:05:46.168936 ignition[885]: Ignition finished successfully
Jul 12 00:05:46.174034 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 12 00:05:46.196542 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:05:46.214291 ignition[892]: Ignition 2.19.0
Jul 12 00:05:46.214298 ignition[892]: Stage: kargs
Jul 12 00:05:46.214462 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:46.214472 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:46.215421 ignition[892]: kargs: kargs passed
Jul 12 00:05:46.215478 ignition[892]: Ignition finished successfully
Jul 12 00:05:46.217406 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:05:46.241508 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:05:46.268631 ignition[899]: Ignition 2.19.0
Jul 12 00:05:46.268649 ignition[899]: Stage: disks
Jul 12 00:05:46.268835 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:46.268844 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:46.269796 ignition[899]: disks: disks passed
Jul 12 00:05:46.269855 ignition[899]: Ignition finished successfully
Jul 12 00:05:46.273005 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:05:46.280749 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:05:46.291661 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:05:46.304270 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:05:46.316286 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:05:46.327815 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:05:46.352501 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:05:46.514352 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 12 00:05:46.522639 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:05:46.540342 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:05:46.597227 kernel: EXT4-fs (sda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:05:46.597448 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:05:46.602752 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:05:46.705288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:05:46.717114 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:05:46.725390 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 12 00:05:46.739762 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:05:46.739801 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:05:46.756248 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:05:46.802283 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (918)
Jul 12 00:05:46.802331 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:05:46.802898 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:05:46.816765 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:05:46.828532 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:05:46.836224 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:05:46.837570 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
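The fetch stage above reads the machine config from the Azure Instance Metadata Service (IMDS) and logs a SHA512 of what it parsed. A minimal Python sketch of the same request, assuming an Azure guest where IMDS is reachable at its standard link-local address (IMDS requires the "Metadata: true" header; hashing the raw response here is only to illustrate the idea, since Ignition hashes the config it parsed):

import hashlib
import urllib.request

# Endpoint as logged by ignition[885] above; IMDS rejects requests
# that lack the Metadata header.
URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    body = resp.read()  # IMDS serves user data base64-encoded

print(hashlib.sha512(body).hexdigest())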
Jul 12 00:05:48.412748 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:05:48.472627 initrd-setup-root[950]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:05:48.481346 initrd-setup-root[957]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:05:48.495432 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:05:48.504900 coreos-metadata[920]: Jul 12 00:05:48.504 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 12 00:05:48.515586 coreos-metadata[920]: Jul 12 00:05:48.515 INFO Fetch successful Jul 12 00:05:48.522275 coreos-metadata[920]: Jul 12 00:05:48.520 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 12 00:05:48.534924 coreos-metadata[920]: Jul 12 00:05:48.534 INFO Fetch successful Jul 12 00:05:48.541072 coreos-metadata[920]: Jul 12 00:05:48.536 INFO wrote hostname ci-4081.3.4-n-ddca76aad7 to /sysroot/etc/hostname Jul 12 00:05:48.541624 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 12 00:05:52.050376 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 00:05:52.068452 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 12 00:05:52.077395 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 12 00:05:52.097252 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:52.098354 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 12 00:05:52.121398 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 12 00:05:52.128770 ignition[1037]: INFO : Ignition 2.19.0 Jul 12 00:05:52.128770 ignition[1037]: INFO : Stage: mount Jul 12 00:05:52.128770 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:52.128770 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:52.128770 ignition[1037]: INFO : mount: mount passed Jul 12 00:05:52.128770 ignition[1037]: INFO : Ignition finished successfully Jul 12 00:05:52.132755 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 12 00:05:52.154320 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 12 00:05:52.170585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 00:05:52.208226 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1050) Jul 12 00:05:52.225174 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:52.225223 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:05:52.225235 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:05:52.231221 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:05:52.232893 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
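The flatcar-metadata-hostname agent above fetches the instance name from IMDS and writes it into the new root before switch-root. A rough Python equivalent of that step, assuming the same endpoint and target path shown in the log:

import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/name"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    name = resp.read().decode().strip()  # e.g. ci-4081.3.4-n-ddca76aad7

# The agent logs "wrote hostname ... to /sysroot/etc/hostname".
with open("/sysroot/etc/hostname", "w") as f:
    f.write(name + "\n")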
Jul 12 00:05:52.258619 ignition[1067]: INFO : Ignition 2.19.0
Jul 12 00:05:52.258619 ignition[1067]: INFO : Stage: files
Jul 12 00:05:52.266918 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:52.266918 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:52.266918 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:05:52.285703 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:05:52.285703 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:05:52.349804 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:05:52.350182 unknown[1067]: wrote ssh authorized keys file for user: core
Jul 12 00:05:52.357324 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:05:52.357324 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:05:52.357324 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:05:52.357324 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 12 00:05:52.404524 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:05:52.545139 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:05:52.556174 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:05:52.632722 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 12 00:05:53.183852 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 12 00:05:54.047802 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:05:54.047802 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 12 00:05:54.096449 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:05:54.109680 ignition[1067]: INFO : files: files passed
Jul 12 00:05:54.109680 ignition[1067]: INFO : Ignition finished successfully
Jul 12 00:05:54.109988 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:05:54.144541 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:05:54.159425 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:05:54.213523 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:05:54.213629 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:05:54.318475 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:05:54.318475 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:05:54.329872 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:05:54.337902 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:05:54.345636 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:05:54.376493 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:05:54.412004 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:05:54.412116 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
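The files and units written above come from the Ignition config fetched earlier (files ops op(3)-op(a), unit ops op(b)-op(d)). As a sketch, a fragment of such a config could be assembled like this; the spec version and exact schema are assumptions for illustration, not something the log states:

import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "storage": {
        "files": [{
            # Mirrors op(3): fetch helm and write it under /opt.
            "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
            "contents": {
                "source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"
            },
        }],
    },
    "systemd": {
        # Mirrors op(b)/op(d): install and enable prepare-helm.service.
        "units": [{
            "name": "prepare-helm.service",
            "enabled": True,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",
        }],
    },
}
print(json.dumps(config, indent=2))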
Jul 12 00:05:54.420793 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 12 00:05:54.433444 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 12 00:05:54.447805 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 12 00:05:54.466474 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 12 00:05:54.499146 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:05:54.517553 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 12 00:05:54.536548 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:05:54.550248 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:05:54.557472 systemd[1]: Stopped target timers.target - Timer Units. Jul 12 00:05:54.569337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:05:54.569512 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:05:54.587058 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 12 00:05:54.600075 systemd[1]: Stopped target basic.target - Basic System. Jul 12 00:05:54.611273 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 12 00:05:54.622831 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:05:54.636349 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 12 00:05:54.649415 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 12 00:05:54.662291 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:05:54.676125 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 12 00:05:54.689801 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 12 00:05:54.702416 systemd[1]: Stopped target swap.target - Swaps. Jul 12 00:05:54.713050 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:05:54.713240 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:05:54.730342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:05:54.743168 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:05:54.756706 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 00:05:54.756818 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:05:54.773355 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:05:54.773532 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 00:05:54.793993 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:05:54.794176 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:05:54.807828 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:05:54.807972 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 00:05:54.819762 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 12 00:05:54.819908 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 12 00:05:54.854375 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jul 12 00:05:54.874838 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:05:54.875138 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:05:54.894573 ignition[1119]: INFO : Ignition 2.19.0
Jul 12 00:05:54.894573 ignition[1119]: INFO : Stage: umount
Jul 12 00:05:54.894573 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:54.894573 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:54.894573 ignition[1119]: INFO : umount: umount passed
Jul 12 00:05:54.894573 ignition[1119]: INFO : Ignition finished successfully
Jul 12 00:05:54.906067 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:05:54.915571 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:05:54.915824 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:05:54.923348 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:05:54.923521 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:05:54.943351 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:05:54.944248 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:05:54.944361 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:05:54.963385 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:05:54.963513 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:05:54.971834 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:05:54.971883 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:05:54.978428 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:05:54.978492 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:05:54.988954 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 12 00:05:54.989005 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 12 00:05:55.001434 systemd[1]: Stopped target network.target - Network.
Jul 12 00:05:55.011629 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:05:55.011690 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:05:55.027394 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:05:55.039612 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:05:55.045243 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:05:55.052889 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:05:55.065680 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:05:55.076583 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:05:55.076646 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:05:55.087653 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:05:55.087705 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:05:55.099065 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:05:55.099135 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:05:55.110749 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:05:55.110837 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:05:55.123887 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:05:55.135958 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:05:55.148259 systemd-networkd[877]: eth0: DHCPv6 lease lost
Jul 12 00:05:55.149605 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:05:55.149694 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:05:55.166657 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:05:55.166756 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:05:55.182682 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:05:55.182948 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:05:55.195948 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:05:55.196014 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:05:55.208161 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:05:55.208250 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:05:55.234425 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:05:55.245087 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:05:55.245168 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:05:55.261280 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:05:55.261334 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:05:55.272930 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:05:55.272983 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:05:55.284374 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:05:55.284425 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:05:55.296866 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:05:55.328118 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:05:55.328396 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:05:55.341221 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:05:55.341274 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:05:55.354156 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:05:55.354190 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:05:55.375599 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:05:55.375666 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:05:55.395418 kernel: hv_netvsc 00224879-84d6-0022-4879-84d600224879 eth0: Data path switched from VF: enP60896s1
Jul 12 00:05:55.395500 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:05:55.395565 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:05:55.406437 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:05:55.406501 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:05:55.444515 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:05:55.460725 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:05:55.460817 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:05:55.474589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:05:55.474662 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:55.490070 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:05:55.490229 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:05:55.504464 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:05:55.504635 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:05:55.521537 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:05:55.548570 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:05:55.568424 systemd[1]: Switching root.
Jul 12 00:05:55.691264 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:05:55.758631 systemd-journald[217]: Journal stopped
Jul 12 00:06:10.439787 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:06:10.439811 kernel: SELinux: policy capability open_perms=1
Jul 12 00:06:10.439821 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:06:10.439829 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:06:10.439838 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:06:10.439845 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:06:10.439854 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:06:10.439862 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:06:10.439870 kernel: audit: type=1403 audit(1752278760.311:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:06:10.439880 systemd[1]: Successfully loaded SELinux policy in 1.115104s.
Jul 12 00:06:10.439891 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.494ms.
Jul 12 00:06:10.439902 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:06:10.439911 systemd[1]: Detected virtualization microsoft.
Jul 12 00:06:10.439920 systemd[1]: Detected architecture arm64.
Jul 12 00:06:10.439929 systemd[1]: Detected first boot.
Jul 12 00:06:10.439940 systemd[1]: Hostname set to <ci-4081.3.4-n-ddca76aad7>.
Jul 12 00:06:10.439949 systemd[1]: Initializing machine ID from random generator.
Jul 12 00:06:10.439957 zram_generator::config[1160]: No configuration found.
Jul 12 00:06:10.439967 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:06:10.439976 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 00:06:10.439985 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 00:06:10.439994 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:06:10.440004 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 00:06:10.440014 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 00:06:10.440023 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
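Between "Journal stopped" at 00:05:55.758631 and the first kernel message captured by the new journal at 00:06:10.439787, roughly 14.7 seconds elapse across the switch-root and SELinux policy load. A small Python sketch for extracting such deltas from lines in this format (the year is not logged, so one is assumed for the arithmetic):

from datetime import datetime

def ts(line: str) -> datetime:
    # Lines begin "Jul 12 00:06:10.439787 ..."
    return datetime.strptime("2025 " + " ".join(line.split()[:3]),
                             "%Y %b %d %H:%M:%S.%f")

a = ts("Jul 12 00:05:55.758631 systemd-journald[217]: Journal stopped")
b = ts("Jul 12 00:06:10.439787 kernel: SELinux: policy capability network_peer_controls=1")
print((b - a).total_seconds())  # 14.681156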
Jul 12 00:06:10.440032 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 00:06:10.440041 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 00:06:10.440050 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 00:06:10.440060 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 00:06:10.440071 systemd[1]: Created slice user.slice - User and Session Slice. Jul 12 00:06:10.440080 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:06:10.440089 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:06:10.440099 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 00:06:10.440108 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 00:06:10.440118 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 00:06:10.440127 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:06:10.440136 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 12 00:06:10.440147 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:06:10.440156 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 00:06:10.440165 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 00:06:10.440176 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 00:06:10.440185 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:06:10.440195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:06:10.440227 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:06:10.440238 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:06:10.440250 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:06:10.440259 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:06:10.440269 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:06:10.440278 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:06:10.440287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:06:10.440297 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:06:10.440308 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 00:06:10.440318 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 00:06:10.440328 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:06:10.440338 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 00:06:10.440347 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:06:10.440357 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:06:10.440366 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jul 12 00:06:10.440378 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:06:10.440388 systemd[1]: Reached target machines.target - Containers. Jul 12 00:06:10.440397 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:06:10.440407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:06:10.440416 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:06:10.440426 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:06:10.440435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:06:10.440445 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:06:10.440455 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:06:10.440465 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:06:10.440474 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:06:10.440484 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:06:10.440493 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:06:10.440503 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 00:06:10.440512 kernel: fuse: init (API version 7.39) Jul 12 00:06:10.440521 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:06:10.440533 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:06:10.440542 kernel: loop: module loaded Jul 12 00:06:10.440551 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:06:10.440574 systemd-journald[1263]: Collecting audit messages is disabled. Jul 12 00:06:10.440596 kernel: ACPI: bus type drm_connector registered Jul 12 00:06:10.440605 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:06:10.440615 systemd-journald[1263]: Journal started Jul 12 00:06:10.440635 systemd-journald[1263]: Runtime Journal (/run/log/journal/ded8a3de170f44ebbe199d6a0e7ac3ac) is 8.0M, max 78.5M, 70.5M free. Jul 12 00:06:09.413378 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:06:09.570453 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 12 00:06:09.570813 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:06:09.571108 systemd[1]: systemd-journald.service: Consumed 3.427s CPU time. Jul 12 00:06:10.458862 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:06:10.480400 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:06:10.494879 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:06:10.508846 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:06:10.508905 systemd[1]: Stopped verity-setup.service. Jul 12 00:06:10.525841 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:06:10.526751 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jul 12 00:06:10.532708 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:06:10.538695 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 00:06:10.544127 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:06:10.550537 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:06:10.556961 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 00:06:10.563488 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:06:10.569985 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:06:10.576992 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:06:10.577127 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:06:10.584264 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:06:10.584396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:06:10.591577 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:06:10.591699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:06:10.598430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:06:10.598556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:06:10.606196 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:06:10.606352 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 00:06:10.613026 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:06:10.615242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:06:10.623254 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:06:10.630126 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:06:10.640262 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:06:10.654252 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:06:10.672757 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:06:10.688310 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:06:10.695723 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 00:06:10.703249 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:06:10.703289 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:06:10.709725 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 12 00:06:10.717834 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:06:10.725184 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:06:10.730887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:06:10.737813 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 00:06:10.744888 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jul 12 00:06:10.751214 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:06:10.753539 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 00:06:10.759918 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:06:10.761001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:06:10.768423 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 00:06:10.780427 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 00:06:10.793403 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 12 00:06:10.794357 systemd-journald[1263]: Time spent on flushing to /var/log/journal/ded8a3de170f44ebbe199d6a0e7ac3ac is 91.516ms for 895 entries.
Jul 12 00:06:10.794357 systemd-journald[1263]: System Journal (/var/log/journal/ded8a3de170f44ebbe199d6a0e7ac3ac) is 11.8M, max 2.6G, 2.6G free.
Jul 12 00:06:10.809972 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 00:06:10.818441 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 00:06:10.826449 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 00:06:10.836809 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 00:06:10.865494 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 12 00:06:10.872574 udevadm[1297]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 12 00:06:10.898271 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:06:11.030823 systemd-journald[1263]: Received client request to flush runtime journal.
Jul 12 00:06:11.030940 systemd-journald[1263]: /var/log/journal/ded8a3de170f44ebbe199d6a0e7ac3ac/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jul 12 00:06:11.030968 systemd-journald[1263]: Rotating system journal.
Jul 12 00:06:11.030994 kernel: loop0: detected capacity change from 0 to 114432
Jul 12 00:06:11.032197 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 00:06:11.195449 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:06:11.197522 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 12 00:06:11.240123 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 00:06:11.257376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:06:11.273234 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:06:11.347235 kernel: loop1: detected capacity change from 0 to 114328
Jul 12 00:06:11.355131 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Jul 12 00:06:11.355154 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Jul 12 00:06:11.359361 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:06:11.652250 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
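systemd-journald reports 91.516ms spent flushing 895 entries above, which works out to about 0.1ms per entry:

# Figures taken from the systemd-journald line above.
flush_ms, entries = 91.516, 895
print(f"{flush_ms / entries:.3f} ms per entry")  # 0.102 ms per entry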
Jul 12 00:06:11.823325 kernel: loop2: detected capacity change from 0 to 31320 Jul 12 00:06:12.135250 kernel: loop3: detected capacity change from 0 to 207008 Jul 12 00:06:12.158005 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:06:12.172370 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:06:12.186235 kernel: loop4: detected capacity change from 0 to 114432 Jul 12 00:06:12.197260 kernel: loop5: detected capacity change from 0 to 114328 Jul 12 00:06:12.199793 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Jul 12 00:06:12.206286 kernel: loop6: detected capacity change from 0 to 31320 Jul 12 00:06:12.216233 kernel: loop7: detected capacity change from 0 to 207008 Jul 12 00:06:12.221915 (sd-merge)[1324]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 12 00:06:12.222354 (sd-merge)[1324]: Merged extensions into '/usr'. Jul 12 00:06:12.225824 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:06:12.225839 systemd[1]: Reloading... Jul 12 00:06:12.289272 zram_generator::config[1353]: No configuration found. Jul 12 00:06:12.418136 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:06:12.474397 systemd[1]: Reloading finished in 248 ms. Jul 12 00:06:12.513385 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:06:12.525385 systemd[1]: Starting ensure-sysext.service... Jul 12 00:06:12.530499 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:06:12.552665 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:06:12.553318 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:06:12.554038 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:06:12.554375 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Jul 12 00:06:12.554423 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Jul 12 00:06:12.555520 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:06:12.558000 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:06:12.558185 systemd-tmpfiles[1406]: Skipping /boot Jul 12 00:06:12.569771 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:06:12.569906 systemd-tmpfiles[1406]: Skipping /boot Jul 12 00:06:12.573162 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:06:12.586366 systemd[1]: Reloading requested from client PID 1405 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:06:12.586386 systemd[1]: Reloading... Jul 12 00:06:12.703240 zram_generator::config[1452]: No configuration found. 
Jul 12 00:06:12.805838 kernel: hv_vmbus: registering driver hv_balloon Jul 12 00:06:12.805930 kernel: hv_vmbus: registering driver hyperv_fb Jul 12 00:06:12.805949 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 12 00:06:12.815191 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 12 00:06:12.828667 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 12 00:06:12.828764 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 12 00:06:12.828785 kernel: Console: switching to colour dummy device 80x25 Jul 12 00:06:12.843285 kernel: mousedev: PS/2 mouse device common for all mice Jul 12 00:06:12.892734 kernel: Console: switching to colour frame buffer device 128x48 Jul 12 00:06:12.913023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:06:12.969201 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1422) Jul 12 00:06:12.981351 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 12 00:06:12.981559 systemd[1]: Reloading finished in 394 ms. Jul 12 00:06:12.998529 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:06:13.045959 systemd[1]: Finished ensure-sysext.service. Jul 12 00:06:13.068923 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 12 00:06:13.076535 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 12 00:06:13.093384 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:06:13.101023 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:06:13.107524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:06:13.109434 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:06:13.117756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:06:13.125960 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:06:13.140431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:06:13.149926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:06:13.156478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:06:13.158848 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:06:13.167794 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:06:13.182409 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:06:13.191418 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:06:13.205397 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:06:13.216709 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:06:13.224990 lvm[1573]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
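The hyperv_fb lines above report a 1024x768 screen at 32-bit color depth inside an 8388608-byte frame buffer; a quick check that the visible screen fits:

# Figures taken from the hyperv_fb lines above.
width, height, depth_bits, fb_bytes = 1024, 768, 32, 8388608
needed = width * height * depth_bits // 8
print(needed, fb_bytes >= needed)  # 3145728 True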
Jul 12 00:06:13.227612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:06:13.239040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:06:13.239347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:06:13.248569 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:06:13.248715 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:06:13.255974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:06:13.257264 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:06:13.268389 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:06:13.268553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:06:13.275698 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 12 00:06:13.286267 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 12 00:06:13.294453 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 12 00:06:13.303554 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 12 00:06:13.318259 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:06:13.333802 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 12 00:06:13.343103 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:06:13.343565 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:06:13.347880 augenrules[1611]: No rules
Jul 12 00:06:13.354290 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 12 00:06:13.360023 lvm[1613]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:06:13.364097 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 12 00:06:13.387249 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 12 00:06:13.426794 systemd-networkd[1409]: lo: Link UP
Jul 12 00:06:13.426808 systemd-networkd[1409]: lo: Gained carrier
Jul 12 00:06:13.429167 systemd-networkd[1409]: Enumeration completed
Jul 12 00:06:13.429347 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:06:13.430874 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:06:13.430880 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:06:13.442422 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 12 00:06:13.490230 kernel: mlx5_core ede0:00:02.0 enP60896s1: Link up
Jul 12 00:06:13.516976 systemd-resolved[1588]: Positive Trust Anchors:
Jul 12 00:06:13.516990 systemd-resolved[1588]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:06:13.517022 systemd-resolved[1588]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:06:13.517490 kernel: hv_netvsc 00224879-84d6-0022-4879-84d600224879 eth0: Data path switched to VF: enP60896s1
Jul 12 00:06:13.518151 systemd-networkd[1409]: enP60896s1: Link UP
Jul 12 00:06:13.518276 systemd-networkd[1409]: eth0: Link UP
Jul 12 00:06:13.518285 systemd-networkd[1409]: eth0: Gained carrier
Jul 12 00:06:13.518300 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:06:13.523496 systemd-networkd[1409]: enP60896s1: Gained carrier
Jul 12 00:06:13.532273 systemd-networkd[1409]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 12 00:06:13.550804 systemd-resolved[1588]: Using system hostname 'ci-4081.3.4-n-ddca76aad7'.
Jul 12 00:06:13.552382 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:06:13.558538 systemd[1]: Reached target network.target - Network.
Jul 12 00:06:13.563728 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:06:13.913629 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:06:14.036676 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 12 00:06:14.043881 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:06:14.832402 systemd-networkd[1409]: eth0: Gained IPv6LL
Jul 12 00:06:14.835311 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 12 00:06:14.842610 systemd[1]: Reached target network-online.target - Network is Online.
Jul 12 00:06:14.960455 systemd-networkd[1409]: enP60896s1: Gained IPv6LL
Jul 12 00:06:16.369242 ldconfig[1289]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:06:16.392380 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 00:06:16.402448 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 12 00:06:16.430236 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 12 00:06:16.436343 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:06:16.442096 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 12 00:06:16.448565 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 12 00:06:16.455473 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 12 00:06:16.461912 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 12 00:06:16.468566 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
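The positive trust anchor systemd-resolved prints above is the DNS root zone's DNSSEC DS record: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). A small Python sketch that splits such a record into its fields:

# The record exactly as logged by systemd-resolved above.
anchor = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, rr_class, rr_type, key_tag, algorithm, digest_type, digest = anchor.split()
assert (rr_class, rr_type) == ("IN", "DS")
print(f"zone={owner!r} key_tag={key_tag} alg={algorithm} "
      f"digest_type={digest_type} digest={digest}")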
Jul 12 00:06:16.475728 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 00:06:16.475765 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:06:16.480536 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:06:16.486156 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 12 00:06:16.493474 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 12 00:06:16.501839 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 12 00:06:16.508438 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 12 00:06:16.514363 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:06:16.519582 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:06:16.524477 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:06:16.524504 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:06:16.533305 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 12 00:06:16.542344 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 12 00:06:16.553416 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 12 00:06:16.560554 (chronyd)[1634]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 12 00:06:16.564869 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 12 00:06:16.570873 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 12 00:06:16.578414 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 12 00:06:16.584704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 12 00:06:16.584816 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jul 12 00:06:16.586367 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 12 00:06:16.600022 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 12 00:06:16.600171 KVP[1642]: KVP starting; pid is:1642
Jul 12 00:06:16.600963 jq[1640]: false
Jul 12 00:06:16.607357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:06:16.617124 KVP[1642]: KVP LIC Version: 3.1
Jul 12 00:06:16.617282 kernel: hv_utils: KVP IC version 4.0
Jul 12 00:06:16.618397 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 12 00:06:16.625261 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 12 00:06:16.634374 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 12 00:06:16.639047 extend-filesystems[1641]: Found loop4 Jul 12 00:06:16.652953 extend-filesystems[1641]: Found loop5 Jul 12 00:06:16.652953 extend-filesystems[1641]: Found loop6 Jul 12 00:06:16.652953 extend-filesystems[1641]: Found loop7 Jul 12 00:06:16.652953 extend-filesystems[1641]: Found sda Jul 12 00:06:16.652953 extend-filesystems[1641]: Found sda1 Jul 12 00:06:16.652953 extend-filesystems[1641]: Found sda2 Jul 12 00:06:16.652953 extend-filesystems[1641]: Found sda3 Jul 12 00:06:16.652953 extend-filesystems[1641]: Found usr Jul 12 00:06:16.652953 extend-filesystems[1641]: Found sda4 Jul 12 00:06:16.652953 extend-filesystems[1641]: Found sda6 Jul 12 00:06:16.652953 extend-filesystems[1641]: Found sda7 Jul 12 00:06:16.652953 extend-filesystems[1641]: Found sda9 Jul 12 00:06:16.652953 extend-filesystems[1641]: Checking size of /dev/sda9 Jul 12 00:06:16.651684 chronyd[1652]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 12 00:06:16.786677 extend-filesystems[1641]: Old size kept for /dev/sda9 Jul 12 00:06:16.786677 extend-filesystems[1641]: Found sr0 Jul 12 00:06:16.653583 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:06:16.658514 chronyd[1652]: Timezone right/UTC failed leap second check, ignoring Jul 12 00:06:16.681496 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:06:16.658757 chronyd[1652]: Loaded seccomp filter (level 2) Jul 12 00:06:16.702391 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:06:16.730621 dbus-daemon[1637]: [system] SELinux support is enabled Jul 12 00:06:16.717691 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:06:16.718173 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:06:16.731116 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:06:16.808361 jq[1667]: true Jul 12 00:06:16.755365 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:06:16.768670 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:06:16.780723 systemd[1]: Started chronyd.service - NTP client/server. Jul 12 00:06:16.806656 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:06:16.806859 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:06:16.807127 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:06:16.809314 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:06:16.823946 coreos-metadata[1636]: Jul 12 00:06:16.823 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 12 00:06:16.849960 coreos-metadata[1636]: Jul 12 00:06:16.845 INFO Fetch successful Jul 12 00:06:16.849960 coreos-metadata[1636]: Jul 12 00:06:16.845 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 12 00:06:16.850027 update_engine[1662]: I20250712 00:06:16.835148 1662 main.cc:92] Flatcar Update Engine starting Jul 12 00:06:16.850027 update_engine[1662]: I20250712 00:06:16.847980 1662 update_check_scheduler.cc:74] Next update check in 11m8s Jul 12 00:06:16.826629 systemd[1]: motdgen.service: Deactivated successfully. 
Jul 12 00:06:16.851402 coreos-metadata[1636]: Jul 12 00:06:16.850 INFO Fetch successful Jul 12 00:06:16.851402 coreos-metadata[1636]: Jul 12 00:06:16.850 INFO Fetching http://168.63.129.16/machine/14ac27a5-3af7-4de3-8c4d-7115f0087017/4ec746e7%2D1b47%2D4e6e%2Dbb67%2D9705f2fb9488.%5Fci%2D4081.3.4%2Dn%2Dddca76aad7?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 12 00:06:16.827327 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:06:16.835844 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:06:16.852757 coreos-metadata[1636]: Jul 12 00:06:16.852 INFO Fetch successful Jul 12 00:06:16.852757 coreos-metadata[1636]: Jul 12 00:06:16.852 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 12 00:06:16.861692 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:06:16.863273 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:06:16.874174 coreos-metadata[1636]: Jul 12 00:06:16.873 INFO Fetch successful Jul 12 00:06:16.891834 jq[1692]: true Jul 12 00:06:16.904756 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:06:16.904802 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:06:16.913968 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:06:16.913994 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:06:16.921718 (ntainerd)[1693]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:06:16.921911 systemd-logind[1659]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 12 00:06:16.922111 systemd-logind[1659]: New seat seat0. Jul 12 00:06:16.926637 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:06:16.936740 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:06:16.950892 tar[1689]: linux-arm64/LICENSE Jul 12 00:06:16.983460 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1684) Jul 12 00:06:16.977453 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:06:16.983577 tar[1689]: linux-arm64/helm Jul 12 00:06:16.987642 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 12 00:06:17.004161 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:06:17.141031 bash[1747]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:06:17.144537 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:06:17.153753 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 12 00:06:17.227332 locksmithd[1720]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:06:17.544282 containerd[1693]: time="2025-07-12T00:06:17.542827640Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:06:17.623602 containerd[1693]: time="2025-07-12T00:06:17.623514320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:17.629851 containerd[1693]: time="2025-07-12T00:06:17.627748160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:17.629851 containerd[1693]: time="2025-07-12T00:06:17.627795360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:06:17.629851 containerd[1693]: time="2025-07-12T00:06:17.627815240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:06:17.629851 containerd[1693]: time="2025-07-12T00:06:17.628550000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 12 00:06:17.629851 containerd[1693]: time="2025-07-12T00:06:17.628575000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:17.629851 containerd[1693]: time="2025-07-12T00:06:17.629141840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:17.629851 containerd[1693]: time="2025-07-12T00:06:17.629162400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:17.629851 containerd[1693]: time="2025-07-12T00:06:17.629425600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:17.629851 containerd[1693]: time="2025-07-12T00:06:17.629444160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:17.629851 containerd[1693]: time="2025-07-12T00:06:17.629457840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:17.629851 containerd[1693]: time="2025-07-12T00:06:17.629467200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:17.630335 containerd[1693]: time="2025-07-12T00:06:17.630306800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:17.630538 containerd[1693]: time="2025-07-12T00:06:17.630515000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:17.630655 containerd[1693]: time="2025-07-12T00:06:17.630631960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:17.630655 containerd[1693]: time="2025-07-12T00:06:17.630650920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:06:17.630746 containerd[1693]: time="2025-07-12T00:06:17.630726240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:06:17.630795 containerd[1693]: time="2025-07-12T00:06:17.630776520Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:06:17.656822 containerd[1693]: time="2025-07-12T00:06:17.656777760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:06:17.657326 containerd[1693]: time="2025-07-12T00:06:17.657302640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:06:17.657376 containerd[1693]: time="2025-07-12T00:06:17.657335440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 12 00:06:17.657376 containerd[1693]: time="2025-07-12T00:06:17.657353840Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 12 00:06:17.657376 containerd[1693]: time="2025-07-12T00:06:17.657369040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:06:17.657545 containerd[1693]: time="2025-07-12T00:06:17.657518520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.657805400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.657958800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.657976680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.657991280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.658007200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.658020440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.658035440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.658050760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.658066520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.658079840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.658116960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.658130520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.658152160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.658803 containerd[1693]: time="2025-07-12T00:06:17.658172200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658185720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658199640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658248080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658264640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658277600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658291560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658305520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658322920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658334520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658346520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658360640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658376720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658398640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658411960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 12 00:06:17.659166 containerd[1693]: time="2025-07-12T00:06:17.658422680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:06:17.659479 containerd[1693]: time="2025-07-12T00:06:17.658479560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:06:17.659479 containerd[1693]: time="2025-07-12T00:06:17.658497680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:06:17.659479 containerd[1693]: time="2025-07-12T00:06:17.658509000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:06:17.659479 containerd[1693]: time="2025-07-12T00:06:17.658521240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:06:17.659479 containerd[1693]: time="2025-07-12T00:06:17.658531760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659479 containerd[1693]: time="2025-07-12T00:06:17.658544840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 12 00:06:17.659479 containerd[1693]: time="2025-07-12T00:06:17.658555160Z" level=info msg="NRI interface is disabled by configuration." Jul 12 00:06:17.659479 containerd[1693]: time="2025-07-12T00:06:17.658577760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:06:17.659992 containerd[1693]: time="2025-07-12T00:06:17.658868800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:06:17.659992 containerd[1693]: time="2025-07-12T00:06:17.658925520Z" level=info msg="Connect containerd service" Jul 12 00:06:17.659992 containerd[1693]: time="2025-07-12T00:06:17.658951680Z" level=info msg="using legacy CRI server" Jul 12 00:06:17.659992 containerd[1693]: time="2025-07-12T00:06:17.658958160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:06:17.659992 containerd[1693]: time="2025-07-12T00:06:17.659061800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:06:17.660920 sshd_keygen[1677]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:06:17.666003 containerd[1693]: time="2025-07-12T00:06:17.664869920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:06:17.666003 containerd[1693]: time="2025-07-12T00:06:17.665264040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:06:17.666003 containerd[1693]: time="2025-07-12T00:06:17.665305960Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:06:17.666003 containerd[1693]: time="2025-07-12T00:06:17.665340560Z" level=info msg="Start subscribing containerd event" Jul 12 00:06:17.666003 containerd[1693]: time="2025-07-12T00:06:17.665374560Z" level=info msg="Start recovering state" Jul 12 00:06:17.666003 containerd[1693]: time="2025-07-12T00:06:17.665440080Z" level=info msg="Start event monitor" Jul 12 00:06:17.666003 containerd[1693]: time="2025-07-12T00:06:17.665451000Z" level=info msg="Start snapshots syncer" Jul 12 00:06:17.666003 containerd[1693]: time="2025-07-12T00:06:17.665459840Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:06:17.666003 containerd[1693]: time="2025-07-12T00:06:17.665474640Z" level=info msg="Start streaming server" Jul 12 00:06:17.665636 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:06:17.675613 containerd[1693]: time="2025-07-12T00:06:17.674420280Z" level=info msg="containerd successfully booted in 0.132572s" Jul 12 00:06:17.700805 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:06:17.712446 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:06:17.725943 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 12 00:06:17.741726 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:06:17.741928 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jul 12 00:06:17.762300 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:06:17.787068 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:06:17.800434 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 12 00:06:17.811429 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:06:17.823594 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 12 00:06:17.829962 tar[1689]: linux-arm64/README.md Jul 12 00:06:17.830891 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:06:17.846918 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:06:17.982892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:17.989543 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:06:17.989894 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:18.001278 systemd[1]: Startup finished in 729ms (kernel) + 19.506s (initrd) + 18.803s (userspace) = 39.040s. Jul 12 00:06:18.247925 login[1792]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:18.249561 login[1793]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:18.256854 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:06:18.264304 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:06:18.267169 systemd-logind[1659]: New session 2 of user core. Jul 12 00:06:18.271400 systemd-logind[1659]: New session 1 of user core. Jul 12 00:06:18.281478 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:06:18.293637 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:06:18.297940 (systemd)[1814]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:06:18.444521 kubelet[1802]: E0712 00:06:18.444456 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:18.447550 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:18.447682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:18.454008 systemd[1814]: Queued start job for default target default.target. Jul 12 00:06:18.464468 systemd[1814]: Created slice app.slice - User Application Slice. Jul 12 00:06:18.464801 systemd[1814]: Reached target paths.target - Paths. Jul 12 00:06:18.464817 systemd[1814]: Reached target timers.target - Timers. Jul 12 00:06:18.466190 systemd[1814]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:06:18.477514 systemd[1814]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:06:18.477628 systemd[1814]: Reached target sockets.target - Sockets. Jul 12 00:06:18.477640 systemd[1814]: Reached target basic.target - Basic System. Jul 12 00:06:18.477679 systemd[1814]: Reached target default.target - Main User Target. Jul 12 00:06:18.477706 systemd[1814]: Startup finished in 170ms. 
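The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written during kubeadm init/join, so these failures are expected until the node joins a cluster. A minimal sketch of the KubeletConfiguration the unit is looking for (field values are illustrative, not taken from this host):

    # /var/lib/kubelet/config.yaml (sketch; normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd        # matches the SystemdCgroup:true runc option in the containerd config above
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10               # hypothetical cluster DNS address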
Jul 12 00:06:18.477833 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:06:18.485416 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:06:18.486136 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 00:06:19.390474 waagent[1790]: 2025-07-12T00:06:19.390381Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 12 00:06:19.396076 waagent[1790]: 2025-07-12T00:06:19.396008Z INFO Daemon Daemon OS: flatcar 4081.3.4 Jul 12 00:06:19.400826 waagent[1790]: 2025-07-12T00:06:19.400771Z INFO Daemon Daemon Python: 3.11.9 Jul 12 00:06:19.408226 waagent[1790]: 2025-07-12T00:06:19.407316Z INFO Daemon Daemon Run daemon Jul 12 00:06:19.411535 waagent[1790]: 2025-07-12T00:06:19.411446Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.4' Jul 12 00:06:19.425213 waagent[1790]: 2025-07-12T00:06:19.420331Z INFO Daemon Daemon Using waagent for provisioning Jul 12 00:06:19.425707 waagent[1790]: 2025-07-12T00:06:19.425666Z INFO Daemon Daemon Activate resource disk Jul 12 00:06:19.430420 waagent[1790]: 2025-07-12T00:06:19.430365Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 12 00:06:19.441711 waagent[1790]: 2025-07-12T00:06:19.441648Z INFO Daemon Daemon Found device: None Jul 12 00:06:19.446012 waagent[1790]: 2025-07-12T00:06:19.445963Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 12 00:06:19.455377 waagent[1790]: 2025-07-12T00:06:19.455314Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 12 00:06:19.469230 waagent[1790]: 2025-07-12T00:06:19.469152Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 12 00:06:19.475011 waagent[1790]: 2025-07-12T00:06:19.474955Z INFO Daemon Daemon Running default provisioning handler Jul 12 00:06:19.487079 waagent[1790]: 2025-07-12T00:06:19.487006Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 12 00:06:19.500522 waagent[1790]: 2025-07-12T00:06:19.500458Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 12 00:06:19.509927 waagent[1790]: 2025-07-12T00:06:19.509859Z INFO Daemon Daemon cloud-init is enabled: False Jul 12 00:06:19.515074 waagent[1790]: 2025-07-12T00:06:19.515007Z INFO Daemon Daemon Copying ovf-env.xml Jul 12 00:06:19.840056 waagent[1790]: 2025-07-12T00:06:19.839960Z INFO Daemon Daemon Successfully mounted dvd Jul 12 00:06:19.854642 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 12 00:06:19.856772 waagent[1790]: 2025-07-12T00:06:19.856701Z INFO Daemon Daemon Detect protocol endpoint Jul 12 00:06:19.861757 waagent[1790]: 2025-07-12T00:06:19.861693Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 12 00:06:19.867575 waagent[1790]: 2025-07-12T00:06:19.867515Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 12 00:06:19.874075 waagent[1790]: 2025-07-12T00:06:19.874016Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 12 00:06:19.879668 waagent[1790]: 2025-07-12T00:06:19.879612Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 12 00:06:19.884927 waagent[1790]: 2025-07-12T00:06:19.884871Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 12 00:06:19.915659 waagent[1790]: 2025-07-12T00:06:19.915615Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 12 00:06:19.922100 waagent[1790]: 2025-07-12T00:06:19.922068Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 12 00:06:19.927300 waagent[1790]: 2025-07-12T00:06:19.927247Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 12 00:06:20.235246 waagent[1790]: 2025-07-12T00:06:20.235085Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 12 00:06:20.242521 waagent[1790]: 2025-07-12T00:06:20.242448Z INFO Daemon Daemon Forcing an update of the goal state. Jul 12 00:06:20.251917 waagent[1790]: 2025-07-12T00:06:20.251862Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 12 00:06:20.274550 waagent[1790]: 2025-07-12T00:06:20.274505Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 12 00:06:20.280720 waagent[1790]: 2025-07-12T00:06:20.280673Z INFO Daemon Jul 12 00:06:20.283600 waagent[1790]: 2025-07-12T00:06:20.283549Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0edb3487-9fb4-4eab-9df1-0113e93441ca eTag: 11943830164329829282 source: Fabric] Jul 12 00:06:20.296495 waagent[1790]: 2025-07-12T00:06:20.296447Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 12 00:06:20.303675 waagent[1790]: 2025-07-12T00:06:20.303628Z INFO Daemon Jul 12 00:06:20.306614 waagent[1790]: 2025-07-12T00:06:20.306566Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 12 00:06:20.318009 waagent[1790]: 2025-07-12T00:06:20.317970Z INFO Daemon Daemon Downloading artifacts profile blob Jul 12 00:06:20.406064 waagent[1790]: 2025-07-12T00:06:20.405968Z INFO Daemon Downloaded certificate {'thumbprint': '4F0122C119BA5C33BA350F937F901BD5F518B5E5', 'hasPrivateKey': True} Jul 12 00:06:20.415778 waagent[1790]: 2025-07-12T00:06:20.415723Z INFO Daemon Downloaded certificate {'thumbprint': '218F5D8ADFBF4CC9849AB54EC70577F16AB5B6B5', 'hasPrivateKey': False} Jul 12 00:06:20.425532 waagent[1790]: 2025-07-12T00:06:20.425476Z INFO Daemon Fetch goal state completed Jul 12 00:06:20.436838 waagent[1790]: 2025-07-12T00:06:20.436783Z INFO Daemon Daemon Starting provisioning Jul 12 00:06:20.441748 waagent[1790]: 2025-07-12T00:06:20.441675Z INFO Daemon Daemon Handle ovf-env.xml. Jul 12 00:06:20.446493 waagent[1790]: 2025-07-12T00:06:20.446438Z INFO Daemon Daemon Set hostname [ci-4081.3.4-n-ddca76aad7] Jul 12 00:06:20.467239 waagent[1790]: 2025-07-12T00:06:20.466602Z INFO Daemon Daemon Publish hostname [ci-4081.3.4-n-ddca76aad7] Jul 12 00:06:20.473333 waagent[1790]: 2025-07-12T00:06:20.473266Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 12 00:06:20.479751 waagent[1790]: 2025-07-12T00:06:20.479691Z INFO Daemon Daemon Primary interface is [eth0] Jul 12 00:06:20.509457 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:06:20.509466 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
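Before speaking the WireServer protocol, waagent verifies that the Azure wire server (168.63.129.16) is reachable, as the "Test for route" / "Route ... exists" entries above show. The equivalent manual check on such a host would be something like:

    # Confirm a route to the Azure wire server exists (expected via the DHCP gateway)
    ip route get 168.63.129.16
    # e.g. 168.63.129.16 via 10.200.20.1 dev eth0 src 10.200.20.43  (illustrative output)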
Jul 12 00:06:20.509516 systemd-networkd[1409]: eth0: DHCP lease lost Jul 12 00:06:20.511357 waagent[1790]: 2025-07-12T00:06:20.510591Z INFO Daemon Daemon Create user account if not exists Jul 12 00:06:20.516175 waagent[1790]: 2025-07-12T00:06:20.516111Z INFO Daemon Daemon User core already exists, skip useradd Jul 12 00:06:20.521792 waagent[1790]: 2025-07-12T00:06:20.521724Z INFO Daemon Daemon Configure sudoer Jul 12 00:06:20.522309 systemd-networkd[1409]: eth0: DHCPv6 lease lost Jul 12 00:06:20.526702 waagent[1790]: 2025-07-12T00:06:20.526627Z INFO Daemon Daemon Configure sshd Jul 12 00:06:20.531293 waagent[1790]: 2025-07-12T00:06:20.531227Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 12 00:06:20.544509 waagent[1790]: 2025-07-12T00:06:20.544288Z INFO Daemon Daemon Deploy ssh public key. Jul 12 00:06:20.558312 systemd-networkd[1409]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 12 00:06:21.634228 waagent[1790]: 2025-07-12T00:06:21.633841Z INFO Daemon Daemon Provisioning complete Jul 12 00:06:21.651810 waagent[1790]: 2025-07-12T00:06:21.651764Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 12 00:06:21.658227 waagent[1790]: 2025-07-12T00:06:21.658155Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 12 00:06:21.667440 waagent[1790]: 2025-07-12T00:06:21.667380Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 12 00:06:21.798744 waagent[1870]: 2025-07-12T00:06:21.798667Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 12 00:06:21.799716 waagent[1870]: 2025-07-12T00:06:21.799168Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.4 Jul 12 00:06:21.799716 waagent[1870]: 2025-07-12T00:06:21.799270Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 12 00:06:21.846547 waagent[1870]: 2025-07-12T00:06:21.846455Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 12 00:06:21.846740 waagent[1870]: 2025-07-12T00:06:21.846701Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:06:21.846805 waagent[1870]: 2025-07-12T00:06:21.846775Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:06:21.854977 waagent[1870]: 2025-07-12T00:06:21.854908Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 12 00:06:21.870528 waagent[1870]: 2025-07-12T00:06:21.870480Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 12 00:06:21.871073 waagent[1870]: 2025-07-12T00:06:21.871025Z INFO ExtHandler Jul 12 00:06:21.871140 waagent[1870]: 2025-07-12T00:06:21.871110Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 43999efa-e393-4b34-a095-ee3cc0c2b2bd eTag: 11943830164329829282 source: Fabric] Jul 12 00:06:21.871453 waagent[1870]: 2025-07-12T00:06:21.871411Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
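The "Configure sshd" step above reports adding a snippet that disables password-based SSH authentication and enables client probing to keep connections alive. The exact snippet is not shown in the log; a sketch of what such a drop-in typically contains (path and values illustrative):

    # /etc/ssh/sshd_config.d/waagent.conf (hypothetical path)
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    ClientAliveInterval 180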
Jul 12 00:06:21.872019 waagent[1870]: 2025-07-12T00:06:21.871975Z INFO ExtHandler Jul 12 00:06:21.872079 waagent[1870]: 2025-07-12T00:06:21.872052Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 12 00:06:21.876264 waagent[1870]: 2025-07-12T00:06:21.876224Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 12 00:06:21.979284 waagent[1870]: 2025-07-12T00:06:21.977360Z INFO ExtHandler Downloaded certificate {'thumbprint': '4F0122C119BA5C33BA350F937F901BD5F518B5E5', 'hasPrivateKey': True} Jul 12 00:06:21.979284 waagent[1870]: 2025-07-12T00:06:21.977916Z INFO ExtHandler Downloaded certificate {'thumbprint': '218F5D8ADFBF4CC9849AB54EC70577F16AB5B6B5', 'hasPrivateKey': False} Jul 12 00:06:21.979284 waagent[1870]: 2025-07-12T00:06:21.978344Z INFO ExtHandler Fetch goal state completed Jul 12 00:06:21.995069 waagent[1870]: 2025-07-12T00:06:21.994999Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1870 Jul 12 00:06:21.995363 waagent[1870]: 2025-07-12T00:06:21.995328Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 12 00:06:21.997141 waagent[1870]: 2025-07-12T00:06:21.997099Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.4', '', 'Flatcar Container Linux by Kinvolk'] Jul 12 00:06:21.998156 waagent[1870]: 2025-07-12T00:06:21.998110Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 12 00:06:22.175316 waagent[1870]: 2025-07-12T00:06:22.175278Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 12 00:06:22.175638 waagent[1870]: 2025-07-12T00:06:22.175601Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 12 00:06:22.181557 waagent[1870]: 2025-07-12T00:06:22.181522Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 12 00:06:22.188270 systemd[1]: Reloading requested from client PID 1885 ('systemctl') (unit waagent.service)... Jul 12 00:06:22.188286 systemd[1]: Reloading... Jul 12 00:06:22.261236 zram_generator::config[1916]: No configuration found. Jul 12 00:06:22.382641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:06:22.462407 systemd[1]: Reloading finished in 273 ms. Jul 12 00:06:22.485282 waagent[1870]: 2025-07-12T00:06:22.483840Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 12 00:06:22.490441 systemd[1]: Reloading requested from client PID 1973 ('systemctl') (unit waagent.service)... Jul 12 00:06:22.490457 systemd[1]: Reloading... Jul 12 00:06:22.576247 zram_generator::config[2010]: No configuration found. Jul 12 00:06:22.682561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:06:22.758486 systemd[1]: Reloading finished in 267 ms. 
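During both reloads, systemd notes that docker.socket still uses the legacy ListenStream path under /var/run/ and rewrites it to /run/docker.sock at load time. Silencing the warning is a one-line change in the socket unit, roughly:

    # docker.socket — update the legacy path so systemd stops rewriting it
    [Socket]
    ListenStream=/run/docker.sock   # was /var/run/docker.sock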
Jul 12 00:06:22.783217 waagent[1870]: 2025-07-12T00:06:22.780423Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jul 12 00:06:22.783217 waagent[1870]: 2025-07-12T00:06:22.780611Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jul 12 00:06:23.134900 waagent[1870]: 2025-07-12T00:06:23.134814Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jul 12 00:06:23.135519 waagent[1870]: 2025-07-12T00:06:23.135462Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jul 12 00:06:23.136367 waagent[1870]: 2025-07-12T00:06:23.136277Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 12 00:06:23.136948 waagent[1870]: 2025-07-12T00:06:23.136726Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 12 00:06:23.136948 waagent[1870]: 2025-07-12T00:06:23.136898Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 12 00:06:23.137227 waagent[1870]: 2025-07-12T00:06:23.137164Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 12 00:06:23.138144 waagent[1870]: 2025-07-12T00:06:23.137393Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 12 00:06:23.138144 waagent[1870]: 2025-07-12T00:06:23.137553Z INFO EnvHandler ExtHandler Configure routes
Jul 12 00:06:23.138144 waagent[1870]: 2025-07-12T00:06:23.137615Z INFO EnvHandler ExtHandler Gateway:None
Jul 12 00:06:23.138144 waagent[1870]: 2025-07-12T00:06:23.137657Z INFO EnvHandler ExtHandler Routes:None
Jul 12 00:06:23.138428 waagent[1870]: 2025-07-12T00:06:23.138375Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 12 00:06:23.138750 waagent[1870]: 2025-07-12T00:06:23.138702Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 12 00:06:23.139007 waagent[1870]: 2025-07-12T00:06:23.138965Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 12 00:06:23.139007 waagent[1870]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 12 00:06:23.139007 waagent[1870]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jul 12 00:06:23.139007 waagent[1870]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 12 00:06:23.139007 waagent[1870]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 12 00:06:23.139007 waagent[1870]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 12 00:06:23.139007 waagent[1870]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 12 00:06:23.139900 waagent[1870]: 2025-07-12T00:06:23.139843Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul 12 00:06:23.140379 waagent[1870]: 2025-07-12T00:06:23.140329Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul 12 00:06:23.140786 waagent[1870]: 2025-07-12T00:06:23.140741Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul 12 00:06:23.140889 waagent[1870]: 2025-07-12T00:06:23.140854Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
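The routing table above is read from /proc/net/route, which prints IPv4 addresses as little-endian hex words. A small bash helper to decode them, checked against addresses already seen in this log:

    # Decode a little-endian hex IPv4 address from /proc/net/route
    decode() { printf '%d.%d.%d.%d\n' "0x${1:6:2}" "0x${1:4:2}" "0x${1:2:2}" "0x${1:0:2}"; }
    decode 0114C80A   # -> 10.200.20.1   (the DHCP gateway acquired earlier)
    decode 10813FA8   # -> 168.63.129.16 (the Azure wire server)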
Jul 12 00:06:23.141042 waagent[1870]: 2025-07-12T00:06:23.141010Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jul 12 00:06:23.151906 waagent[1870]: 2025-07-12T00:06:23.151855Z INFO ExtHandler ExtHandler
Jul 12 00:06:23.152124 waagent[1870]: 2025-07-12T00:06:23.152089Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a9eb0c42-ad40-4598-8c2f-d02bdef48103 correlation 03dd672f-db5c-4cb9-ac18-6c7dab35507e created: 2025-07-12T00:04:50.389183Z]
Jul 12 00:06:23.152614 waagent[1870]: 2025-07-12T00:06:23.152575Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jul 12 00:06:23.153983 waagent[1870]: 2025-07-12T00:06:23.153257Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jul 12 00:06:23.187165 waagent[1870]: 2025-07-12T00:06:23.186956Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 465E3874-2DF5-479B-AA6E-FC48084EF0FB;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jul 12 00:06:23.191616 waagent[1870]: 2025-07-12T00:06:23.191540Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 12 00:06:23.191616 waagent[1870]: Executing ['ip', '-a', '-o', 'link']:
Jul 12 00:06:23.191616 waagent[1870]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 12 00:06:23.191616 waagent[1870]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:84:d6 brd ff:ff:ff:ff:ff:ff
Jul 12 00:06:23.191616 waagent[1870]: 3: enP60896s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:84:d6 brd ff:ff:ff:ff:ff:ff\ altname enP60896p0s2
Jul 12 00:06:23.191616 waagent[1870]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 12 00:06:23.191616 waagent[1870]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 12 00:06:23.191616 waagent[1870]: 2: eth0 inet 10.200.20.43/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 12 00:06:23.191616 waagent[1870]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 12 00:06:23.191616 waagent[1870]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 12 00:06:23.191616 waagent[1870]: 2: eth0 inet6 fe80::222:48ff:fe79:84d6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 12 00:06:23.191616 waagent[1870]: 3: enP60896s1 inet6 fe80::222:48ff:fe79:84d6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 12 00:06:23.255794 waagent[1870]: 2025-07-12T00:06:23.255709Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules.
Current Firewall rules:
Jul 12 00:06:23.255794 waagent[1870]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 12 00:06:23.255794 waagent[1870]: pkts bytes target prot opt in out source destination
Jul 12 00:06:23.255794 waagent[1870]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 12 00:06:23.255794 waagent[1870]: pkts bytes target prot opt in out source destination
Jul 12 00:06:23.255794 waagent[1870]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 12 00:06:23.255794 waagent[1870]: pkts bytes target prot opt in out source destination
Jul 12 00:06:23.255794 waagent[1870]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 12 00:06:23.255794 waagent[1870]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 12 00:06:23.255794 waagent[1870]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 12 00:06:23.258719 waagent[1870]: 2025-07-12T00:06:23.258655Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 12 00:06:23.258719 waagent[1870]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 12 00:06:23.258719 waagent[1870]: pkts bytes target prot opt in out source destination
Jul 12 00:06:23.258719 waagent[1870]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 12 00:06:23.258719 waagent[1870]: pkts bytes target prot opt in out source destination
Jul 12 00:06:23.258719 waagent[1870]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 12 00:06:23.258719 waagent[1870]: pkts bytes target prot opt in out source destination
Jul 12 00:06:23.258719 waagent[1870]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 12 00:06:23.258719 waagent[1870]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 12 00:06:23.258719 waagent[1870]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 12 00:06:23.258965 waagent[1870]: 2025-07-12T00:06:23.258928Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jul 12 00:06:28.478104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:06:28.486460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:06:28.583037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:06:28.587366 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:06:28.716936 kubelet[2100]: E0712 00:06:28.716857 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:06:28.719978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:06:28.720134 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:06:37.333363 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 12 00:06:37.339487 systemd[1]: Started sshd@0-10.200.20.43:22-10.200.16.10:59820.service - OpenSSH per-connection server daemon (10.200.16.10:59820).
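The three OUTPUT-chain rules waagent installs pin down traffic to the wire server: DNS over tcp/53 is allowed, root-owned (UID 0) connections are allowed, and any other new or invalid connection to 168.63.129.16 is dropped. Approximately the commands that would produce the listing above (a sketch; waagent constructs the rules itself, and the exact options it passes may differ):

    # Sketch of the Azure fabric firewall rules shown above
    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP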
Jul 12 00:06:37.842902 sshd[2107]: Accepted publickey for core from 10.200.16.10 port 59820 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:37.844228 sshd[2107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:37.848293 systemd-logind[1659]: New session 3 of user core. Jul 12 00:06:37.854354 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:06:38.260676 systemd[1]: Started sshd@1-10.200.20.43:22-10.200.16.10:59824.service - OpenSSH per-connection server daemon (10.200.16.10:59824). Jul 12 00:06:38.685855 sshd[2112]: Accepted publickey for core from 10.200.16.10 port 59824 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:38.686944 sshd[2112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:38.690657 systemd-logind[1659]: New session 4 of user core. Jul 12 00:06:38.701356 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:06:38.727933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:06:38.736487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:39.014789 sshd[2112]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:39.017386 systemd[1]: sshd@1-10.200.20.43:22-10.200.16.10:59824.service: Deactivated successfully. Jul 12 00:06:39.019050 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:06:39.020788 systemd-logind[1659]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:06:39.021754 systemd-logind[1659]: Removed session 4. Jul 12 00:06:39.067908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:39.072257 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:39.104506 systemd[1]: Started sshd@2-10.200.20.43:22-10.200.16.10:59830.service - OpenSSH per-connection server daemon (10.200.16.10:59830). Jul 12 00:06:39.116370 kubelet[2126]: E0712 00:06:39.116322 2126 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:39.118968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:39.119109 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:39.567791 sshd[2133]: Accepted publickey for core from 10.200.16.10 port 59830 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:39.569502 sshd[2133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:39.573457 systemd-logind[1659]: New session 5 of user core. Jul 12 00:06:39.579361 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:06:39.911659 sshd[2133]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:39.914382 systemd-logind[1659]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:06:39.914673 systemd[1]: sshd@2-10.200.20.43:22-10.200.16.10:59830.service: Deactivated successfully. Jul 12 00:06:39.916459 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:06:39.918043 systemd-logind[1659]: Removed session 5. 
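The kubelet failures repeat on a fixed cadence: each "Scheduled restart job" entry lands roughly 10 seconds after the previous exit (00:06:28.478 after the 00:06:18.447 failure, 00:06:38.727 after 00:06:28.720). That cadence is consistent with a unit configured along these lines (a sketch inferred from the timestamps, not taken from the unit file):

    # kubelet.service restart policy implied by the 10 s cadence (sketch)
    [Service]
    Restart=always
    RestartSec=10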
Jul 12 00:06:40.004475 systemd[1]: Started sshd@3-10.200.20.43:22-10.200.16.10:53978.service - OpenSSH per-connection server daemon (10.200.16.10:53978). Jul 12 00:06:40.447335 sshd[2142]: Accepted publickey for core from 10.200.16.10 port 53978 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:40.448635 sshd[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:40.453270 systemd-logind[1659]: New session 6 of user core. Jul 12 00:06:40.459389 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 00:06:40.462334 chronyd[1652]: Selected source PHC0 Jul 12 00:06:40.790644 sshd[2142]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:40.793947 systemd[1]: sshd@3-10.200.20.43:22-10.200.16.10:53978.service: Deactivated successfully. Jul 12 00:06:40.795396 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:06:40.795965 systemd-logind[1659]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:06:40.796926 systemd-logind[1659]: Removed session 6. Jul 12 00:06:40.877437 systemd[1]: Started sshd@4-10.200.20.43:22-10.200.16.10:53980.service - OpenSSH per-connection server daemon (10.200.16.10:53980). Jul 12 00:06:41.371327 sshd[2149]: Accepted publickey for core from 10.200.16.10 port 53980 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:41.372626 sshd[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:41.376309 systemd-logind[1659]: New session 7 of user core. Jul 12 00:06:41.384338 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 00:06:41.748978 sudo[2152]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:06:41.749269 sudo[2152]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:41.777316 sudo[2152]: pam_unix(sudo:session): session closed for user root Jul 12 00:06:41.862813 sshd[2149]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:41.866513 systemd[1]: sshd@4-10.200.20.43:22-10.200.16.10:53980.service: Deactivated successfully. Jul 12 00:06:41.868002 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:06:41.869432 systemd-logind[1659]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:06:41.870774 systemd-logind[1659]: Removed session 7. Jul 12 00:06:41.940646 systemd[1]: Started sshd@5-10.200.20.43:22-10.200.16.10:53982.service - OpenSSH per-connection server daemon (10.200.16.10:53982). Jul 12 00:06:42.365153 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 53982 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:42.366524 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:42.371163 systemd-logind[1659]: New session 8 of user core. Jul 12 00:06:42.376379 systemd[1]: Started session-8.scope - Session 8 of User core. 
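The sudo entries show user core running root commands without a password prompt; the "Configure sudoer" provisioning step logged earlier typically drops a rule like the following (path and exact wording are illustrative, as the snippet itself is not shown in the log):

    # /etc/sudoers.d/waagent (hypothetical path)
    core ALL=(ALL) NOPASSWD: ALL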
Jul 12 00:06:42.610002 sudo[2161]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:06:42.610431 sudo[2161]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:42.613596 sudo[2161]: pam_unix(sudo:session): session closed for user root Jul 12 00:06:42.618142 sudo[2160]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:06:42.618578 sudo[2160]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:42.630659 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:06:42.631828 auditctl[2164]: No rules Jul 12 00:06:42.632588 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:06:42.632764 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:06:42.634849 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:06:42.655813 augenrules[2182]: No rules Jul 12 00:06:42.656973 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:06:42.658072 sudo[2160]: pam_unix(sudo:session): session closed for user root Jul 12 00:06:42.743428 sshd[2157]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:42.745832 systemd-logind[1659]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:06:42.746849 systemd[1]: sshd@5-10.200.20.43:22-10.200.16.10:53982.service: Deactivated successfully. Jul 12 00:06:42.748141 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:06:42.750095 systemd-logind[1659]: Removed session 8. Jul 12 00:06:42.832859 systemd[1]: Started sshd@6-10.200.20.43:22-10.200.16.10:53990.service - OpenSSH per-connection server daemon (10.200.16.10:53990). Jul 12 00:06:43.304752 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 53990 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:43.305977 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:43.309566 systemd-logind[1659]: New session 9 of user core. Jul 12 00:06:43.319330 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 00:06:43.572712 sudo[2193]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:06:43.572970 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:44.817490 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:06:44.817615 (dockerd)[2208]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:06:45.469992 dockerd[2208]: time="2025-07-12T00:06:45.469944291Z" level=info msg="Starting up" Jul 12 00:06:45.719436 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1313859549-merged.mount: Deactivated successfully. Jul 12 00:06:45.758707 dockerd[2208]: time="2025-07-12T00:06:45.758667011Z" level=info msg="Loading containers: start." Jul 12 00:06:45.924251 kernel: Initializing XFRM netlink socket Jul 12 00:06:46.040452 systemd-networkd[1409]: docker0: Link UP Jul 12 00:06:46.074326 dockerd[2208]: time="2025-07-12T00:06:46.074290891Z" level=info msg="Loading containers: done." 
Jul 12 00:06:46.101285 dockerd[2208]: time="2025-07-12T00:06:46.101230011Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:06:46.101437 dockerd[2208]: time="2025-07-12T00:06:46.101344811Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:06:46.101481 dockerd[2208]: time="2025-07-12T00:06:46.101457771Z" level=info msg="Daemon has completed initialization" Jul 12 00:06:46.166698 dockerd[2208]: time="2025-07-12T00:06:46.166529531Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:06:46.166927 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:06:46.716341 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1667240225-merged.mount: Deactivated successfully. Jul 12 00:06:47.059096 containerd[1693]: time="2025-07-12T00:06:47.059057251Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 12 00:06:47.956007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1262542882.mount: Deactivated successfully. Jul 12 00:06:49.227971 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 12 00:06:49.236681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:49.344373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:49.348944 (kubelet)[2405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:49.458574 kubelet[2405]: E0712 00:06:49.458455 2405 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:49.460735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:49.460886 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 12 00:06:49.791584 containerd[1693]: time="2025-07-12T00:06:49.791530868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:49.796989 containerd[1693]: time="2025-07-12T00:06:49.796778627Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jul 12 00:06:49.803291 containerd[1693]: time="2025-07-12T00:06:49.803236676Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:49.812553 containerd[1693]: time="2025-07-12T00:06:49.812503906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:49.814029 containerd[1693]: time="2025-07-12T00:06:49.813567514Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.754466343s" Jul 12 00:06:49.814029 containerd[1693]: time="2025-07-12T00:06:49.813611794Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 12 00:06:49.814324 containerd[1693]: time="2025-07-12T00:06:49.814300079Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 12 00:06:51.155465 containerd[1693]: time="2025-07-12T00:06:51.155419422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:51.160735 containerd[1693]: time="2025-07-12T00:06:51.160693942Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jul 12 00:06:51.166411 containerd[1693]: time="2025-07-12T00:06:51.166361704Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:51.177023 containerd[1693]: time="2025-07-12T00:06:51.176948624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:51.178393 containerd[1693]: time="2025-07-12T00:06:51.178188553Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.363781873s" Jul 12 00:06:51.178393 containerd[1693]: time="2025-07-12T00:06:51.178252274Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 12 00:06:51.179042 
containerd[1693]: time="2025-07-12T00:06:51.178945479Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 12 00:06:52.691294 containerd[1693]: time="2025-07-12T00:06:52.691243471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:52.697165 containerd[1693]: time="2025-07-12T00:06:52.697119355Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jul 12 00:06:52.704367 containerd[1693]: time="2025-07-12T00:06:52.704311250Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:52.710668 containerd[1693]: time="2025-07-12T00:06:52.710609497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:52.711899 containerd[1693]: time="2025-07-12T00:06:52.711594065Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.532613865s" Jul 12 00:06:52.711899 containerd[1693]: time="2025-07-12T00:06:52.711630905Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 12 00:06:52.712712 containerd[1693]: time="2025-07-12T00:06:52.712682313Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 12 00:06:53.858434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725180962.mount: Deactivated successfully. 
Jul 12 00:06:54.228564 containerd[1693]: time="2025-07-12T00:06:54.228510131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:54.233280 containerd[1693]: time="2025-07-12T00:06:54.233233607Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jul 12 00:06:54.239999 containerd[1693]: time="2025-07-12T00:06:54.239935937Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:54.248753 containerd[1693]: time="2025-07-12T00:06:54.248682323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:54.249448 containerd[1693]: time="2025-07-12T00:06:54.249298448Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.536426734s" Jul 12 00:06:54.249448 containerd[1693]: time="2025-07-12T00:06:54.249336408Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 12 00:06:54.249920 containerd[1693]: time="2025-07-12T00:06:54.249888012Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:06:54.993095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427921402.mount: Deactivated successfully. 
Jul 12 00:06:56.283917 containerd[1693]: time="2025-07-12T00:06:56.283862934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:56.287484 containerd[1693]: time="2025-07-12T00:06:56.287447361Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 12 00:06:56.291366 containerd[1693]: time="2025-07-12T00:06:56.291314150Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:56.296498 containerd[1693]: time="2025-07-12T00:06:56.296429269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:56.297733 containerd[1693]: time="2025-07-12T00:06:56.297596278Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.047669585s" Jul 12 00:06:56.297733 containerd[1693]: time="2025-07-12T00:06:56.297634158Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:06:56.298495 containerd[1693]: time="2025-07-12T00:06:56.298457044Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:06:56.902615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1016566350.mount: Deactivated successfully. 
Jul 12 00:06:56.945397 containerd[1693]: time="2025-07-12T00:06:56.945352474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:56.948944 containerd[1693]: time="2025-07-12T00:06:56.948915464Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 12 00:06:56.958194 containerd[1693]: time="2025-07-12T00:06:56.958141259Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:56.970463 containerd[1693]: time="2025-07-12T00:06:56.970408480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:56.971192 containerd[1693]: time="2025-07-12T00:06:56.971150086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 672.655242ms" Jul 12 00:06:56.971192 containerd[1693]: time="2025-07-12T00:06:56.971187607Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:06:56.971919 containerd[1693]: time="2025-07-12T00:06:56.971891932Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 12 00:06:58.526299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42967256.mount: Deactivated successfully. Jul 12 00:06:59.478091 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 12 00:06:59.484431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:59.591331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:59.595611 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:59.714848 kubelet[2541]: E0712 00:06:59.714768 2541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:59.717453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:59.717721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:00.964228 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jul 12 00:07:01.022111 containerd[1693]: time="2025-07-12T00:07:01.022043702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:01.027280 containerd[1693]: time="2025-07-12T00:07:01.027232985Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 12 00:07:01.033648 containerd[1693]: time="2025-07-12T00:07:01.033593717Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:01.044622 containerd[1693]: time="2025-07-12T00:07:01.044553167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:01.045951 containerd[1693]: time="2025-07-12T00:07:01.045799978Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.073873925s" Jul 12 00:07:01.045951 containerd[1693]: time="2025-07-12T00:07:01.045846418Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 12 00:07:01.978546 update_engine[1662]: I20250712 00:07:01.977236 1662 update_attempter.cc:509] Updating boot flags... Jul 12 00:07:02.055230 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2586) Jul 12 00:07:06.660071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:06.665521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:06.696080 systemd[1]: Reloading requested from client PID 2620 ('systemctl') (unit session-9.scope)... Jul 12 00:07:06.696100 systemd[1]: Reloading... Jul 12 00:07:06.825245 zram_generator::config[2663]: No configuration found. Jul 12 00:07:06.942516 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:07.021020 systemd[1]: Reloading finished in 324 ms. Jul 12 00:07:07.071750 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:07.075887 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:07:07.076279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:07.087448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:07.199131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:07.206076 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:07:07.327495 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:07:07.327495 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:07:07.327495 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:07.327851 kubelet[2729]: I0712 00:07:07.327547 2729 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:07:09.142984 kubelet[2729]: I0712 00:07:09.142920 2729 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:07:09.142984 kubelet[2729]: I0712 00:07:09.142962 2729 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:07:09.143484 kubelet[2729]: I0712 00:07:09.143299 2729 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:07:09.165963 kubelet[2729]: E0712 00:07:09.165489 2729 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:09.165963 kubelet[2729]: I0712 00:07:09.165510 2729 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:07:09.174555 kubelet[2729]: E0712 00:07:09.174496 2729 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:07:09.174555 kubelet[2729]: I0712 00:07:09.174555 2729 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:07:09.177584 kubelet[2729]: I0712 00:07:09.177554 2729 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:07:09.177806 kubelet[2729]: I0712 00:07:09.177774 2729 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:07:09.177992 kubelet[2729]: I0712 00:07:09.177807 2729 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-n-ddca76aad7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:07:09.178075 kubelet[2729]: I0712 00:07:09.178001 2729 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:07:09.178075 kubelet[2729]: I0712 00:07:09.178011 2729 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:07:09.178169 kubelet[2729]: I0712 00:07:09.178147 2729 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:09.181362 kubelet[2729]: I0712 00:07:09.181336 2729 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:07:09.181426 kubelet[2729]: I0712 00:07:09.181368 2729 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:07:09.181426 kubelet[2729]: I0712 00:07:09.181393 2729 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:07:09.181426 kubelet[2729]: I0712 00:07:09.181404 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:07:09.185958 kubelet[2729]: W0712 00:07:09.185841 2729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Jul 12 00:07:09.186251 kubelet[2729]: E0712 00:07:09.185913 2729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:09.186962 kubelet[2729]: W0712 
00:07:09.186840 2729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-n-ddca76aad7&limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Jul 12 00:07:09.186962 kubelet[2729]: E0712 00:07:09.186908 2729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-n-ddca76aad7&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:09.188251 kubelet[2729]: I0712 00:07:09.187251 2729 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:07:09.188251 kubelet[2729]: I0712 00:07:09.187785 2729 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:07:09.188251 kubelet[2729]: W0712 00:07:09.187849 2729 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:07:09.189019 kubelet[2729]: I0712 00:07:09.188992 2729 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:07:09.189137 kubelet[2729]: I0712 00:07:09.189128 2729 server.go:1287] "Started kubelet" Jul 12 00:07:09.189991 kubelet[2729]: I0712 00:07:09.189938 2729 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:07:09.191062 kubelet[2729]: I0712 00:07:09.191031 2729 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:07:09.192617 kubelet[2729]: I0712 00:07:09.192552 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:07:09.193038 kubelet[2729]: I0712 00:07:09.193018 2729 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:07:09.193585 kubelet[2729]: E0712 00:07:09.193388 2729 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-n-ddca76aad7.1851584fa9410305 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-n-ddca76aad7,UID:ci-4081.3.4-n-ddca76aad7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-n-ddca76aad7,},FirstTimestamp:2025-07-12 00:07:09.189104389 +0000 UTC m=+1.979947856,LastTimestamp:2025-07-12 00:07:09.189104389 +0000 UTC m=+1.979947856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-n-ddca76aad7,}" Jul 12 00:07:09.195609 kubelet[2729]: I0712 00:07:09.195585 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:07:09.196679 kubelet[2729]: I0712 00:07:09.196641 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:07:09.198959 kubelet[2729]: E0712 00:07:09.198923 2729 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:07:09.199521 kubelet[2729]: E0712 00:07:09.199499 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" Jul 12 00:07:09.199708 kubelet[2729]: I0712 00:07:09.199656 2729 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:07:09.200027 kubelet[2729]: I0712 00:07:09.200008 2729 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:07:09.200254 kubelet[2729]: I0712 00:07:09.200142 2729 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:07:09.200768 kubelet[2729]: W0712 00:07:09.200714 2729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Jul 12 00:07:09.200989 kubelet[2729]: E0712 00:07:09.200909 2729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:09.201199 kubelet[2729]: I0712 00:07:09.201180 2729 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:07:09.201471 kubelet[2729]: I0712 00:07:09.201378 2729 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:07:09.202680 kubelet[2729]: E0712 00:07:09.202651 2729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-ddca76aad7?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="200ms" Jul 12 00:07:09.202950 kubelet[2729]: I0712 00:07:09.202929 2729 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:07:09.215940 kubelet[2729]: I0712 00:07:09.215881 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:07:09.216917 kubelet[2729]: I0712 00:07:09.216890 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:07:09.216956 kubelet[2729]: I0712 00:07:09.216922 2729 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:07:09.216956 kubelet[2729]: I0712 00:07:09.216944 2729 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:07:09.216956 kubelet[2729]: I0712 00:07:09.216951 2729 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:07:09.217020 kubelet[2729]: E0712 00:07:09.216994 2729 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:07:09.223759 kubelet[2729]: W0712 00:07:09.223705 2729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Jul 12 00:07:09.223900 kubelet[2729]: E0712 00:07:09.223770 2729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:09.300223 kubelet[2729]: E0712 00:07:09.300154 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" Jul 12 00:07:09.317302 kubelet[2729]: E0712 00:07:09.317231 2729 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:07:09.317302 kubelet[2729]: I0712 00:07:09.317266 2729 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:07:09.317302 kubelet[2729]: I0712 00:07:09.317277 2729 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:07:09.317302 kubelet[2729]: I0712 00:07:09.317301 2729 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:09.400705 kubelet[2729]: E0712 00:07:09.400585 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" Jul 12 00:07:09.405239 kubelet[2729]: E0712 00:07:09.404795 2729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-ddca76aad7?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="400ms" Jul 12 00:07:09.501695 kubelet[2729]: E0712 00:07:09.501645 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" Jul 12 00:07:09.517913 kubelet[2729]: E0712 00:07:09.517877 2729 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:07:09.602284 kubelet[2729]: E0712 00:07:09.602244 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" Jul 12 00:07:09.747468 kubelet[2729]: E0712 00:07:09.703087 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" Jul 12 00:07:09.803574 kubelet[2729]: E0712 00:07:09.803524 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" Jul 12 00:07:09.806059 kubelet[2729]: E0712 00:07:09.806023 2729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-ddca76aad7?timeout=10s\": dial tcp 10.200.20.43:6443: connect: 
connection refused" interval="800ms" Jul 12 00:07:09.852967 kubelet[2729]: I0712 00:07:09.852869 2729 policy_none.go:49] "None policy: Start" Jul 12 00:07:09.852967 kubelet[2729]: I0712 00:07:09.852904 2729 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:07:09.852967 kubelet[2729]: I0712 00:07:09.852917 2729 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:07:09.904572 kubelet[2729]: E0712 00:07:09.904485 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" Jul 12 00:07:09.918739 kubelet[2729]: E0712 00:07:09.918705 2729 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:07:10.004249 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:07:10.004560 kubelet[2729]: E0712 00:07:10.004543 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" Jul 12 00:07:10.018633 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:07:10.022040 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 00:07:10.030474 kubelet[2729]: W0712 00:07:10.030414 2729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Jul 12 00:07:10.030607 kubelet[2729]: E0712 00:07:10.030487 2729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:10.032862 kubelet[2729]: I0712 00:07:10.032137 2729 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:07:10.032862 kubelet[2729]: I0712 00:07:10.032361 2729 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:07:10.032862 kubelet[2729]: I0712 00:07:10.032373 2729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:07:10.032862 kubelet[2729]: I0712 00:07:10.032611 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:07:10.034430 kubelet[2729]: E0712 00:07:10.034395 2729 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:07:10.034563 kubelet[2729]: E0712 00:07:10.034455 2729 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-n-ddca76aad7\" not found" Jul 12 00:07:10.134771 kubelet[2729]: I0712 00:07:10.134714 2729 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.135157 kubelet[2729]: E0712 00:07:10.135107 2729 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.337334 kubelet[2729]: I0712 00:07:10.337115 2729 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.337651 kubelet[2729]: E0712 00:07:10.337523 2729 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.425694 kubelet[2729]: W0712 00:07:10.425583 2729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Jul 12 00:07:10.425694 kubelet[2729]: E0712 00:07:10.425660 2729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:10.552998 kubelet[2729]: W0712 00:07:10.552937 2729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-n-ddca76aad7&limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Jul 12 00:07:10.552998 kubelet[2729]: E0712 00:07:10.553004 2729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-n-ddca76aad7&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:10.607405 kubelet[2729]: E0712 00:07:10.607233 2729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-ddca76aad7?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="1.6s" Jul 12 00:07:10.690377 kubelet[2729]: W0712 00:07:10.690303 2729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Jul 12 00:07:10.690377 kubelet[2729]: E0712 00:07:10.690382 2729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:10.730920 systemd[1]: Created slice kubepods-burstable-podfc7c379abb93e6d213dc233c4b01b090.slice - libcontainer container kubepods-burstable-podfc7c379abb93e6d213dc233c4b01b090.slice. Jul 12 00:07:10.739880 kubelet[2729]: I0712 00:07:10.739815 2729 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.740150 kubelet[2729]: E0712 00:07:10.740119 2729 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.747142 kubelet[2729]: E0712 00:07:10.747106 2729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.750909 systemd[1]: Created slice kubepods-burstable-podae2f24146be9f9f2120ec6f73a886c80.slice - libcontainer container kubepods-burstable-podae2f24146be9f9f2120ec6f73a886c80.slice. Jul 12 00:07:10.753045 kubelet[2729]: E0712 00:07:10.752998 2729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.762269 systemd[1]: Created slice kubepods-burstable-podb93a9961045bdf2c001eb2b070157888.slice - libcontainer container kubepods-burstable-podb93a9961045bdf2c001eb2b070157888.slice. Jul 12 00:07:10.764150 kubelet[2729]: E0712 00:07:10.764121 2729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.809488 kubelet[2729]: I0712 00:07:10.809442 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc7c379abb93e6d213dc233c4b01b090-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-n-ddca76aad7\" (UID: \"fc7c379abb93e6d213dc233c4b01b090\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.809488 kubelet[2729]: I0712 00:07:10.809487 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc7c379abb93e6d213dc233c4b01b090-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-n-ddca76aad7\" (UID: \"fc7c379abb93e6d213dc233c4b01b090\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.809488 kubelet[2729]: I0712 00:07:10.809516 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc7c379abb93e6d213dc233c4b01b090-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-n-ddca76aad7\" (UID: \"fc7c379abb93e6d213dc233c4b01b090\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.809488 kubelet[2729]: I0712 00:07:10.809533 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae2f24146be9f9f2120ec6f73a886c80-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" (UID: \"ae2f24146be9f9f2120ec6f73a886c80\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.809488 kubelet[2729]: I0712 00:07:10.809554 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae2f24146be9f9f2120ec6f73a886c80-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" (UID: \"ae2f24146be9f9f2120ec6f73a886c80\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.809862 kubelet[2729]: I0712 00:07:10.809571 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae2f24146be9f9f2120ec6f73a886c80-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" (UID: \"ae2f24146be9f9f2120ec6f73a886c80\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.809862 kubelet[2729]: I0712 00:07:10.809589 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae2f24146be9f9f2120ec6f73a886c80-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" (UID: \"ae2f24146be9f9f2120ec6f73a886c80\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.809862 kubelet[2729]: I0712 00:07:10.809605 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b93a9961045bdf2c001eb2b070157888-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-n-ddca76aad7\" (UID: \"b93a9961045bdf2c001eb2b070157888\") " pod="kube-system/kube-scheduler-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:10.809862 kubelet[2729]: I0712 00:07:10.809621 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae2f24146be9f9f2120ec6f73a886c80-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" (UID: \"ae2f24146be9f9f2120ec6f73a886c80\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:11.048924 containerd[1693]: time="2025-07-12T00:07:11.048875729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-n-ddca76aad7,Uid:fc7c379abb93e6d213dc233c4b01b090,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:11.054759 containerd[1693]: time="2025-07-12T00:07:11.054507015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-n-ddca76aad7,Uid:ae2f24146be9f9f2120ec6f73a886c80,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:11.065910 containerd[1693]: time="2025-07-12T00:07:11.065656108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-n-ddca76aad7,Uid:b93a9961045bdf2c001eb2b070157888,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:11.253328 kubelet[2729]: E0712 00:07:11.253250 2729 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:11.542181 kubelet[2729]: I0712 00:07:11.541923 2729 kubelet_node_status.go:75] "Attempting to 
register node" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:11.542566 kubelet[2729]: E0712 00:07:11.542247 2729 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:11.838957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705396835.mount: Deactivated successfully. Jul 12 00:07:11.899574 kubelet[2729]: W0712 00:07:11.899494 2729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Jul 12 00:07:11.899574 kubelet[2729]: E0712 00:07:11.899542 2729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:11.903234 containerd[1693]: time="2025-07-12T00:07:11.903159914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:11.907071 containerd[1693]: time="2025-07-12T00:07:11.907017346Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 12 00:07:11.914521 containerd[1693]: time="2025-07-12T00:07:11.914468607Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:11.921592 containerd[1693]: time="2025-07-12T00:07:11.920728739Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:11.926783 containerd[1693]: time="2025-07-12T00:07:11.926739789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:07:11.933652 containerd[1693]: time="2025-07-12T00:07:11.932559677Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:11.935765 containerd[1693]: time="2025-07-12T00:07:11.935511221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:07:11.942488 containerd[1693]: time="2025-07-12T00:07:11.942430919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:11.943479 containerd[1693]: time="2025-07-12T00:07:11.943241445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 894.275476ms" Jul 12 
00:07:11.949432 containerd[1693]: time="2025-07-12T00:07:11.949366296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 883.632428ms" Jul 12 00:07:11.950860 containerd[1693]: time="2025-07-12T00:07:11.950578226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 895.99885ms" Jul 12 00:07:12.208105 kubelet[2729]: E0712 00:07:12.207978 2729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-ddca76aad7?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="3.2s" Jul 12 00:07:12.489547 containerd[1693]: time="2025-07-12T00:07:12.488934758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:12.489547 containerd[1693]: time="2025-07-12T00:07:12.488995119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:12.489547 containerd[1693]: time="2025-07-12T00:07:12.489012319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:12.489547 containerd[1693]: time="2025-07-12T00:07:12.489096079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:12.499771 containerd[1693]: time="2025-07-12T00:07:12.499580566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:12.499771 containerd[1693]: time="2025-07-12T00:07:12.499690807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:12.500324 containerd[1693]: time="2025-07-12T00:07:12.499926369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:12.502433 containerd[1693]: time="2025-07-12T00:07:12.501075498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:12.502433 containerd[1693]: time="2025-07-12T00:07:12.500648495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:12.502433 containerd[1693]: time="2025-07-12T00:07:12.500712895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:12.502433 containerd[1693]: time="2025-07-12T00:07:12.500732456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:12.502433 containerd[1693]: time="2025-07-12T00:07:12.500810656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:12.524509 systemd[1]: Started cri-containerd-657102ee9e30c6cd9f21f26ccbadeeda1552da7449c9ae8d809953dddbd68213.scope - libcontainer container 657102ee9e30c6cd9f21f26ccbadeeda1552da7449c9ae8d809953dddbd68213. Jul 12 00:07:12.526632 systemd[1]: Started cri-containerd-b1066540a300fb9c16fbf801f7a4bb55b5e0479aa11b3e345fe87098b230352f.scope - libcontainer container b1066540a300fb9c16fbf801f7a4bb55b5e0479aa11b3e345fe87098b230352f. Jul 12 00:07:12.529969 systemd[1]: Started cri-containerd-ef708292bafd010d3bbbfccf5c154ee948745c9074e7567fd2ef208a6c8b5504.scope - libcontainer container ef708292bafd010d3bbbfccf5c154ee948745c9074e7567fd2ef208a6c8b5504. Jul 12 00:07:12.582410 containerd[1693]: time="2025-07-12T00:07:12.582372747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-n-ddca76aad7,Uid:ae2f24146be9f9f2120ec6f73a886c80,Namespace:kube-system,Attempt:0,} returns sandbox id \"657102ee9e30c6cd9f21f26ccbadeeda1552da7449c9ae8d809953dddbd68213\"" Jul 12 00:07:12.582937 containerd[1693]: time="2025-07-12T00:07:12.582914111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-n-ddca76aad7,Uid:fc7c379abb93e6d213dc233c4b01b090,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1066540a300fb9c16fbf801f7a4bb55b5e0479aa11b3e345fe87098b230352f\"" Jul 12 00:07:12.588774 containerd[1693]: time="2025-07-12T00:07:12.588724354Z" level=info msg="CreateContainer within sandbox \"b1066540a300fb9c16fbf801f7a4bb55b5e0479aa11b3e345fe87098b230352f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:07:12.589166 containerd[1693]: time="2025-07-12T00:07:12.588918196Z" level=info msg="CreateContainer within sandbox \"657102ee9e30c6cd9f21f26ccbadeeda1552da7449c9ae8d809953dddbd68213\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:07:12.599705 containerd[1693]: time="2025-07-12T00:07:12.599299713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-n-ddca76aad7,Uid:b93a9961045bdf2c001eb2b070157888,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef708292bafd010d3bbbfccf5c154ee948745c9074e7567fd2ef208a6c8b5504\"" Jul 12 00:07:12.601826 containerd[1693]: time="2025-07-12T00:07:12.601789851Z" level=info msg="CreateContainer within sandbox \"ef708292bafd010d3bbbfccf5c154ee948745c9074e7567fd2ef208a6c8b5504\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:07:12.688883 containerd[1693]: time="2025-07-12T00:07:12.688831338Z" level=info msg="CreateContainer within sandbox \"b1066540a300fb9c16fbf801f7a4bb55b5e0479aa11b3e345fe87098b230352f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f28bcb8f5c5c5cab45490aa05a83134359bdb2c06479051f0479b2ea8213dafe\"" Jul 12 00:07:12.689708 containerd[1693]: time="2025-07-12T00:07:12.689655104Z" level=info msg="StartContainer for \"f28bcb8f5c5c5cab45490aa05a83134359bdb2c06479051f0479b2ea8213dafe\"" Jul 12 00:07:12.713944 containerd[1693]: time="2025-07-12T00:07:12.713571922Z" level=info msg="CreateContainer within sandbox \"657102ee9e30c6cd9f21f26ccbadeeda1552da7449c9ae8d809953dddbd68213\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"103824291ac26715016013e0ddbc8e5a00f94bcd4f73e98aef2ed83801143cfb\"" Jul 12 00:07:12.714375 systemd[1]: Started cri-containerd-f28bcb8f5c5c5cab45490aa05a83134359bdb2c06479051f0479b2ea8213dafe.scope - libcontainer container f28bcb8f5c5c5cab45490aa05a83134359bdb2c06479051f0479b2ea8213dafe. Jul 12 00:07:12.715387 containerd[1693]: time="2025-07-12T00:07:12.714926612Z" level=info msg="StartContainer for \"103824291ac26715016013e0ddbc8e5a00f94bcd4f73e98aef2ed83801143cfb\"" Jul 12 00:07:12.727158 containerd[1693]: time="2025-07-12T00:07:12.727023862Z" level=info msg="CreateContainer within sandbox \"ef708292bafd010d3bbbfccf5c154ee948745c9074e7567fd2ef208a6c8b5504\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"858ab9165f5f9b956555457a2ab51849a2f20120ba0326a0693a3a9f45595773\"" Jul 12 00:07:12.729046 containerd[1693]: time="2025-07-12T00:07:12.728584273Z" level=info msg="StartContainer for \"858ab9165f5f9b956555457a2ab51849a2f20120ba0326a0693a3a9f45595773\"" Jul 12 00:07:12.741591 kubelet[2729]: W0712 00:07:12.741251 2729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.43:6443: connect: connection refused Jul 12 00:07:12.741591 kubelet[2729]: E0712 00:07:12.741327 2729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:12.753697 systemd[1]: Started cri-containerd-103824291ac26715016013e0ddbc8e5a00f94bcd4f73e98aef2ed83801143cfb.scope - libcontainer container 103824291ac26715016013e0ddbc8e5a00f94bcd4f73e98aef2ed83801143cfb. Jul 12 00:07:12.774714 systemd[1]: Started cri-containerd-858ab9165f5f9b956555457a2ab51849a2f20120ba0326a0693a3a9f45595773.scope - libcontainer container 858ab9165f5f9b956555457a2ab51849a2f20120ba0326a0693a3a9f45595773. 
Jul 12 00:07:12.787281 containerd[1693]: time="2025-07-12T00:07:12.787101788Z" level=info msg="StartContainer for \"f28bcb8f5c5c5cab45490aa05a83134359bdb2c06479051f0479b2ea8213dafe\" returns successfully" Jul 12 00:07:12.811275 containerd[1693]: time="2025-07-12T00:07:12.811225287Z" level=info msg="StartContainer for \"103824291ac26715016013e0ddbc8e5a00f94bcd4f73e98aef2ed83801143cfb\" returns successfully" Jul 12 00:07:12.879096 containerd[1693]: time="2025-07-12T00:07:12.879033871Z" level=info msg="StartContainer for \"858ab9165f5f9b956555457a2ab51849a2f20120ba0326a0693a3a9f45595773\" returns successfully" Jul 12 00:07:13.146167 kubelet[2729]: I0712 00:07:13.145712 2729 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:13.237788 kubelet[2729]: E0712 00:07:13.237751 2729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:13.242666 kubelet[2729]: E0712 00:07:13.242631 2729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:13.244310 kubelet[2729]: E0712 00:07:13.244284 2729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:14.247739 kubelet[2729]: E0712 00:07:14.247700 2729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:14.248086 kubelet[2729]: E0712 00:07:14.248006 2729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:15.562857 kubelet[2729]: E0712 00:07:15.562801 2729 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-n-ddca76aad7\" not found" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:15.601000 kubelet[2729]: I0712 00:07:15.600960 2729 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:15.602328 kubelet[2729]: I0712 00:07:15.602302 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:15.873062 kubelet[2729]: E0712 00:07:15.872920 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-n-ddca76aad7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:15.873062 kubelet[2729]: I0712 00:07:15.872955 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:15.875751 kubelet[2729]: E0712 00:07:15.875708 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:15.875751 kubelet[2729]: I0712 00:07:15.875745 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-n-ddca76aad7" Jul 12 
00:07:15.877608 kubelet[2729]: E0712 00:07:15.877574 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-n-ddca76aad7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:15.926935 kubelet[2729]: I0712 00:07:15.926879 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:15.928960 kubelet[2729]: E0712 00:07:15.928927 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:16.188438 kubelet[2729]: I0712 00:07:16.188250 2729 apiserver.go:52] "Watching apiserver" Jul 12 00:07:16.200814 kubelet[2729]: I0712 00:07:16.200763 2729 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:07:16.617629 kubelet[2729]: I0712 00:07:16.617433 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:16.626294 kubelet[2729]: W0712 00:07:16.626255 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:07:17.634846 kubelet[2729]: I0712 00:07:17.634810 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:17.642501 kubelet[2729]: W0712 00:07:17.642471 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:07:17.741187 systemd[1]: Reloading requested from client PID 3004 ('systemctl') (unit session-9.scope)... Jul 12 00:07:17.741218 systemd[1]: Reloading... Jul 12 00:07:17.824386 zram_generator::config[3043]: No configuration found. Jul 12 00:07:17.927907 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:18.017624 systemd[1]: Reloading finished in 276 ms. Jul 12 00:07:18.054974 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:18.068588 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:07:18.068948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:18.069082 systemd[1]: kubelet.service: Consumed 1.924s CPU time, 128.1M memory peak, 0B memory swap peak. Jul 12 00:07:18.083038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:18.181185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:18.193608 (kubelet)[3108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:07:18.233853 kubelet[3108]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:18.233853 kubelet[3108]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 12 00:07:18.233853 kubelet[3108]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:18.234229 kubelet[3108]: I0712 00:07:18.233912 3108 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:07:18.241177 kubelet[3108]: I0712 00:07:18.241138 3108 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:07:18.241177 kubelet[3108]: I0712 00:07:18.241169 3108 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:07:18.241463 kubelet[3108]: I0712 00:07:18.241443 3108 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:07:18.242734 kubelet[3108]: I0712 00:07:18.242716 3108 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:07:18.245239 kubelet[3108]: I0712 00:07:18.245042 3108 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:07:18.251304 kubelet[3108]: E0712 00:07:18.251262 3108 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:07:18.251418 kubelet[3108]: I0712 00:07:18.251312 3108 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:07:18.254690 kubelet[3108]: I0712 00:07:18.254637 3108 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:07:18.254875 kubelet[3108]: I0712 00:07:18.254840 3108 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:07:18.255049 kubelet[3108]: I0712 00:07:18.254867 3108 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-n-ddca76aad7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:07:18.255049 kubelet[3108]: I0712 00:07:18.255048 3108 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:07:18.255167 kubelet[3108]: I0712 00:07:18.255057 3108 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:07:18.255167 kubelet[3108]: I0712 00:07:18.255096 3108 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:18.259135 kubelet[3108]: I0712 00:07:18.258483 3108 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:07:18.259135 kubelet[3108]: I0712 00:07:18.258519 3108 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:07:18.259135 kubelet[3108]: I0712 00:07:18.258542 3108 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:07:18.259135 kubelet[3108]: I0712 00:07:18.258554 3108 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:07:18.265734 kubelet[3108]: I0712 00:07:18.265699 3108 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:07:18.268220 kubelet[3108]: I0712 00:07:18.266381 3108 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:07:18.271077 kubelet[3108]: I0712 00:07:18.270895 3108 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:07:18.271077 kubelet[3108]: I0712 00:07:18.270932 3108 server.go:1287] "Started kubelet" Jul 12 00:07:18.274414 kubelet[3108]: I0712 00:07:18.274396 3108 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:07:18.277838 kubelet[3108]: I0712 00:07:18.276935 3108 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:07:18.279258 kubelet[3108]: I0712 00:07:18.279187 3108 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:07:18.285741 kubelet[3108]: I0712 00:07:18.282128 3108 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:07:18.285741 kubelet[3108]: I0712 00:07:18.282364 3108 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:07:18.285741 kubelet[3108]: I0712 00:07:18.282546 3108 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:07:18.285741 kubelet[3108]: I0712 00:07:18.283374 3108 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:07:18.285741 kubelet[3108]: E0712 00:07:18.283551 3108 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-ddca76aad7\" not found" Jul 12 00:07:18.285741 kubelet[3108]: I0712 00:07:18.283720 3108 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:07:18.285741 kubelet[3108]: I0712 00:07:18.283827 3108 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:07:18.295991 kubelet[3108]: I0712 00:07:18.295959 3108 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:07:18.296430 kubelet[3108]: I0712 00:07:18.296400 3108 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:07:18.301294 kubelet[3108]: I0712 00:07:18.301107 3108 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:07:18.302461 kubelet[3108]: I0712 00:07:18.302428 3108 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:07:18.304670 kubelet[3108]: I0712 00:07:18.304647 3108 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:07:18.305138 kubelet[3108]: I0712 00:07:18.304765 3108 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:07:18.305138 kubelet[3108]: I0712 00:07:18.304790 3108 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:07:18.305138 kubelet[3108]: I0712 00:07:18.304797 3108 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:07:18.305138 kubelet[3108]: E0712 00:07:18.304833 3108 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:07:18.376690 kubelet[3108]: I0712 00:07:18.376629 3108 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:07:18.376690 kubelet[3108]: I0712 00:07:18.376651 3108 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:07:18.377319 kubelet[3108]: I0712 00:07:18.376801 3108 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:18.378911 kubelet[3108]: I0712 00:07:18.378250 3108 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:07:18.378911 kubelet[3108]: I0712 00:07:18.378269 3108 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:07:18.378911 kubelet[3108]: I0712 00:07:18.378287 3108 policy_none.go:49] "None policy: Start" Jul 12 00:07:18.378911 kubelet[3108]: I0712 00:07:18.378296 3108 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:07:18.378911 kubelet[3108]: I0712 00:07:18.378306 3108 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:07:18.378911 kubelet[3108]: I0712 00:07:18.378405 3108 state_mem.go:75] "Updated machine memory state" Jul 12 00:07:18.384314 kubelet[3108]: I0712 00:07:18.384286 3108 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:07:18.384796 kubelet[3108]: I0712 00:07:18.384781 3108 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:07:18.384910 kubelet[3108]: I0712 00:07:18.384879 3108 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:07:18.385164 kubelet[3108]: I0712 00:07:18.385148 3108 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:07:18.389493 kubelet[3108]: E0712 00:07:18.389452 3108 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:07:18.405848 kubelet[3108]: I0712 00:07:18.405806 3108 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.406420 kubelet[3108]: I0712 00:07:18.406343 3108 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.407347 kubelet[3108]: I0712 00:07:18.407332 3108 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.424799 kubelet[3108]: W0712 00:07:18.424764 3108 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:07:18.424799 kubelet[3108]: E0712 00:07:18.424824 3108 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-n-ddca76aad7\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.425515 kubelet[3108]: W0712 00:07:18.425483 3108 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:07:18.426352 kubelet[3108]: W0712 00:07:18.426305 3108 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:07:18.426352 kubelet[3108]: E0712 00:07:18.426348 3108 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-n-ddca76aad7\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.485576 kubelet[3108]: I0712 00:07:18.485316 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae2f24146be9f9f2120ec6f73a886c80-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" (UID: \"ae2f24146be9f9f2120ec6f73a886c80\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.485576 kubelet[3108]: I0712 00:07:18.485371 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc7c379abb93e6d213dc233c4b01b090-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-n-ddca76aad7\" (UID: \"fc7c379abb93e6d213dc233c4b01b090\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.485576 kubelet[3108]: I0712 00:07:18.485391 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc7c379abb93e6d213dc233c4b01b090-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-n-ddca76aad7\" (UID: \"fc7c379abb93e6d213dc233c4b01b090\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.485576 kubelet[3108]: I0712 00:07:18.485413 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae2f24146be9f9f2120ec6f73a886c80-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" (UID: \"ae2f24146be9f9f2120ec6f73a886c80\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.485576 kubelet[3108]: I0712 00:07:18.485430 
3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae2f24146be9f9f2120ec6f73a886c80-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" (UID: \"ae2f24146be9f9f2120ec6f73a886c80\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.485820 kubelet[3108]: I0712 00:07:18.485446 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae2f24146be9f9f2120ec6f73a886c80-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" (UID: \"ae2f24146be9f9f2120ec6f73a886c80\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.485820 kubelet[3108]: I0712 00:07:18.485462 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae2f24146be9f9f2120ec6f73a886c80-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-n-ddca76aad7\" (UID: \"ae2f24146be9f9f2120ec6f73a886c80\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.485820 kubelet[3108]: I0712 00:07:18.485477 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b93a9961045bdf2c001eb2b070157888-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-n-ddca76aad7\" (UID: \"b93a9961045bdf2c001eb2b070157888\") " pod="kube-system/kube-scheduler-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.485820 kubelet[3108]: I0712 00:07:18.485491 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc7c379abb93e6d213dc233c4b01b090-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-n-ddca76aad7\" (UID: \"fc7c379abb93e6d213dc233c4b01b090\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.494571 kubelet[3108]: I0712 00:07:18.494536 3108 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.508107 kubelet[3108]: I0712 00:07:18.507840 3108 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:18.508107 kubelet[3108]: I0712 00:07:18.507928 3108 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:19.261930 kubelet[3108]: I0712 00:07:19.261885 3108 apiserver.go:52] "Watching apiserver" Jul 12 00:07:19.284322 kubelet[3108]: I0712 00:07:19.284262 3108 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:07:19.344396 kubelet[3108]: I0712 00:07:19.343681 3108 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:19.344811 kubelet[3108]: I0712 00:07:19.344663 3108 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:19.355454 kubelet[3108]: W0712 00:07:19.355139 3108 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:07:19.355454 kubelet[3108]: E0712 00:07:19.355190 3108 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-n-ddca76aad7\" already 
exists" pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:19.356752 kubelet[3108]: W0712 00:07:19.356736 3108 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:07:19.356912 kubelet[3108]: E0712 00:07:19.356868 3108 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-n-ddca76aad7\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.4-n-ddca76aad7" Jul 12 00:07:19.379812 kubelet[3108]: I0712 00:07:19.379725 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-ddca76aad7" podStartSLOduration=1.3797102909999999 podStartE2EDuration="1.379710291s" podCreationTimestamp="2025-07-12 00:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:19.369081572 +0000 UTC m=+1.172295591" watchObservedRunningTime="2025-07-12 00:07:19.379710291 +0000 UTC m=+1.182924270" Jul 12 00:07:19.394923 kubelet[3108]: I0712 00:07:19.394748 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-n-ddca76aad7" podStartSLOduration=2.394732443 podStartE2EDuration="2.394732443s" podCreationTimestamp="2025-07-12 00:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:19.380037254 +0000 UTC m=+1.183251233" watchObservedRunningTime="2025-07-12 00:07:19.394732443 +0000 UTC m=+1.197946462" Jul 12 00:07:19.407384 kubelet[3108]: I0712 00:07:19.407324 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-n-ddca76aad7" podStartSLOduration=3.407306896 podStartE2EDuration="3.407306896s" podCreationTimestamp="2025-07-12 00:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:19.394981885 +0000 UTC m=+1.198195904" watchObservedRunningTime="2025-07-12 00:07:19.407306896 +0000 UTC m=+1.210520915" Jul 12 00:07:23.056525 kubelet[3108]: I0712 00:07:23.056420 3108 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:07:23.057500 containerd[1693]: time="2025-07-12T00:07:23.057337146Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:07:23.057811 kubelet[3108]: I0712 00:07:23.057552 3108 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:07:23.979107 systemd[1]: Created slice kubepods-besteffort-pod45b785bb_145e_42da_b478_c70926d06b4d.slice - libcontainer container kubepods-besteffort-pod45b785bb_145e_42da_b478_c70926d06b4d.slice. 
Jul 12 00:07:24.023461 kubelet[3108]: I0712 00:07:24.023417 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/45b785bb-145e-42da-b478-c70926d06b4d-kube-proxy\") pod \"kube-proxy-sfz57\" (UID: \"45b785bb-145e-42da-b478-c70926d06b4d\") " pod="kube-system/kube-proxy-sfz57" Jul 12 00:07:24.023461 kubelet[3108]: I0712 00:07:24.023463 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q5ts\" (UniqueName: \"kubernetes.io/projected/45b785bb-145e-42da-b478-c70926d06b4d-kube-api-access-9q5ts\") pod \"kube-proxy-sfz57\" (UID: \"45b785bb-145e-42da-b478-c70926d06b4d\") " pod="kube-system/kube-proxy-sfz57" Jul 12 00:07:24.023627 kubelet[3108]: I0712 00:07:24.023485 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45b785bb-145e-42da-b478-c70926d06b4d-xtables-lock\") pod \"kube-proxy-sfz57\" (UID: \"45b785bb-145e-42da-b478-c70926d06b4d\") " pod="kube-system/kube-proxy-sfz57" Jul 12 00:07:24.023627 kubelet[3108]: I0712 00:07:24.023500 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45b785bb-145e-42da-b478-c70926d06b4d-lib-modules\") pod \"kube-proxy-sfz57\" (UID: \"45b785bb-145e-42da-b478-c70926d06b4d\") " pod="kube-system/kube-proxy-sfz57" Jul 12 00:07:24.109060 systemd[1]: Created slice kubepods-besteffort-poda29c8a27_0962_4443_9677_9e08b513a31e.slice - libcontainer container kubepods-besteffort-poda29c8a27_0962_4443_9677_9e08b513a31e.slice. Jul 12 00:07:24.125193 kubelet[3108]: I0712 00:07:24.124331 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2wxs\" (UniqueName: \"kubernetes.io/projected/a29c8a27-0962-4443-9677-9e08b513a31e-kube-api-access-z2wxs\") pod \"tigera-operator-747864d56d-lzw4p\" (UID: \"a29c8a27-0962-4443-9677-9e08b513a31e\") " pod="tigera-operator/tigera-operator-747864d56d-lzw4p" Jul 12 00:07:24.125193 kubelet[3108]: I0712 00:07:24.124381 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a29c8a27-0962-4443-9677-9e08b513a31e-var-lib-calico\") pod \"tigera-operator-747864d56d-lzw4p\" (UID: \"a29c8a27-0962-4443-9677-9e08b513a31e\") " pod="tigera-operator/tigera-operator-747864d56d-lzw4p" Jul 12 00:07:24.288251 containerd[1693]: time="2025-07-12T00:07:24.288056337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sfz57,Uid:45b785bb-145e-42da-b478-c70926d06b4d,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:24.347681 containerd[1693]: time="2025-07-12T00:07:24.347381036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:24.347930 containerd[1693]: time="2025-07-12T00:07:24.347610158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:24.347930 containerd[1693]: time="2025-07-12T00:07:24.347815519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:24.348502 containerd[1693]: time="2025-07-12T00:07:24.348456925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:24.375043 systemd[1]: Started cri-containerd-2aa1355791cf8e16104d99f0e7b4ab5c4289590f279f24e6b1f263cb5e22b67b.scope - libcontainer container 2aa1355791cf8e16104d99f0e7b4ab5c4289590f279f24e6b1f263cb5e22b67b. Jul 12 00:07:24.398633 containerd[1693]: time="2025-07-12T00:07:24.398597747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sfz57,Uid:45b785bb-145e-42da-b478-c70926d06b4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2aa1355791cf8e16104d99f0e7b4ab5c4289590f279f24e6b1f263cb5e22b67b\"" Jul 12 00:07:24.403229 containerd[1693]: time="2025-07-12T00:07:24.402773982Z" level=info msg="CreateContainer within sandbox \"2aa1355791cf8e16104d99f0e7b4ab5c4289590f279f24e6b1f263cb5e22b67b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:07:24.413236 containerd[1693]: time="2025-07-12T00:07:24.413191749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lzw4p,Uid:a29c8a27-0962-4443-9677-9e08b513a31e,Namespace:tigera-operator,Attempt:0,}" Jul 12 00:07:24.498602 containerd[1693]: time="2025-07-12T00:07:24.498552707Z" level=info msg="CreateContainer within sandbox \"2aa1355791cf8e16104d99f0e7b4ab5c4289590f279f24e6b1f263cb5e22b67b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d2b4a45887ad672e0e72a13044dce091e3fe702cd5a9acca938be0964e69be37\"" Jul 12 00:07:24.499216 containerd[1693]: time="2025-07-12T00:07:24.499178912Z" level=info msg="StartContainer for \"d2b4a45887ad672e0e72a13044dce091e3fe702cd5a9acca938be0964e69be37\"" Jul 12 00:07:24.530575 containerd[1693]: time="2025-07-12T00:07:24.530386895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:24.530575 containerd[1693]: time="2025-07-12T00:07:24.530536536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:24.530884 containerd[1693]: time="2025-07-12T00:07:24.530548816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:24.531088 containerd[1693]: time="2025-07-12T00:07:24.530928659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:24.538526 systemd[1]: Started cri-containerd-d2b4a45887ad672e0e72a13044dce091e3fe702cd5a9acca938be0964e69be37.scope - libcontainer container d2b4a45887ad672e0e72a13044dce091e3fe702cd5a9acca938be0964e69be37. Jul 12 00:07:24.553414 systemd[1]: Started cri-containerd-d0c083c7d1d5b5ed35edfce7c817211dcff41ae6c0e67ccf90c39064374c3d7a.scope - libcontainer container d0c083c7d1d5b5ed35edfce7c817211dcff41ae6c0e67ccf90c39064374c3d7a. 
Jul 12 00:07:24.582450 containerd[1693]: time="2025-07-12T00:07:24.582070290Z" level=info msg="StartContainer for \"d2b4a45887ad672e0e72a13044dce091e3fe702cd5a9acca938be0964e69be37\" returns successfully" Jul 12 00:07:24.593500 containerd[1693]: time="2025-07-12T00:07:24.593090742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lzw4p,Uid:a29c8a27-0962-4443-9677-9e08b513a31e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d0c083c7d1d5b5ed35edfce7c817211dcff41ae6c0e67ccf90c39064374c3d7a\"" Jul 12 00:07:24.596389 containerd[1693]: time="2025-07-12T00:07:24.596270169Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 00:07:26.642246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307192621.mount: Deactivated successfully. Jul 12 00:07:27.630782 kubelet[3108]: I0712 00:07:27.630708 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sfz57" podStartSLOduration=4.630675109 podStartE2EDuration="4.630675109s" podCreationTimestamp="2025-07-12 00:07:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:25.376900974 +0000 UTC m=+7.180114993" watchObservedRunningTime="2025-07-12 00:07:27.630675109 +0000 UTC m=+9.433889088" Jul 12 00:07:28.111309 containerd[1693]: time="2025-07-12T00:07:28.111246613Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:28.118755 containerd[1693]: time="2025-07-12T00:07:28.118708751Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 12 00:07:28.132238 containerd[1693]: time="2025-07-12T00:07:28.130562043Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:28.146383 containerd[1693]: time="2025-07-12T00:07:28.146326126Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:28.146945 containerd[1693]: time="2025-07-12T00:07:28.146916931Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 3.550606642s" Jul 12 00:07:28.147039 containerd[1693]: time="2025-07-12T00:07:28.147022692Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 12 00:07:28.150399 containerd[1693]: time="2025-07-12T00:07:28.150360838Z" level=info msg="CreateContainer within sandbox \"d0c083c7d1d5b5ed35edfce7c817211dcff41ae6c0e67ccf90c39064374c3d7a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 00:07:28.185932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988653526.mount: Deactivated successfully. 
Jul 12 00:07:28.202439 containerd[1693]: time="2025-07-12T00:07:28.202390443Z" level=info msg="CreateContainer within sandbox \"d0c083c7d1d5b5ed35edfce7c817211dcff41ae6c0e67ccf90c39064374c3d7a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7fd83a35bfe75e1ee972323d034f67792a738e9b2652b142d9dce46fea1e01d3\"" Jul 12 00:07:28.203452 containerd[1693]: time="2025-07-12T00:07:28.203419371Z" level=info msg="StartContainer for \"7fd83a35bfe75e1ee972323d034f67792a738e9b2652b142d9dce46fea1e01d3\"" Jul 12 00:07:28.234480 systemd[1]: Started cri-containerd-7fd83a35bfe75e1ee972323d034f67792a738e9b2652b142d9dce46fea1e01d3.scope - libcontainer container 7fd83a35bfe75e1ee972323d034f67792a738e9b2652b142d9dce46fea1e01d3. Jul 12 00:07:28.262644 containerd[1693]: time="2025-07-12T00:07:28.262591752Z" level=info msg="StartContainer for \"7fd83a35bfe75e1ee972323d034f67792a738e9b2652b142d9dce46fea1e01d3\" returns successfully" Jul 12 00:07:28.400281 kubelet[3108]: I0712 00:07:28.399785 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-lzw4p" podStartSLOduration=0.846070594 podStartE2EDuration="4.399764461s" podCreationTimestamp="2025-07-12 00:07:24 +0000 UTC" firstStartedPulling="2025-07-12 00:07:24.594729316 +0000 UTC m=+6.397943335" lastFinishedPulling="2025-07-12 00:07:28.148423183 +0000 UTC m=+9.951637202" observedRunningTime="2025-07-12 00:07:28.384539502 +0000 UTC m=+10.187753521" watchObservedRunningTime="2025-07-12 00:07:28.399764461 +0000 UTC m=+10.202978480" Jul 12 00:07:34.102470 sudo[2193]: pam_unix(sudo:session): session closed for user root Jul 12 00:07:34.185426 sshd[2190]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:34.189233 systemd[1]: sshd@6-10.200.20.43:22-10.200.16.10:53990.service: Deactivated successfully. Jul 12 00:07:34.192827 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:07:34.194274 systemd[1]: session-9.scope: Consumed 6.633s CPU time, 150.6M memory peak, 0B memory swap peak. Jul 12 00:07:34.196279 systemd-logind[1659]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:07:34.197633 systemd-logind[1659]: Removed session 9. Jul 12 00:07:41.040747 systemd[1]: Created slice kubepods-besteffort-podc3d6c5cd_2771_49f5_9a5d_1ac15afe0b50.slice - libcontainer container kubepods-besteffort-podc3d6c5cd_2771_49f5_9a5d_1ac15afe0b50.slice. 
Jul 12 00:07:41.125569 kubelet[3108]: I0712 00:07:41.125469 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxnr9\" (UniqueName: \"kubernetes.io/projected/c3d6c5cd-2771-49f5-9a5d-1ac15afe0b50-kube-api-access-sxnr9\") pod \"calico-typha-658b78d7b9-r9f62\" (UID: \"c3d6c5cd-2771-49f5-9a5d-1ac15afe0b50\") " pod="calico-system/calico-typha-658b78d7b9-r9f62" Jul 12 00:07:41.125569 kubelet[3108]: I0712 00:07:41.125514 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3d6c5cd-2771-49f5-9a5d-1ac15afe0b50-tigera-ca-bundle\") pod \"calico-typha-658b78d7b9-r9f62\" (UID: \"c3d6c5cd-2771-49f5-9a5d-1ac15afe0b50\") " pod="calico-system/calico-typha-658b78d7b9-r9f62" Jul 12 00:07:41.125569 kubelet[3108]: I0712 00:07:41.125533 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c3d6c5cd-2771-49f5-9a5d-1ac15afe0b50-typha-certs\") pod \"calico-typha-658b78d7b9-r9f62\" (UID: \"c3d6c5cd-2771-49f5-9a5d-1ac15afe0b50\") " pod="calico-system/calico-typha-658b78d7b9-r9f62" Jul 12 00:07:41.165986 systemd[1]: Created slice kubepods-besteffort-pod5ee6e301_a790_423f_8822_0c62ca556e29.slice - libcontainer container kubepods-besteffort-pod5ee6e301_a790_423f_8822_0c62ca556e29.slice. Jul 12 00:07:41.226709 kubelet[3108]: I0712 00:07:41.226087 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5ee6e301-a790-423f-8822-0c62ca556e29-cni-log-dir\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.226709 kubelet[3108]: I0712 00:07:41.226134 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ee6e301-a790-423f-8822-0c62ca556e29-var-run-calico\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.226709 kubelet[3108]: I0712 00:07:41.226170 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5ee6e301-a790-423f-8822-0c62ca556e29-node-certs\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.226709 kubelet[3108]: I0712 00:07:41.226213 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ee6e301-a790-423f-8822-0c62ca556e29-cni-bin-dir\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.226709 kubelet[3108]: I0712 00:07:41.226246 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ee6e301-a790-423f-8822-0c62ca556e29-tigera-ca-bundle\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.227033 kubelet[3108]: I0712 00:07:41.226269 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5ee6e301-a790-423f-8822-0c62ca556e29-lib-modules\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.227033 kubelet[3108]: I0712 00:07:41.226287 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5ee6e301-a790-423f-8822-0c62ca556e29-cni-net-dir\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.227033 kubelet[3108]: I0712 00:07:41.226303 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5ee6e301-a790-423f-8822-0c62ca556e29-flexvol-driver-host\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.227033 kubelet[3108]: I0712 00:07:41.226325 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5ee6e301-a790-423f-8822-0c62ca556e29-var-lib-calico\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.227033 kubelet[3108]: I0712 00:07:41.226354 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ee6e301-a790-423f-8822-0c62ca556e29-policysync\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.228288 kubelet[3108]: I0712 00:07:41.226369 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ee6e301-a790-423f-8822-0c62ca556e29-xtables-lock\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.228288 kubelet[3108]: I0712 00:07:41.226386 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckgn5\" (UniqueName: \"kubernetes.io/projected/5ee6e301-a790-423f-8822-0c62ca556e29-kube-api-access-ckgn5\") pod \"calico-node-rs5t6\" (UID: \"5ee6e301-a790-423f-8822-0c62ca556e29\") " pod="calico-system/calico-node-rs5t6" Jul 12 00:07:41.300876 kubelet[3108]: E0712 00:07:41.299810 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9q7pd" podUID="058503a3-83aa-47a4-b834-2e39d5989b2c" Jul 12 00:07:41.327982 kubelet[3108]: I0712 00:07:41.327432 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/058503a3-83aa-47a4-b834-2e39d5989b2c-varrun\") pod \"csi-node-driver-9q7pd\" (UID: \"058503a3-83aa-47a4-b834-2e39d5989b2c\") " pod="calico-system/csi-node-driver-9q7pd" Jul 12 00:07:41.327982 kubelet[3108]: I0712 00:07:41.327542 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/058503a3-83aa-47a4-b834-2e39d5989b2c-kubelet-dir\") pod 
\"csi-node-driver-9q7pd\" (UID: \"058503a3-83aa-47a4-b834-2e39d5989b2c\") " pod="calico-system/csi-node-driver-9q7pd" Jul 12 00:07:41.327982 kubelet[3108]: I0712 00:07:41.327561 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/058503a3-83aa-47a4-b834-2e39d5989b2c-registration-dir\") pod \"csi-node-driver-9q7pd\" (UID: \"058503a3-83aa-47a4-b834-2e39d5989b2c\") " pod="calico-system/csi-node-driver-9q7pd" Jul 12 00:07:41.327982 kubelet[3108]: I0712 00:07:41.327576 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/058503a3-83aa-47a4-b834-2e39d5989b2c-socket-dir\") pod \"csi-node-driver-9q7pd\" (UID: \"058503a3-83aa-47a4-b834-2e39d5989b2c\") " pod="calico-system/csi-node-driver-9q7pd" Jul 12 00:07:41.327982 kubelet[3108]: I0712 00:07:41.327603 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6fp9\" (UniqueName: \"kubernetes.io/projected/058503a3-83aa-47a4-b834-2e39d5989b2c-kube-api-access-n6fp9\") pod \"csi-node-driver-9q7pd\" (UID: \"058503a3-83aa-47a4-b834-2e39d5989b2c\") " pod="calico-system/csi-node-driver-9q7pd" Jul 12 00:07:41.333952 kubelet[3108]: E0712 00:07:41.333923 3108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:41.333952 kubelet[3108]: W0712 00:07:41.333944 3108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:41.334086 kubelet[3108]: E0712 00:07:41.333969 3108 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:41.345765 containerd[1693]: time="2025-07-12T00:07:41.345335873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-658b78d7b9-r9f62,Uid:c3d6c5cd-2771-49f5-9a5d-1ac15afe0b50,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:41.357036 kubelet[3108]: E0712 00:07:41.356931 3108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:41.357036 kubelet[3108]: W0712 00:07:41.356965 3108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:41.357036 kubelet[3108]: E0712 00:07:41.356987 3108 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:41.425908 containerd[1693]: time="2025-07-12T00:07:41.425349273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:41.425908 containerd[1693]: time="2025-07-12T00:07:41.425542395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:41.425908 containerd[1693]: time="2025-07-12T00:07:41.425576075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:41.425908 containerd[1693]: time="2025-07-12T00:07:41.425696636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:41.428300 kubelet[3108]: E0712 00:07:41.428233 3108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:41.428300 kubelet[3108]: W0712 00:07:41.428255 3108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:41.428300 kubelet[3108]: E0712 00:07:41.428299 3108 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:41.428770 kubelet[3108]: E0712 00:07:41.428575 3108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:41.428770 kubelet[3108]: W0712 00:07:41.428584 3108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:41.429008 kubelet[3108]: E0712 00:07:41.428908 3108 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:41.430709 kubelet[3108]: E0712 00:07:41.429151 3108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:41.430709 kubelet[3108]: W0712 00:07:41.429160 3108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:41.430709 kubelet[3108]: E0712 00:07:41.429180 3108 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:41.430709 kubelet[3108]: E0712 00:07:41.429417 3108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:41.430709 kubelet[3108]: W0712 00:07:41.429427 3108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:41.430709 kubelet[3108]: E0712 00:07:41.429440 3108 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
[the three kubelet FlexVolume messages above repeat 25 more times between 00:07:41.428575 and 00:07:41.449857]
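The repeated storm is one failure reported three ways: the kubelet probes every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec, execs `<driver> init`, and parses stdout as a JSON status object; since the nodeagent~uds/uds binary is not installed yet, stdout is empty and the JSON decode fails with "unexpected end of JSON input". A minimal sketch of that call convention in Go, with the status shape taken from the documented FlexVolume protocol (the program itself is illustrative, not this system's driver):

// flexvol-stub: minimal sketch of a FlexVolume driver entry point.
// The kubelet execs `<driver> <op> [args...]` and parses stdout as JSON;
// printing nothing is exactly what yields "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	enc := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Declare success and that this driver needs no attach/detach phase.
		enc.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Unimplemented operations must still answer in JSON.
	enc.Encode(driverStatus{Status: "Not supported"})
	os.Exit(1)
}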
Jul 12 00:07:41.463046 systemd[1]: Started cri-containerd-ebbdfa66c4e918e6b621ad77a11f497237215b4c3fdc800e25e492a04e7543a2.scope - libcontainer container ebbdfa66c4e918e6b621ad77a11f497237215b4c3fdc800e25e492a04e7543a2.
Jul 12 00:07:41.474940 containerd[1693]: time="2025-07-12T00:07:41.474893244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rs5t6,Uid:5ee6e301-a790-423f-8822-0c62ca556e29,Namespace:calico-system,Attempt:0,}"
Jul 12 00:07:41.518333 containerd[1693]: time="2025-07-12T00:07:41.518184049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-658b78d7b9-r9f62,Uid:c3d6c5cd-2771-49f5-9a5d-1ac15afe0b50,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebbdfa66c4e918e6b621ad77a11f497237215b4c3fdc800e25e492a04e7543a2\""
Jul 12 00:07:41.521694 containerd[1693]: time="2025-07-12T00:07:41.520319545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 12 00:07:41.536160 containerd[1693]: time="2025-07-12T00:07:41.535736740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:07:41.536160 containerd[1693]: time="2025-07-12T00:07:41.536117703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:07:41.536437 containerd[1693]: time="2025-07-12T00:07:41.536130023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:07:41.536645 containerd[1693]: time="2025-07-12T00:07:41.536565507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:07:41.553419 systemd[1]: Started cri-containerd-07d0e3c90c5bf6429b3538a1b52728eb8ea24525bb6337c5a0988e805801cdaf.scope - libcontainer container 07d0e3c90c5bf6429b3538a1b52728eb8ea24525bb6337c5a0988e805801cdaf.
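Each "Started cri-containerd-<id>.scope" line is systemd tracking a container launched by containerd's runc v2 shim; the CRI plugin keeps all pod sandboxes and containers in containerd's "k8s.io" namespace. A sketch of listing the same IDs with containerd's Go client, assuming the default socket path and the v1 client import path (neither is read from this log):

// list-k8s-containers: sketch of inspecting the IDs that appear in
// the cri-containerd-<id>.scope units, via containerd's Go client.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	// Kubernetes-managed sandboxes and containers live in "k8s.io".
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID())
	}
}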
Jul 12 00:07:41.583122 containerd[1693]: time="2025-07-12T00:07:41.583041615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rs5t6,Uid:5ee6e301-a790-423f-8822-0c62ca556e29,Namespace:calico-system,Attempt:0,} returns sandbox id \"07d0e3c90c5bf6429b3538a1b52728eb8ea24525bb6337c5a0988e805801cdaf\""
Jul 12 00:07:42.858159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707283339.mount: Deactivated successfully.
Jul 12 00:07:43.306958 kubelet[3108]: E0712 00:07:43.305881 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9q7pd" podUID="058503a3-83aa-47a4-b834-2e39d5989b2c"
Jul 12 00:07:43.473599 containerd[1693]: time="2025-07-12T00:07:43.473547385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:43.482626 containerd[1693]: time="2025-07-12T00:07:43.482286891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 12 00:07:43.492395 containerd[1693]: time="2025-07-12T00:07:43.492324726Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:43.502668 containerd[1693]: time="2025-07-12T00:07:43.502588883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:43.503751 containerd[1693]: time="2025-07-12T00:07:43.503143607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.982789702s"
Jul 12 00:07:43.503751 containerd[1693]: time="2025-07-12T00:07:43.503182967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 12 00:07:43.504987 containerd[1693]: time="2025-07-12T00:07:43.504876260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 12 00:07:43.513873 containerd[1693]: time="2025-07-12T00:07:43.513826407Z" level=info msg="CreateContainer within sandbox \"ebbdfa66c4e918e6b621ad77a11f497237215b4c3fdc800e25e492a04e7543a2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 12 00:07:43.579815 containerd[1693]: time="2025-07-12T00:07:43.579601540Z" level=info msg="CreateContainer within sandbox \"ebbdfa66c4e918e6b621ad77a11f497237215b4c3fdc800e25e492a04e7543a2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d8d7ea9a73d0c78d1241692dc36820d5ac880aad7015dc9c2a1dcf5f37c17a7d\""
Jul 12 00:07:43.580740 containerd[1693]: time="2025-07-12T00:07:43.580687548Z" level=info msg="StartContainer for \"d8d7ea9a73d0c78d1241692dc36820d5ac880aad7015dc9c2a1dcf5f37c17a7d\""
Jul 12 00:07:43.610427 systemd[1]: Started cri-containerd-d8d7ea9a73d0c78d1241692dc36820d5ac880aad7015dc9c2a1dcf5f37c17a7d.scope - libcontainer container d8d7ea9a73d0c78d1241692dc36820d5ac880aad7015dc9c2a1dcf5f37c17a7d.
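As a sanity check on the typha pull above: 33087207 bytes read over the reported 1.982789702s comes to roughly 16 MB/s. A trivial computation, with both constants copied from the two containerd lines:

// pull-rate: back-of-the-envelope throughput for the typha image pull.
package main

import "fmt"

func main() {
	const bytesRead = 33087207.0 // "stop pulling image ... bytes read"
	const seconds = 1.982789702  // "Pulled image ... in 1.982789702s"
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // prints about 15.9 MiB/s
}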
Jul 12 00:07:43.654765 containerd[1693]: time="2025-07-12T00:07:43.654126539Z" level=info msg="StartContainer for \"d8d7ea9a73d0c78d1241692dc36820d5ac880aad7015dc9c2a1dcf5f37c17a7d\" returns successfully"
Jul 12 00:07:44.422725 kubelet[3108]: I0712 00:07:44.421988 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-658b78d7b9-r9f62" podStartSLOduration=2.437900022 podStartE2EDuration="4.421971814s" podCreationTimestamp="2025-07-12 00:07:40 +0000 UTC" firstStartedPulling="2025-07-12 00:07:41.519851061 +0000 UTC m=+23.323065080" lastFinishedPulling="2025-07-12 00:07:43.503922853 +0000 UTC m=+25.307136872" observedRunningTime="2025-07-12 00:07:44.421621731 +0000 UTC m=+26.224835750" watchObservedRunningTime="2025-07-12 00:07:44.421971814 +0000 UTC m=+26.225185833"
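The latency line is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A check that re-derives both from the timestamps as printed:

// slo-check: re-derive podStartE2EDuration and podStartSLOduration from
// the pod_startup_latency_tracker line above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-12 00:07:40 +0000 UTC")
	firstPull := mustParse("2025-07-12 00:07:41.519851061 +0000 UTC")
	lastPull := mustParse("2025-07-12 00:07:43.503922853 +0000 UTC")
	running := mustParse("2025-07-12 00:07:44.421971814 +0000 UTC")

	e2e := running.Sub(created)
	fmt.Println("podStartE2EDuration:", e2e)                         // 4.421971814s
	fmt.Println("podStartSLOduration:", e2e-lastPull.Sub(firstPull)) // 2.437900022s
}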
Jul 12 00:07:44.442854 kubelet[3108]: E0712 00:07:44.442824 3108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:07:44.443169 kubelet[3108]: W0712 00:07:44.443016 3108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:07:44.443169 kubelet[3108]: E0712 00:07:44.443044 3108 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet FlexVolume messages above repeat 32 more times between 00:07:44.443377 and 00:07:44.464123]
Jul 12 00:07:44.877405 containerd[1693]: time="2025-07-12T00:07:44.877237791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:44.885896 containerd[1693]: time="2025-07-12T00:07:44.885723295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981"
Jul 12 00:07:44.890663 containerd[1693]: time="2025-07-12T00:07:44.890406651Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:44.898385 containerd[1693]: time="2025-07-12T00:07:44.898350232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:44.899025 containerd[1693]: time="2025-07-12T00:07:44.898979076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.393729573s"
Jul 12 00:07:44.899025 containerd[1693]: time="2025-07-12T00:07:44.899014117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\""
Jul 12 00:07:44.903569 containerd[1693]: time="2025-07-12T00:07:44.903537631Z" level=info msg="CreateContainer within sandbox \"07d0e3c90c5bf6429b3538a1b52728eb8ea24525bb6337c5a0988e805801cdaf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 12 00:07:44.942690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3226968703.mount: Deactivated successfully.
Jul 12 00:07:44.964603 containerd[1693]: time="2025-07-12T00:07:44.964555217Z" level=info msg="CreateContainer within sandbox \"07d0e3c90c5bf6429b3538a1b52728eb8ea24525bb6337c5a0988e805801cdaf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"48c2269b67de7cbf270683bbaf8e8af1683eb44f099ed670d6606779cfa5d26c\""
Jul 12 00:07:44.965331 containerd[1693]: time="2025-07-12T00:07:44.965304183Z" level=info msg="StartContainer for \"48c2269b67de7cbf270683bbaf8e8af1683eb44f099ed670d6606779cfa5d26c\""
Jul 12 00:07:44.993437 systemd[1]: run-containerd-runc-k8s.io-48c2269b67de7cbf270683bbaf8e8af1683eb44f099ed670d6606779cfa5d26c-runc.L3Ewfc.mount: Deactivated successfully.
Jul 12 00:07:45.001403 systemd[1]: Started cri-containerd-48c2269b67de7cbf270683bbaf8e8af1683eb44f099ed670d6606779cfa5d26c.scope - libcontainer container 48c2269b67de7cbf270683bbaf8e8af1683eb44f099ed670d6606779cfa5d26c.
Jul 12 00:07:45.046863 containerd[1693]: time="2025-07-12T00:07:45.046444682Z" level=info msg="StartContainer for \"48c2269b67de7cbf270683bbaf8e8af1683eb44f099ed670d6606779cfa5d26c\" returns successfully"
Jul 12 00:07:45.060672 systemd[1]: cri-containerd-48c2269b67de7cbf270683bbaf8e8af1683eb44f099ed670d6606779cfa5d26c.scope: Deactivated successfully.
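The flexvol-driver init container started here is what ends the earlier probe storm: Calico's pod2daemon-flexvol image installs the uds binary into the kubelet's FlexVolume plugin directory, after which the dynamic probe succeeds (note the scope deactivates right after StartContainer returns, as expected for an init container that runs to completion). A hypothetical check for the exact path named in the errors:

// driver-path-check: probe for the binary the kubelet was missing;
// pod2daemon-flexvol installs it into this plugin directory.
package main

import (
	"fmt"
	"os"
)

func main() {
	const p = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	fi, err := os.Stat(p)
	if err != nil {
		fmt.Println("driver not installed yet:", err)
		return
	}
	fmt.Printf("driver present: mode %v, %d bytes\n", fi.Mode(), fi.Size())
}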
Jul 12 00:07:45.305637 kubelet[3108]: E0712 00:07:45.305327 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9q7pd" podUID="058503a3-83aa-47a4-b834-2e39d5989b2c"
Jul 12 00:07:45.407507 kubelet[3108]: I0712 00:07:45.406886 3108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 12 00:07:45.939673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48c2269b67de7cbf270683bbaf8e8af1683eb44f099ed670d6606779cfa5d26c-rootfs.mount: Deactivated successfully.
Jul 12 00:07:46.070774 containerd[1693]: time="2025-07-12T00:07:46.070595618Z" level=info msg="shim disconnected" id=48c2269b67de7cbf270683bbaf8e8af1683eb44f099ed670d6606779cfa5d26c namespace=k8s.io
Jul 12 00:07:46.070774 containerd[1693]: time="2025-07-12T00:07:46.070662618Z" level=warning msg="cleaning up after shim disconnected" id=48c2269b67de7cbf270683bbaf8e8af1683eb44f099ed670d6606779cfa5d26c namespace=k8s.io
Jul 12 00:07:46.070774 containerd[1693]: time="2025-07-12T00:07:46.070671418Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:07:46.411740 containerd[1693]: time="2025-07-12T00:07:46.411478379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 12 00:07:47.305590 kubelet[3108]: E0712 00:07:47.305478 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9q7pd" podUID="058503a3-83aa-47a4-b834-2e39d5989b2c"
Jul 12 00:07:49.306332 kubelet[3108]: E0712 00:07:49.305984 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9q7pd" podUID="058503a3-83aa-47a4-b834-2e39d5989b2c"
Jul 12 00:07:49.824565 containerd[1693]: time="2025-07-12T00:07:49.824508066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:49.829060 containerd[1693]: time="2025-07-12T00:07:49.828883699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320"
Jul 12 00:07:49.834554 containerd[1693]: time="2025-07-12T00:07:49.834516902Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:49.840359 containerd[1693]: time="2025-07-12T00:07:49.840286066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:07:49.841156 containerd[1693]: time="2025-07-12T00:07:49.841033592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.429400932s"
Jul 12 00:07:49.841156 containerd[1693]: time="2025-07-12T00:07:49.841071952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\""
Jul 12 00:07:49.845890 containerd[1693]: time="2025-07-12T00:07:49.845858349Z" level=info msg="CreateContainer within sandbox \"07d0e3c90c5bf6429b3538a1b52728eb8ea24525bb6337c5a0988e805801cdaf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 12 00:07:49.907393 containerd[1693]: time="2025-07-12T00:07:49.907345178Z" level=info msg="CreateContainer within sandbox \"07d0e3c90c5bf6429b3538a1b52728eb8ea24525bb6337c5a0988e805801cdaf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0c213b0ea1c7e654a7e64e6bc320643f9f5f030a1085c9cbb2eac9cbb1119eb2\""
Jul 12 00:07:49.908577 containerd[1693]: time="2025-07-12T00:07:49.908263065Z" level=info msg="StartContainer for \"0c213b0ea1c7e654a7e64e6bc320643f9f5f030a1085c9cbb2eac9cbb1119eb2\""
Jul 12 00:07:49.945412 systemd[1]: Started cri-containerd-0c213b0ea1c7e654a7e64e6bc320643f9f5f030a1085c9cbb2eac9cbb1119eb2.scope - libcontainer container 0c213b0ea1c7e654a7e64e6bc320643f9f5f030a1085c9cbb2eac9cbb1119eb2.
Jul 12 00:07:49.978339 containerd[1693]: time="2025-07-12T00:07:49.978274599Z" level=info msg="StartContainer for \"0c213b0ea1c7e654a7e64e6bc320643f9f5f030a1085c9cbb2eac9cbb1119eb2\" returns successfully"
Jul 12 00:07:51.168349 containerd[1693]: time="2025-07-12T00:07:51.168288161Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:07:51.170854 systemd[1]: cri-containerd-0c213b0ea1c7e654a7e64e6bc320643f9f5f030a1085c9cbb2eac9cbb1119eb2.scope: Deactivated successfully.
Jul 12 00:07:51.195885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c213b0ea1c7e654a7e64e6bc320643f9f5f030a1085c9cbb2eac9cbb1119eb2-rootfs.mount: Deactivated successfully.
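The reload error fires because containerd watches /etc/cni/net.d and re-scans on every write; at this point install-cni had written calico-kubeconfig but not yet a loadable network config (Calico typically writes its conflist last). A sketch of the same discovery step using the CNI project's libcni; the directory and extension list are the conventional ones, not values read from this system:

// cni-scan: sketch of the config discovery behind
// "no network config found in /etc/cni/net.d".
package main

import (
	"fmt"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conflist"})
	if err != nil || len(files) == 0 {
		fmt.Println("no network config found in /etc/cni/net.d:", err)
		return
	}
	for _, f := range files {
		list, err := libcni.ConfListFromFile(f)
		if err != nil {
			fmt.Println(f, "present but not loadable:", err)
			continue
		}
		fmt.Println(f, "->", list.Name)
	}
}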
Jul 12 00:07:51.227249 kubelet[3108]: I0712 00:07:51.226935 3108 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 12 00:07:51.603893 kubelet[3108]: W0712 00:07:51.291194 3108 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4081.3.4-n-ddca76aad7" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.4-n-ddca76aad7' and this object
Jul 12 00:07:51.603893 kubelet[3108]: E0712 00:07:51.291259 3108 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4081.3.4-n-ddca76aad7\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.4-n-ddca76aad7' and this object" logger="UnhandledError"
Jul 12 00:07:51.603893 kubelet[3108]: W0712 00:07:51.292441 3108 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4081.3.4-n-ddca76aad7" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.4-n-ddca76aad7' and this object
Jul 12 00:07:51.603893 kubelet[3108]: E0712 00:07:51.292467 3108 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4081.3.4-n-ddca76aad7\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.4-n-ddca76aad7' and this object" logger="UnhandledError"
Jul 12 00:07:51.603893 kubelet[3108]: W0712 00:07:51.292500 3108 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4081.3.4-n-ddca76aad7" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.4-n-ddca76aad7' and this object
Jul 12 00:07:51.276314 systemd[1]: Created slice kubepods-burstable-podaf5f3704_be95_48d7_b031_1bcc62a0d210.slice - libcontainer container kubepods-burstable-podaf5f3704_be95_48d7_b031_1bcc62a0d210.slice.
Jul 12 00:07:51.604265 kubelet[3108]: E0712 00:07:51.292512 3108 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.4-n-ddca76aad7\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.4-n-ddca76aad7' and this object" logger="UnhandledError"
Jul 12 00:07:51.604265 kubelet[3108]: I0712 00:07:51.307448 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44b464a5-2b46-4c57-9fe7-32dfead6264d-config\") pod \"goldmane-768f4c5c69-8v7xh\" (UID: \"44b464a5-2b46-4c57-9fe7-32dfead6264d\") " pod="calico-system/goldmane-768f4c5c69-8v7xh"
Jul 12 00:07:51.604265 kubelet[3108]: I0712 00:07:51.307484 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d45aa45-e8a2-4c24-b1b6-5285c8ed5896-config-volume\") pod \"coredns-668d6bf9bc-nbw2t\" (UID: \"8d45aa45-e8a2-4c24-b1b6-5285c8ed5896\") " pod="kube-system/coredns-668d6bf9bc-nbw2t"
Jul 12 00:07:51.604265 kubelet[3108]: I0712 00:07:51.307504 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e11846b7-c405-4d2c-8bcf-444534f1feee-tigera-ca-bundle\") pod \"calico-kube-controllers-6b8d4c7b8b-5jx6w\" (UID: \"e11846b7-c405-4d2c-8bcf-444534f1feee\") " pod="calico-system/calico-kube-controllers-6b8d4c7b8b-5jx6w"
Jul 12 00:07:51.604265 kubelet[3108]: I0712 00:07:51.307525 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b464a5-2b46-4c57-9fe7-32dfead6264d-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-8v7xh\" (UID: \"44b464a5-2b46-4c57-9fe7-32dfead6264d\") " pod="calico-system/goldmane-768f4c5c69-8v7xh"
Jul 12 00:07:51.293716 systemd[1]: Created slice kubepods-besteffort-pod44b464a5_2b46_4c57_9fe7_32dfead6264d.slice - libcontainer container kubepods-besteffort-pod44b464a5_2b46_4c57_9fe7_32dfead6264d.slice.
Jul 12 00:07:51.604471 kubelet[3108]: I0712 00:07:51.307544 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blxgl\" (UniqueName: \"kubernetes.io/projected/e11846b7-c405-4d2c-8bcf-444534f1feee-kube-api-access-blxgl\") pod \"calico-kube-controllers-6b8d4c7b8b-5jx6w\" (UID: \"e11846b7-c405-4d2c-8bcf-444534f1feee\") " pod="calico-system/calico-kube-controllers-6b8d4c7b8b-5jx6w"
Jul 12 00:07:51.604471 kubelet[3108]: I0712 00:07:51.307563 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5f3704-be95-48d7-b031-1bcc62a0d210-config-volume\") pod \"coredns-668d6bf9bc-r7rxs\" (UID: \"af5f3704-be95-48d7-b031-1bcc62a0d210\") " pod="kube-system/coredns-668d6bf9bc-r7rxs"
Jul 12 00:07:51.604471 kubelet[3108]: I0712 00:07:51.307578 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6g2m\" (UniqueName: \"kubernetes.io/projected/af5f3704-be95-48d7-b031-1bcc62a0d210-kube-api-access-m6g2m\") pod \"coredns-668d6bf9bc-r7rxs\" (UID: \"af5f3704-be95-48d7-b031-1bcc62a0d210\") " pod="kube-system/coredns-668d6bf9bc-r7rxs"
Jul 12 00:07:51.604471 kubelet[3108]: I0712 00:07:51.307597 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6z52\" (UniqueName: \"kubernetes.io/projected/8d45aa45-e8a2-4c24-b1b6-5285c8ed5896-kube-api-access-h6z52\") pod \"coredns-668d6bf9bc-nbw2t\" (UID: \"8d45aa45-e8a2-4c24-b1b6-5285c8ed5896\") " pod="kube-system/coredns-668d6bf9bc-nbw2t"
Jul 12 00:07:51.604471 kubelet[3108]: I0712 00:07:51.307614 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbkmf\" (UniqueName: \"kubernetes.io/projected/44b464a5-2b46-4c57-9fe7-32dfead6264d-kube-api-access-pbkmf\") pod \"goldmane-768f4c5c69-8v7xh\" (UID: \"44b464a5-2b46-4c57-9fe7-32dfead6264d\") " pod="calico-system/goldmane-768f4c5c69-8v7xh"
Jul 12 00:07:51.307245 systemd[1]: Created slice kubepods-burstable-pod8d45aa45_e8a2_4c24_b1b6_5285c8ed5896.slice - libcontainer container kubepods-burstable-pod8d45aa45_e8a2_4c24_b1b6_5285c8ed5896.slice.
Jul 12 00:07:51.604694 kubelet[3108]: I0712 00:07:51.307631 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/44b464a5-2b46-4c57-9fe7-32dfead6264d-goldmane-key-pair\") pod \"goldmane-768f4c5c69-8v7xh\" (UID: \"44b464a5-2b46-4c57-9fe7-32dfead6264d\") " pod="calico-system/goldmane-768f4c5c69-8v7xh"
Jul 12 00:07:51.604694 kubelet[3108]: I0712 00:07:51.408666 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05d2799a-07b3-4f97-85d6-84ce3dde480c-whisker-ca-bundle\") pod \"whisker-6fd7dd5445-ztqj8\" (UID: \"05d2799a-07b3-4f97-85d6-84ce3dde480c\") " pod="calico-system/whisker-6fd7dd5445-ztqj8"
Jul 12 00:07:51.604694 kubelet[3108]: I0712 00:07:51.408706 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h96xf\" (UniqueName: \"kubernetes.io/projected/05d2799a-07b3-4f97-85d6-84ce3dde480c-kube-api-access-h96xf\") pod \"whisker-6fd7dd5445-ztqj8\" (UID: \"05d2799a-07b3-4f97-85d6-84ce3dde480c\") " pod="calico-system/whisker-6fd7dd5445-ztqj8"
Jul 12 00:07:51.604694 kubelet[3108]: I0712 00:07:51.408772 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rtm\" (UniqueName: \"kubernetes.io/projected/5154b8c8-3461-45dc-b227-a58fdc2acc43-kube-api-access-v4rtm\") pod \"calico-apiserver-7d46bcb676-fkjjh\" (UID: \"5154b8c8-3461-45dc-b227-a58fdc2acc43\") " pod="calico-apiserver/calico-apiserver-7d46bcb676-fkjjh"
Jul 12 00:07:51.604694 kubelet[3108]: I0712 00:07:51.408789 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhch7\" (UniqueName: \"kubernetes.io/projected/0f4d993b-f232-4951-84c0-c4c4ed832470-kube-api-access-bhch7\") pod \"calico-apiserver-7d46bcb676-g8p92\" (UID: \"0f4d993b-f232-4951-84c0-c4c4ed832470\") " pod="calico-apiserver/calico-apiserver-7d46bcb676-g8p92"
Jul 12 00:07:51.315762 systemd[1]: Created slice kubepods-besteffort-pode11846b7_c405_4d2c_8bcf_444534f1feee.slice - libcontainer container kubepods-besteffort-pode11846b7_c405_4d2c_8bcf_444534f1feee.slice.
Jul 12 00:07:51.604900 kubelet[3108]: I0712 00:07:51.408819 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5154b8c8-3461-45dc-b227-a58fdc2acc43-calico-apiserver-certs\") pod \"calico-apiserver-7d46bcb676-fkjjh\" (UID: \"5154b8c8-3461-45dc-b227-a58fdc2acc43\") " pod="calico-apiserver/calico-apiserver-7d46bcb676-fkjjh"
Jul 12 00:07:51.604900 kubelet[3108]: I0712 00:07:51.408845 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05d2799a-07b3-4f97-85d6-84ce3dde480c-whisker-backend-key-pair\") pod \"whisker-6fd7dd5445-ztqj8\" (UID: \"05d2799a-07b3-4f97-85d6-84ce3dde480c\") " pod="calico-system/whisker-6fd7dd5445-ztqj8"
Jul 12 00:07:51.604900 kubelet[3108]: I0712 00:07:51.408882 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0f4d993b-f232-4951-84c0-c4c4ed832470-calico-apiserver-certs\") pod \"calico-apiserver-7d46bcb676-g8p92\" (UID: \"0f4d993b-f232-4951-84c0-c4c4ed832470\") " pod="calico-apiserver/calico-apiserver-7d46bcb676-g8p92"
Jul 12 00:07:51.324168 systemd[1]: Created slice kubepods-besteffort-pod5154b8c8_3461_45dc_b227_a58fdc2acc43.slice - libcontainer container kubepods-besteffort-pod5154b8c8_3461_45dc_b227_a58fdc2acc43.slice.
Jul 12 00:07:51.333629 systemd[1]: Created slice kubepods-besteffort-pod0f4d993b_f232_4951_84c0_c4c4ed832470.slice - libcontainer container kubepods-besteffort-pod0f4d993b_f232_4951_84c0_c4c4ed832470.slice.
Jul 12 00:07:51.342139 systemd[1]: Created slice kubepods-besteffort-pod05d2799a_07b3_4f97_85d6_84ce3dde480c.slice - libcontainer container kubepods-besteffort-pod05d2799a_07b3_4f97_85d6_84ce3dde480c.slice.
Jul 12 00:07:51.351542 systemd[1]: Created slice kubepods-besteffort-pod058503a3_83aa_47a4_b834_2e39d5989b2c.slice - libcontainer container kubepods-besteffort-pod058503a3_83aa_47a4_b834_2e39d5989b2c.slice.
Jul 12 00:07:51.611254 containerd[1693]: time="2025-07-12T00:07:51.608958564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9q7pd,Uid:058503a3-83aa-47a4-b834-2e39d5989b2c,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:51.906922 containerd[1693]: time="2025-07-12T00:07:51.906812397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r7rxs,Uid:af5f3704-be95-48d7-b031-1bcc62a0d210,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:51.914583 containerd[1693]: time="2025-07-12T00:07:51.913273806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8d4c7b8b-5jx6w,Uid:e11846b7-c405-4d2c-8bcf-444534f1feee,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:51.917160 containerd[1693]: time="2025-07-12T00:07:51.916785913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d46bcb676-fkjjh,Uid:5154b8c8-3461-45dc-b227-a58fdc2acc43,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:07:51.917160 containerd[1693]: time="2025-07-12T00:07:51.917006794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd7dd5445-ztqj8,Uid:05d2799a-07b3-4f97-85d6-84ce3dde480c,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:51.925314 containerd[1693]: time="2025-07-12T00:07:51.925278978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d46bcb676-g8p92,Uid:0f4d993b-f232-4951-84c0-c4c4ed832470,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:07:51.930515 containerd[1693]: time="2025-07-12T00:07:51.930299576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nbw2t,Uid:8d45aa45-e8a2-4c24-b1b6-5285c8ed5896,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:52.061259 containerd[1693]: time="2025-07-12T00:07:52.061179455Z" level=info msg="shim disconnected" id=0c213b0ea1c7e654a7e64e6bc320643f9f5f030a1085c9cbb2eac9cbb1119eb2 namespace=k8s.io Jul 12 00:07:52.061259 containerd[1693]: time="2025-07-12T00:07:52.061250895Z" level=warning msg="cleaning up after shim disconnected" id=0c213b0ea1c7e654a7e64e6bc320643f9f5f030a1085c9cbb2eac9cbb1119eb2 namespace=k8s.io Jul 12 00:07:52.061259 containerd[1693]: time="2025-07-12T00:07:52.061260375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:07:52.369882 containerd[1693]: time="2025-07-12T00:07:52.369826610Z" level=error msg="Failed to destroy network for sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.371235 containerd[1693]: time="2025-07-12T00:07:52.370567176Z" level=error msg="encountered an error cleaning up failed sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.371235 containerd[1693]: time="2025-07-12T00:07:52.370637456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9q7pd,Uid:058503a3-83aa-47a4-b834-2e39d5989b2c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.371379 kubelet[3108]: E0712 00:07:52.370864 3108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.371379 kubelet[3108]: E0712 00:07:52.370943 3108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9q7pd" Jul 12 00:07:52.371379 kubelet[3108]: E0712 00:07:52.370963 3108 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9q7pd" Jul 12 00:07:52.371937 kubelet[3108]: E0712 00:07:52.371003 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9q7pd_calico-system(058503a3-83aa-47a4-b834-2e39d5989b2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9q7pd_calico-system(058503a3-83aa-47a4-b834-2e39d5989b2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9q7pd" podUID="058503a3-83aa-47a4-b834-2e39d5989b2c" Jul 12 00:07:52.409766 kubelet[3108]: E0712 00:07:52.409445 3108 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jul 12 00:07:52.409766 kubelet[3108]: E0712 00:07:52.409545 3108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44b464a5-2b46-4c57-9fe7-32dfead6264d-config podName:44b464a5-2b46-4c57-9fe7-32dfead6264d nodeName:}" failed. No retries permitted until 2025-07-12 00:07:52.909523753 +0000 UTC m=+34.712737772 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/44b464a5-2b46-4c57-9fe7-32dfead6264d-config") pod "goldmane-768f4c5c69-8v7xh" (UID: "44b464a5-2b46-4c57-9fe7-32dfead6264d") : failed to sync configmap cache: timed out waiting for the condition Jul 12 00:07:52.411068 kubelet[3108]: E0712 00:07:52.410858 3108 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 12 00:07:52.411068 kubelet[3108]: E0712 00:07:52.410916 3108 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 12 00:07:52.411068 kubelet[3108]: E0712 00:07:52.410961 3108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44b464a5-2b46-4c57-9fe7-32dfead6264d-goldmane-ca-bundle podName:44b464a5-2b46-4c57-9fe7-32dfead6264d nodeName:}" failed. No retries permitted until 2025-07-12 00:07:52.910945724 +0000 UTC m=+34.714159743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/44b464a5-2b46-4c57-9fe7-32dfead6264d-goldmane-ca-bundle") pod "goldmane-768f4c5c69-8v7xh" (UID: "44b464a5-2b46-4c57-9fe7-32dfead6264d") : failed to sync configmap cache: timed out waiting for the condition Jul 12 00:07:52.411515 kubelet[3108]: E0712 00:07:52.411330 3108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44b464a5-2b46-4c57-9fe7-32dfead6264d-goldmane-key-pair podName:44b464a5-2b46-4c57-9fe7-32dfead6264d nodeName:}" failed. No retries permitted until 2025-07-12 00:07:52.911315167 +0000 UTC m=+34.714529186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/44b464a5-2b46-4c57-9fe7-32dfead6264d-goldmane-key-pair") pod "goldmane-768f4c5c69-8v7xh" (UID: "44b464a5-2b46-4c57-9fe7-32dfead6264d") : failed to sync secret cache: timed out waiting for the condition Jul 12 00:07:52.433420 kubelet[3108]: I0712 00:07:52.433096 3108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:07:52.437779 containerd[1693]: time="2025-07-12T00:07:52.437740048Z" level=info msg="StopPodSandbox for \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\"" Jul 12 00:07:52.438250 containerd[1693]: time="2025-07-12T00:07:52.438084811Z" level=info msg="Ensure that sandbox 0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291 in task-service has been cleanup successfully" Jul 12 00:07:52.451459 containerd[1693]: time="2025-07-12T00:07:52.451410713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:07:52.475770 containerd[1693]: time="2025-07-12T00:07:52.475715138Z" level=error msg="Failed to destroy network for sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.476854 containerd[1693]: time="2025-07-12T00:07:52.476744746Z" level=error msg="encountered an error cleaning up failed sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.477357 containerd[1693]: time="2025-07-12T00:07:52.476987748Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r7rxs,Uid:af5f3704-be95-48d7-b031-1bcc62a0d210,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.478181 kubelet[3108]: E0712 00:07:52.477540 3108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.478181 kubelet[3108]: E0712 00:07:52.477590 3108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-r7rxs" Jul 12 00:07:52.478181 kubelet[3108]: E0712 00:07:52.477609 3108 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-r7rxs" Jul 12 00:07:52.478376 kubelet[3108]: E0712 00:07:52.477644 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r7rxs_kube-system(af5f3704-be95-48d7-b031-1bcc62a0d210)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-r7rxs_kube-system(af5f3704-be95-48d7-b031-1bcc62a0d210)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-r7rxs" podUID="af5f3704-be95-48d7-b031-1bcc62a0d210" Jul 12 00:07:52.551529 containerd[1693]: time="2025-07-12T00:07:52.551417516Z" level=error msg="StopPodSandbox for \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\" failed" error="failed to destroy network for sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.552237 kubelet[3108]: E0712 00:07:52.552027 3108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:07:52.552237 kubelet[3108]: E0712 00:07:52.552149 3108 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291"} Jul 12 00:07:52.552556 kubelet[3108]: E0712 00:07:52.552201 3108 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"058503a3-83aa-47a4-b834-2e39d5989b2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:52.552556 kubelet[3108]: E0712 00:07:52.552333 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"058503a3-83aa-47a4-b834-2e39d5989b2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9q7pd" podUID="058503a3-83aa-47a4-b834-2e39d5989b2c" Jul 12 00:07:52.561604 containerd[1693]: time="2025-07-12T00:07:52.561437872Z" level=error msg="Failed to destroy network for sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.562081 containerd[1693]: time="2025-07-12T00:07:52.561985397Z" level=error msg="encountered an error cleaning up failed sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.562081 containerd[1693]: time="2025-07-12T00:07:52.562039637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8d4c7b8b-5jx6w,Uid:e11846b7-c405-4d2c-8bcf-444534f1feee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.563850 kubelet[3108]: E0712 00:07:52.563436 3108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.563850 kubelet[3108]: E0712 00:07:52.563506 3108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b8d4c7b8b-5jx6w" Jul 12 00:07:52.563850 kubelet[3108]: E0712 00:07:52.563536 3108 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b8d4c7b8b-5jx6w" Jul 12 00:07:52.564036 kubelet[3108]: E0712 00:07:52.563575 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b8d4c7b8b-5jx6w_calico-system(e11846b7-c405-4d2c-8bcf-444534f1feee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b8d4c7b8b-5jx6w_calico-system(e11846b7-c405-4d2c-8bcf-444534f1feee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b8d4c7b8b-5jx6w" podUID="e11846b7-c405-4d2c-8bcf-444534f1feee" Jul 12 00:07:52.577508 containerd[1693]: time="2025-07-12T00:07:52.577451746Z" level=error msg="Failed to destroy network for sandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.577852 containerd[1693]: time="2025-07-12T00:07:52.577804869Z" level=error msg="encountered an error cleaning up failed sandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.577907 containerd[1693]: time="2025-07-12T00:07:52.577873189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d46bcb676-fkjjh,Uid:5154b8c8-3461-45dc-b227-a58fdc2acc43,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.578517 kubelet[3108]: E0712 00:07:52.578104 3108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.578517 kubelet[3108]: E0712 00:07:52.578168 3108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d46bcb676-fkjjh" Jul 12 00:07:52.578517 kubelet[3108]: E0712 00:07:52.578191 3108 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d46bcb676-fkjjh" Jul 12 00:07:52.578698 kubelet[3108]: E0712 00:07:52.578289 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d46bcb676-fkjjh_calico-apiserver(5154b8c8-3461-45dc-b227-a58fdc2acc43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d46bcb676-fkjjh_calico-apiserver(5154b8c8-3461-45dc-b227-a58fdc2acc43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d46bcb676-fkjjh" podUID="5154b8c8-3461-45dc-b227-a58fdc2acc43" Jul 12 00:07:52.591869 containerd[1693]: time="2025-07-12T00:07:52.591816964Z" level=error msg="Failed to destroy network for sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.592352 containerd[1693]: time="2025-07-12T00:07:52.592314768Z" level=error msg="encountered an error cleaning up failed sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.592510 containerd[1693]: time="2025-07-12T00:07:52.592448689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nbw2t,Uid:8d45aa45-e8a2-4c24-b1b6-5285c8ed5896,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.593066 kubelet[3108]: E0712 00:07:52.592734 3108 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.593066 kubelet[3108]: E0712 00:07:52.592786 3108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nbw2t" Jul 12 00:07:52.593066 kubelet[3108]: E0712 00:07:52.592806 3108 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nbw2t" Jul 12 00:07:52.593271 kubelet[3108]: E0712 00:07:52.592844 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nbw2t_kube-system(8d45aa45-e8a2-4c24-b1b6-5285c8ed5896)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nbw2t_kube-system(8d45aa45-e8a2-4c24-b1b6-5285c8ed5896)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nbw2t" podUID="8d45aa45-e8a2-4c24-b1b6-5285c8ed5896" Jul 12 00:07:52.593719 containerd[1693]: time="2025-07-12T00:07:52.593675297Z" level=error msg="Failed to destroy network for sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.594017 containerd[1693]: time="2025-07-12T00:07:52.593982339Z" level=error msg="encountered an error cleaning up failed sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.594066 containerd[1693]: time="2025-07-12T00:07:52.594045580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd7dd5445-ztqj8,Uid:05d2799a-07b3-4f97-85d6-84ce3dde480c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.594421 
kubelet[3108]: E0712 00:07:52.594356 3108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.594602 kubelet[3108]: E0712 00:07:52.594510 3108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fd7dd5445-ztqj8" Jul 12 00:07:52.594602 kubelet[3108]: E0712 00:07:52.594539 3108 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fd7dd5445-ztqj8" Jul 12 00:07:52.594878 kubelet[3108]: E0712 00:07:52.594747 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6fd7dd5445-ztqj8_calico-system(05d2799a-07b3-4f97-85d6-84ce3dde480c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6fd7dd5445-ztqj8_calico-system(05d2799a-07b3-4f97-85d6-84ce3dde480c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fd7dd5445-ztqj8" podUID="05d2799a-07b3-4f97-85d6-84ce3dde480c" Jul 12 00:07:52.596877 containerd[1693]: time="2025-07-12T00:07:52.596835599Z" level=error msg="Failed to destroy network for sandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.597200 containerd[1693]: time="2025-07-12T00:07:52.597169121Z" level=error msg="encountered an error cleaning up failed sandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.597279 containerd[1693]: time="2025-07-12T00:07:52.597235961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d46bcb676-g8p92,Uid:0f4d993b-f232-4951-84c0-c4c4ed832470,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jul 12 00:07:52.597534 kubelet[3108]: E0712 00:07:52.597493 3108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.597598 kubelet[3108]: E0712 00:07:52.597552 3108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d46bcb676-g8p92" Jul 12 00:07:52.597598 kubelet[3108]: E0712 00:07:52.597570 3108 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d46bcb676-g8p92" Jul 12 00:07:52.597706 kubelet[3108]: E0712 00:07:52.597664 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d46bcb676-g8p92_calico-apiserver(0f4d993b-f232-4951-84c0-c4c4ed832470)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d46bcb676-g8p92_calico-apiserver(0f4d993b-f232-4951-84c0-c4c4ed832470)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d46bcb676-g8p92" podUID="0f4d993b-f232-4951-84c0-c4c4ed832470" Jul 12 00:07:53.112894 containerd[1693]: time="2025-07-12T00:07:53.112843162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8v7xh,Uid:44b464a5-2b46-4c57-9fe7-32dfead6264d,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:53.236061 containerd[1693]: time="2025-07-12T00:07:53.235882242Z" level=error msg="Failed to destroy network for sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.236660 containerd[1693]: time="2025-07-12T00:07:53.236512886Z" level=error msg="encountered an error cleaning up failed sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.236660 containerd[1693]: time="2025-07-12T00:07:53.236563646Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-8v7xh,Uid:44b464a5-2b46-4c57-9fe7-32dfead6264d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.237392 kubelet[3108]: E0712 00:07:53.236906 3108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.237392 kubelet[3108]: E0712 00:07:53.236966 3108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8v7xh" Jul 12 00:07:53.237392 kubelet[3108]: E0712 00:07:53.236986 3108 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8v7xh" Jul 12 00:07:53.237531 kubelet[3108]: E0712 00:07:53.237024 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-8v7xh_calico-system(44b464a5-2b46-4c57-9fe7-32dfead6264d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-8v7xh_calico-system(44b464a5-2b46-4c57-9fe7-32dfead6264d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-8v7xh" podUID="44b464a5-2b46-4c57-9fe7-32dfead6264d" Jul 12 00:07:53.254554 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284-shm.mount: Deactivated successfully. Jul 12 00:07:53.254642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302-shm.mount: Deactivated successfully. Jul 12 00:07:53.254710 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291-shm.mount: Deactivated successfully. 
Jul 12 00:07:53.448134 kubelet[3108]: I0712 00:07:53.448002 3108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:07:53.451098 containerd[1693]: time="2025-07-12T00:07:53.449957063Z" level=info msg="StopPodSandbox for \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\"" Jul 12 00:07:53.451098 containerd[1693]: time="2025-07-12T00:07:53.450132224Z" level=info msg="Ensure that sandbox cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294 in task-service has been cleanup successfully" Jul 12 00:07:53.451416 kubelet[3108]: I0712 00:07:53.450146 3108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:07:53.453970 containerd[1693]: time="2025-07-12T00:07:53.453631128Z" level=info msg="StopPodSandbox for \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\"" Jul 12 00:07:53.453970 containerd[1693]: time="2025-07-12T00:07:53.453791689Z" level=info msg="Ensure that sandbox a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284 in task-service has been cleanup successfully" Jul 12 00:07:53.454960 kubelet[3108]: I0712 00:07:53.454669 3108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:07:53.455325 containerd[1693]: time="2025-07-12T00:07:53.455293340Z" level=info msg="StopPodSandbox for \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\"" Jul 12 00:07:53.455685 containerd[1693]: time="2025-07-12T00:07:53.455664422Z" level=info msg="Ensure that sandbox 9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725 in task-service has been cleanup successfully" Jul 12 00:07:53.458993 kubelet[3108]: I0712 00:07:53.458968 3108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:07:53.459441 containerd[1693]: time="2025-07-12T00:07:53.459406368Z" level=info msg="StopPodSandbox for \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\"" Jul 12 00:07:53.459823 containerd[1693]: time="2025-07-12T00:07:53.459660129Z" level=info msg="Ensure that sandbox 4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302 in task-service has been cleanup successfully" Jul 12 00:07:53.463497 kubelet[3108]: I0712 00:07:53.463463 3108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:07:53.464376 containerd[1693]: time="2025-07-12T00:07:53.463957559Z" level=info msg="StopPodSandbox for \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\"" Jul 12 00:07:53.465554 containerd[1693]: time="2025-07-12T00:07:53.465526089Z" level=info msg="Ensure that sandbox a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b in task-service has been cleanup successfully" Jul 12 00:07:53.472268 kubelet[3108]: I0712 00:07:53.472248 3108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:07:53.473802 containerd[1693]: time="2025-07-12T00:07:53.473497104Z" level=info msg="StopPodSandbox for \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\"" Jul 12 00:07:53.473802 
containerd[1693]: time="2025-07-12T00:07:53.473714465Z" level=info msg="Ensure that sandbox 316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b in task-service has been cleanup successfully" Jul 12 00:07:53.479193 kubelet[3108]: I0712 00:07:53.479134 3108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:07:53.480978 containerd[1693]: time="2025-07-12T00:07:53.480890074Z" level=info msg="StopPodSandbox for \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\"" Jul 12 00:07:53.481632 containerd[1693]: time="2025-07-12T00:07:53.481604199Z" level=info msg="Ensure that sandbox ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f in task-service has been cleanup successfully" Jul 12 00:07:53.553892 containerd[1693]: time="2025-07-12T00:07:53.553784412Z" level=error msg="StopPodSandbox for \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\" failed" error="failed to destroy network for sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.554231 kubelet[3108]: E0712 00:07:53.553998 3108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:07:53.554231 kubelet[3108]: E0712 00:07:53.554045 3108 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284"} Jul 12 00:07:53.554231 kubelet[3108]: E0712 00:07:53.554079 3108 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e11846b7-c405-4d2c-8bcf-444534f1feee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.554231 kubelet[3108]: E0712 00:07:53.554104 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e11846b7-c405-4d2c-8bcf-444534f1feee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b8d4c7b8b-5jx6w" podUID="e11846b7-c405-4d2c-8bcf-444534f1feee" Jul 12 00:07:53.555709 containerd[1693]: time="2025-07-12T00:07:53.555402743Z" level=error msg="StopPodSandbox for \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\" failed" error="failed to destroy network for sandbox 
\"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.555897 kubelet[3108]: E0712 00:07:53.555581 3108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:07:53.555897 kubelet[3108]: E0712 00:07:53.555617 3108 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b"} Jul 12 00:07:53.555897 kubelet[3108]: E0712 00:07:53.555645 3108 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f4d993b-f232-4951-84c0-c4c4ed832470\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.555897 kubelet[3108]: E0712 00:07:53.555666 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0f4d993b-f232-4951-84c0-c4c4ed832470\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d46bcb676-g8p92" podUID="0f4d993b-f232-4951-84c0-c4c4ed832470" Jul 12 00:07:53.559463 containerd[1693]: time="2025-07-12T00:07:53.559346690Z" level=error msg="StopPodSandbox for \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\" failed" error="failed to destroy network for sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.561788 containerd[1693]: time="2025-07-12T00:07:53.561506425Z" level=error msg="StopPodSandbox for \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\" failed" error="failed to destroy network for sandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.561858 kubelet[3108]: E0712 00:07:53.561673 3108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:07:53.561858 kubelet[3108]: E0712 00:07:53.561705 3108 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f"} Jul 12 00:07:53.561858 kubelet[3108]: E0712 00:07:53.561730 3108 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5154b8c8-3461-45dc-b227-a58fdc2acc43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.561858 kubelet[3108]: E0712 00:07:53.561761 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5154b8c8-3461-45dc-b227-a58fdc2acc43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d46bcb676-fkjjh" podUID="5154b8c8-3461-45dc-b227-a58fdc2acc43" Jul 12 00:07:53.562590 kubelet[3108]: E0712 00:07:53.562119 3108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:07:53.562590 kubelet[3108]: E0712 00:07:53.562280 3108 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294"} Jul 12 00:07:53.562590 kubelet[3108]: E0712 00:07:53.562304 3108 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"44b464a5-2b46-4c57-9fe7-32dfead6264d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.562590 kubelet[3108]: E0712 00:07:53.562322 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"44b464a5-2b46-4c57-9fe7-32dfead6264d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-8v7xh" 
podUID="44b464a5-2b46-4c57-9fe7-32dfead6264d" Jul 12 00:07:53.562757 containerd[1693]: time="2025-07-12T00:07:53.562282430Z" level=error msg="StopPodSandbox for \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\" failed" error="failed to destroy network for sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.563029 kubelet[3108]: E0712 00:07:53.562849 3108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:07:53.563029 kubelet[3108]: E0712 00:07:53.562890 3108 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725"} Jul 12 00:07:53.563029 kubelet[3108]: E0712 00:07:53.562948 3108 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d45aa45-e8a2-4c24-b1b6-5285c8ed5896\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.563029 kubelet[3108]: E0712 00:07:53.562968 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d45aa45-e8a2-4c24-b1b6-5285c8ed5896\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nbw2t" podUID="8d45aa45-e8a2-4c24-b1b6-5285c8ed5896" Jul 12 00:07:53.564970 containerd[1693]: time="2025-07-12T00:07:53.564817887Z" level=error msg="StopPodSandbox for \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\" failed" error="failed to destroy network for sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.565049 kubelet[3108]: E0712 00:07:53.564979 3108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:07:53.565049 kubelet[3108]: E0712 
00:07:53.565019 3108 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302"} Jul 12 00:07:53.565106 kubelet[3108]: E0712 00:07:53.565042 3108 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af5f3704-be95-48d7-b031-1bcc62a0d210\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.565106 kubelet[3108]: E0712 00:07:53.565082 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af5f3704-be95-48d7-b031-1bcc62a0d210\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-r7rxs" podUID="af5f3704-be95-48d7-b031-1bcc62a0d210" Jul 12 00:07:53.567557 containerd[1693]: time="2025-07-12T00:07:53.567515986Z" level=error msg="StopPodSandbox for \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\" failed" error="failed to destroy network for sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.567850 kubelet[3108]: E0712 00:07:53.567751 3108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:07:53.567850 kubelet[3108]: E0712 00:07:53.567783 3108 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b"} Jul 12 00:07:53.567850 kubelet[3108]: E0712 00:07:53.567808 3108 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"05d2799a-07b3-4f97-85d6-84ce3dde480c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.567850 kubelet[3108]: E0712 00:07:53.567824 3108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"05d2799a-07b3-4f97-85d6-84ce3dde480c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fd7dd5445-ztqj8" podUID="05d2799a-07b3-4f97-85d6-84ce3dde480c" Jul 12 00:07:59.000199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189676039.mount: Deactivated successfully. Jul 12 00:07:59.076586 containerd[1693]: time="2025-07-12T00:07:59.076528347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:59.086184 containerd[1693]: time="2025-07-12T00:07:59.086148301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:07:59.091289 containerd[1693]: time="2025-07-12T00:07:59.091192820Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:59.112869 containerd[1693]: time="2025-07-12T00:07:59.112809066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:59.113539 containerd[1693]: time="2025-07-12T00:07:59.113500871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.661888477s" Jul 12 00:07:59.113539 containerd[1693]: time="2025-07-12T00:07:59.113537512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:07:59.124324 containerd[1693]: time="2025-07-12T00:07:59.123699950Z" level=info msg="CreateContainer within sandbox \"07d0e3c90c5bf6429b3538a1b52728eb8ea24525bb6337c5a0988e805801cdaf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:07:59.257447 containerd[1693]: time="2025-07-12T00:07:59.257329136Z" level=info msg="CreateContainer within sandbox \"07d0e3c90c5bf6429b3538a1b52728eb8ea24525bb6337c5a0988e805801cdaf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"714baab223a61c49922ae6ea8a9cfbcf9593baaddfedb53ea60dda6de2f54902\"" Jul 12 00:07:59.259101 containerd[1693]: time="2025-07-12T00:07:59.258813068Z" level=info msg="StartContainer for \"714baab223a61c49922ae6ea8a9cfbcf9593baaddfedb53ea60dda6de2f54902\"" Jul 12 00:07:59.283403 systemd[1]: Started cri-containerd-714baab223a61c49922ae6ea8a9cfbcf9593baaddfedb53ea60dda6de2f54902.scope - libcontainer container 714baab223a61c49922ae6ea8a9cfbcf9593baaddfedb53ea60dda6de2f54902. Jul 12 00:07:59.314441 containerd[1693]: time="2025-07-12T00:07:59.314310494Z" level=info msg="StartContainer for \"714baab223a61c49922ae6ea8a9cfbcf9593baaddfedb53ea60dda6de2f54902\" returns successfully" Jul 12 00:07:59.605066 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:07:59.605239 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 12 00:07:59.750391 kubelet[3108]: I0712 00:07:59.750325 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rs5t6" podStartSLOduration=1.2211486360000001 podStartE2EDuration="18.750305363s" podCreationTimestamp="2025-07-12 00:07:41 +0000 UTC" firstStartedPulling="2025-07-12 00:07:41.585530394 +0000 UTC m=+23.388744413" lastFinishedPulling="2025-07-12 00:07:59.114687121 +0000 UTC m=+40.917901140" observedRunningTime="2025-07-12 00:07:59.525182474 +0000 UTC m=+41.328396493" watchObservedRunningTime="2025-07-12 00:07:59.750305363 +0000 UTC m=+41.553519382" Jul 12 00:07:59.751694 containerd[1693]: time="2025-07-12T00:07:59.751642453Z" level=info msg="StopPodSandbox for \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\"" Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.853 [INFO][4291] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.853 [INFO][4291] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" iface="eth0" netns="/var/run/netns/cni-b93804d0-3cb9-ed08-d627-af8b8d38dfcb" Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.853 [INFO][4291] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" iface="eth0" netns="/var/run/netns/cni-b93804d0-3cb9-ed08-d627-af8b8d38dfcb" Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.854 [INFO][4291] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" iface="eth0" netns="/var/run/netns/cni-b93804d0-3cb9-ed08-d627-af8b8d38dfcb" Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.854 [INFO][4291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.854 [INFO][4291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.887 [INFO][4298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" HandleID="k8s-pod-network.316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--6fd7dd5445--ztqj8-eth0" Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.887 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.887 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.902 [WARNING][4298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" HandleID="k8s-pod-network.316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--6fd7dd5445--ztqj8-eth0" Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.902 [INFO][4298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" HandleID="k8s-pod-network.316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--6fd7dd5445--ztqj8-eth0" Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.903 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:07:59.908725 containerd[1693]: 2025-07-12 00:07:59.907 [INFO][4291] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:07:59.909965 containerd[1693]: time="2025-07-12T00:07:59.908886621Z" level=info msg="TearDown network for sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\" successfully" Jul 12 00:07:59.909965 containerd[1693]: time="2025-07-12T00:07:59.908923061Z" level=info msg="StopPodSandbox for \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\" returns successfully" Jul 12 00:07:59.971372 kubelet[3108]: I0712 00:07:59.971263 3108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05d2799a-07b3-4f97-85d6-84ce3dde480c-whisker-ca-bundle\") pod \"05d2799a-07b3-4f97-85d6-84ce3dde480c\" (UID: \"05d2799a-07b3-4f97-85d6-84ce3dde480c\") " Jul 12 00:07:59.972887 kubelet[3108]: I0712 00:07:59.972579 3108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05d2799a-07b3-4f97-85d6-84ce3dde480c-whisker-backend-key-pair\") pod \"05d2799a-07b3-4f97-85d6-84ce3dde480c\" (UID: \"05d2799a-07b3-4f97-85d6-84ce3dde480c\") " Jul 12 00:07:59.973464 kubelet[3108]: I0712 00:07:59.973367 3108 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h96xf\" (UniqueName: \"kubernetes.io/projected/05d2799a-07b3-4f97-85d6-84ce3dde480c-kube-api-access-h96xf\") pod \"05d2799a-07b3-4f97-85d6-84ce3dde480c\" (UID: \"05d2799a-07b3-4f97-85d6-84ce3dde480c\") " Jul 12 00:07:59.974399 kubelet[3108]: I0712 00:07:59.973283 3108 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05d2799a-07b3-4f97-85d6-84ce3dde480c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "05d2799a-07b3-4f97-85d6-84ce3dde480c" (UID: "05d2799a-07b3-4f97-85d6-84ce3dde480c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:07:59.977244 kubelet[3108]: I0712 00:07:59.977151 3108 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05d2799a-07b3-4f97-85d6-84ce3dde480c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "05d2799a-07b3-4f97-85d6-84ce3dde480c" (UID: "05d2799a-07b3-4f97-85d6-84ce3dde480c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:07:59.977885 kubelet[3108]: I0712 00:07:59.977848 3108 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d2799a-07b3-4f97-85d6-84ce3dde480c-kube-api-access-h96xf" (OuterVolumeSpecName: "kube-api-access-h96xf") pod "05d2799a-07b3-4f97-85d6-84ce3dde480c" (UID: "05d2799a-07b3-4f97-85d6-84ce3dde480c"). InnerVolumeSpecName "kube-api-access-h96xf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:08:00.001912 systemd[1]: run-netns-cni\x2db93804d0\x2d3cb9\x2ded08\x2dd627\x2daf8b8d38dfcb.mount: Deactivated successfully. Jul 12 00:08:00.002020 systemd[1]: var-lib-kubelet-pods-05d2799a\x2d07b3\x2d4f97\x2d85d6\x2d84ce3dde480c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh96xf.mount: Deactivated successfully. Jul 12 00:08:00.002077 systemd[1]: var-lib-kubelet-pods-05d2799a\x2d07b3\x2d4f97\x2d85d6\x2d84ce3dde480c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 00:08:00.074171 kubelet[3108]: I0712 00:08:00.074131 3108 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h96xf\" (UniqueName: \"kubernetes.io/projected/05d2799a-07b3-4f97-85d6-84ce3dde480c-kube-api-access-h96xf\") on node \"ci-4081.3.4-n-ddca76aad7\" DevicePath \"\"" Jul 12 00:08:00.074171 kubelet[3108]: I0712 00:08:00.074167 3108 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05d2799a-07b3-4f97-85d6-84ce3dde480c-whisker-ca-bundle\") on node \"ci-4081.3.4-n-ddca76aad7\" DevicePath \"\"" Jul 12 00:08:00.074171 kubelet[3108]: I0712 00:08:00.074182 3108 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05d2799a-07b3-4f97-85d6-84ce3dde480c-whisker-backend-key-pair\") on node \"ci-4081.3.4-n-ddca76aad7\" DevicePath \"\"" Jul 12 00:08:00.312954 systemd[1]: Removed slice kubepods-besteffort-pod05d2799a_07b3_4f97_85d6_84ce3dde480c.slice - libcontainer container kubepods-besteffort-pod05d2799a_07b3_4f97_85d6_84ce3dde480c.slice. Jul 12 00:08:00.614807 systemd[1]: Created slice kubepods-besteffort-podfe7f1c77_abe8_4ada_a72a_19890c8a1b9b.slice - libcontainer container kubepods-besteffort-podfe7f1c77_abe8_4ada_a72a_19890c8a1b9b.slice. 
Jul 12 00:08:00.678741 kubelet[3108]: I0712 00:08:00.678690 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnwwr\" (UniqueName: \"kubernetes.io/projected/fe7f1c77-abe8-4ada-a72a-19890c8a1b9b-kube-api-access-vnwwr\") pod \"whisker-59866cdff-hwr4f\" (UID: \"fe7f1c77-abe8-4ada-a72a-19890c8a1b9b\") " pod="calico-system/whisker-59866cdff-hwr4f" Jul 12 00:08:00.678878 kubelet[3108]: I0712 00:08:00.678750 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe7f1c77-abe8-4ada-a72a-19890c8a1b9b-whisker-backend-key-pair\") pod \"whisker-59866cdff-hwr4f\" (UID: \"fe7f1c77-abe8-4ada-a72a-19890c8a1b9b\") " pod="calico-system/whisker-59866cdff-hwr4f" Jul 12 00:08:00.678878 kubelet[3108]: I0712 00:08:00.678773 3108 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe7f1c77-abe8-4ada-a72a-19890c8a1b9b-whisker-ca-bundle\") pod \"whisker-59866cdff-hwr4f\" (UID: \"fe7f1c77-abe8-4ada-a72a-19890c8a1b9b\") " pod="calico-system/whisker-59866cdff-hwr4f" Jul 12 00:08:00.920124 containerd[1693]: time="2025-07-12T00:08:00.920016522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59866cdff-hwr4f,Uid:fe7f1c77-abe8-4ada-a72a-19890c8a1b9b,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:01.114759 systemd-networkd[1409]: calia631a87f643: Link UP Jul 12 00:08:01.116923 systemd-networkd[1409]: calia631a87f643: Gained carrier Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.017 [INFO][4340] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.029 [INFO][4340] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0 whisker-59866cdff- calico-system fe7f1c77-abe8-4ada-a72a-19890c8a1b9b 915 0 2025-07-12 00:08:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59866cdff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-n-ddca76aad7 whisker-59866cdff-hwr4f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia631a87f643 [] [] }} ContainerID="d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" Namespace="calico-system" Pod="whisker-59866cdff-hwr4f" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.029 [INFO][4340] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" Namespace="calico-system" Pod="whisker-59866cdff-hwr4f" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.050 [INFO][4352] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" HandleID="k8s-pod-network.d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.050 [INFO][4352] ipam/ipam_plugin.go 265: Auto assigning 
IP ContainerID="d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" HandleID="k8s-pod-network.d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-n-ddca76aad7", "pod":"whisker-59866cdff-hwr4f", "timestamp":"2025-07-12 00:08:01.050351618 +0000 UTC"}, Hostname:"ci-4081.3.4-n-ddca76aad7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.050 [INFO][4352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.050 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.050 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-ddca76aad7' Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.059 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.064 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.069 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.077 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.079 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.079 [INFO][4352] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.082 [INFO][4352] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7 Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.089 [INFO][4352] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.101 [INFO][4352] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.65/26] block=192.168.95.64/26 handle="k8s-pod-network.d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.101 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.65/26] handle="k8s-pod-network.d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.101 [INFO][4352] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:01.139323 containerd[1693]: 2025-07-12 00:08:01.101 [INFO][4352] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.65/26] IPv6=[] ContainerID="d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" HandleID="k8s-pod-network.d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0" Jul 12 00:08:01.139861 containerd[1693]: 2025-07-12 00:08:01.107 [INFO][4340] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" Namespace="calico-system" Pod="whisker-59866cdff-hwr4f" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0", GenerateName:"whisker-59866cdff-", Namespace:"calico-system", SelfLink:"", UID:"fe7f1c77-abe8-4ada-a72a-19890c8a1b9b", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59866cdff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"", Pod:"whisker-59866cdff-hwr4f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia631a87f643", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:01.139861 containerd[1693]: 2025-07-12 00:08:01.107 [INFO][4340] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.65/32] ContainerID="d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" Namespace="calico-system" Pod="whisker-59866cdff-hwr4f" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0" Jul 12 00:08:01.139861 containerd[1693]: 2025-07-12 00:08:01.107 [INFO][4340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia631a87f643 ContainerID="d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" Namespace="calico-system" Pod="whisker-59866cdff-hwr4f" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0" Jul 12 00:08:01.139861 containerd[1693]: 2025-07-12 00:08:01.116 [INFO][4340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" Namespace="calico-system" Pod="whisker-59866cdff-hwr4f" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0" Jul 12 00:08:01.139861 containerd[1693]: 2025-07-12 00:08:01.119 [INFO][4340] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" Namespace="calico-system" Pod="whisker-59866cdff-hwr4f" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0", GenerateName:"whisker-59866cdff-", Namespace:"calico-system", SelfLink:"", UID:"fe7f1c77-abe8-4ada-a72a-19890c8a1b9b", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59866cdff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7", Pod:"whisker-59866cdff-hwr4f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia631a87f643", MAC:"fa:b7:ac:3f:7b:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:01.139861 containerd[1693]: 2025-07-12 00:08:01.133 [INFO][4340] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7" Namespace="calico-system" Pod="whisker-59866cdff-hwr4f" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-whisker--59866cdff--hwr4f-eth0" Jul 12 00:08:01.174398 containerd[1693]: time="2025-07-12T00:08:01.174069304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:01.174398 containerd[1693]: time="2025-07-12T00:08:01.174137345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:01.174398 containerd[1693]: time="2025-07-12T00:08:01.174151785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:01.174398 containerd[1693]: time="2025-07-12T00:08:01.174285906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:01.210679 systemd[1]: Started cri-containerd-d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7.scope - libcontainer container d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7. 
Jul 12 00:08:01.264382 containerd[1693]: time="2025-07-12T00:08:01.264334460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59866cdff-hwr4f,Uid:fe7f1c77-abe8-4ada-a72a-19890c8a1b9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7\"" Jul 12 00:08:01.268379 containerd[1693]: time="2025-07-12T00:08:01.268328130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:08:01.293406 kubelet[3108]: I0712 00:08:01.292947 3108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:01.551236 kernel: bpftool[4529]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 12 00:08:01.834945 systemd-networkd[1409]: vxlan.calico: Link UP Jul 12 00:08:01.834953 systemd-networkd[1409]: vxlan.calico: Gained carrier Jul 12 00:08:02.224341 systemd-networkd[1409]: calia631a87f643: Gained IPv6LL Jul 12 00:08:02.308452 kubelet[3108]: I0712 00:08:02.308412 3108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05d2799a-07b3-4f97-85d6-84ce3dde480c" path="/var/lib/kubelet/pods/05d2799a-07b3-4f97-85d6-84ce3dde480c/volumes" Jul 12 00:08:02.928532 systemd-networkd[1409]: vxlan.calico: Gained IPv6LL Jul 12 00:08:03.075157 containerd[1693]: time="2025-07-12T00:08:03.075101219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:03.079051 containerd[1693]: time="2025-07-12T00:08:03.079009208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 12 00:08:03.089511 containerd[1693]: time="2025-07-12T00:08:03.089439326Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:03.096518 containerd[1693]: time="2025-07-12T00:08:03.096370938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:03.097354 containerd[1693]: time="2025-07-12T00:08:03.097313465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.828932695s" Jul 12 00:08:03.097354 containerd[1693]: time="2025-07-12T00:08:03.097348825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:08:03.111248 containerd[1693]: time="2025-07-12T00:08:03.111176969Z" level=info msg="CreateContainer within sandbox \"d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:08:03.176902 containerd[1693]: time="2025-07-12T00:08:03.176763820Z" level=info msg="CreateContainer within sandbox \"d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d90fa569c99b9f6f585a05ab7d900a21a7d947b4b2e4a40a3f4200e97c9d7be2\"" Jul 12 00:08:03.179256 containerd[1693]: 
time="2025-07-12T00:08:03.178891836Z" level=info msg="StartContainer for \"d90fa569c99b9f6f585a05ab7d900a21a7d947b4b2e4a40a3f4200e97c9d7be2\"" Jul 12 00:08:03.221359 systemd[1]: Started cri-containerd-d90fa569c99b9f6f585a05ab7d900a21a7d947b4b2e4a40a3f4200e97c9d7be2.scope - libcontainer container d90fa569c99b9f6f585a05ab7d900a21a7d947b4b2e4a40a3f4200e97c9d7be2. Jul 12 00:08:03.260563 containerd[1693]: time="2025-07-12T00:08:03.260260485Z" level=info msg="StartContainer for \"d90fa569c99b9f6f585a05ab7d900a21a7d947b4b2e4a40a3f4200e97c9d7be2\" returns successfully" Jul 12 00:08:03.261661 containerd[1693]: time="2025-07-12T00:08:03.261476814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:08:04.307129 containerd[1693]: time="2025-07-12T00:08:04.307083163Z" level=info msg="StopPodSandbox for \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\"" Jul 12 00:08:04.308735 containerd[1693]: time="2025-07-12T00:08:04.308507854Z" level=info msg="StopPodSandbox for \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\"" Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.369 [INFO][4663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.370 [INFO][4663] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" iface="eth0" netns="/var/run/netns/cni-42ea7c10-edab-b254-5edf-0e8fa6481352" Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.370 [INFO][4663] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" iface="eth0" netns="/var/run/netns/cni-42ea7c10-edab-b254-5edf-0e8fa6481352" Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.371 [INFO][4663] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" iface="eth0" netns="/var/run/netns/cni-42ea7c10-edab-b254-5edf-0e8fa6481352" Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.371 [INFO][4663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.371 [INFO][4663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.397 [INFO][4676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" HandleID="k8s-pod-network.a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.398 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.398 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.409 [WARNING][4676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" HandleID="k8s-pod-network.a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.410 [INFO][4676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" HandleID="k8s-pod-network.a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.411 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:04.417838 containerd[1693]: 2025-07-12 00:08:04.415 [INFO][4663] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:04.422656 containerd[1693]: time="2025-07-12T00:08:04.420106530Z" level=info msg="TearDown network for sandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\" successfully" Jul 12 00:08:04.422656 containerd[1693]: time="2025-07-12T00:08:04.420145930Z" level=info msg="StopPodSandbox for \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\" returns successfully" Jul 12 00:08:04.422656 containerd[1693]: time="2025-07-12T00:08:04.422266746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d46bcb676-g8p92,Uid:0f4d993b-f232-4951-84c0-c4c4ed832470,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:08:04.422030 systemd[1]: run-netns-cni\x2d42ea7c10\x2dedab\x2db254\x2d5edf\x2d0e8fa6481352.mount: Deactivated successfully. Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.377 [INFO][4664] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.377 [INFO][4664] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" iface="eth0" netns="/var/run/netns/cni-6671b6a2-265a-9f59-48d4-eee741651872" Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.377 [INFO][4664] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" iface="eth0" netns="/var/run/netns/cni-6671b6a2-265a-9f59-48d4-eee741651872" Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.379 [INFO][4664] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" iface="eth0" netns="/var/run/netns/cni-6671b6a2-265a-9f59-48d4-eee741651872" Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.379 [INFO][4664] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.379 [INFO][4664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.412 [INFO][4682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" HandleID="k8s-pod-network.a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.412 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.412 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.428 [WARNING][4682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" HandleID="k8s-pod-network.a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.428 [INFO][4682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" HandleID="k8s-pod-network.a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.430 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:04.432819 containerd[1693]: 2025-07-12 00:08:04.431 [INFO][4664] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:04.435646 containerd[1693]: time="2025-07-12T00:08:04.434513958Z" level=info msg="TearDown network for sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\" successfully" Jul 12 00:08:04.435646 containerd[1693]: time="2025-07-12T00:08:04.434544718Z" level=info msg="StopPodSandbox for \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\" returns successfully" Jul 12 00:08:04.435646 containerd[1693]: time="2025-07-12T00:08:04.435159442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8d4c7b8b-5jx6w,Uid:e11846b7-c405-4d2c-8bcf-444534f1feee,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:04.436314 systemd[1]: run-netns-cni\x2d6671b6a2\x2d265a\x2d9f59\x2d48d4\x2deee741651872.mount: Deactivated successfully. 
Jul 12 00:08:04.673694 systemd-networkd[1409]: calie2c7917eb20: Link UP Jul 12 00:08:04.677308 systemd-networkd[1409]: calie2c7917eb20: Gained carrier Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.582 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0 calico-apiserver-7d46bcb676- calico-apiserver 0f4d993b-f232-4951-84c0-c4c4ed832470 939 0 2025-07-12 00:07:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d46bcb676 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-n-ddca76aad7 calico-apiserver-7d46bcb676-g8p92 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie2c7917eb20 [] [] }} ContainerID="f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-g8p92" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.582 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-g8p92" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.622 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" HandleID="k8s-pod-network.f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.622 [INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" HandleID="k8s-pod-network.f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-n-ddca76aad7", "pod":"calico-apiserver-7d46bcb676-g8p92", "timestamp":"2025-07-12 00:08:04.622006961 +0000 UTC"}, Hostname:"ci-4081.3.4-n-ddca76aad7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.622 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.622 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.622 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-ddca76aad7' Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.636 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.640 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.644 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.645 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.647 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.647 [INFO][4715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.650 [INFO][4715] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05 Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.655 [INFO][4715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.665 [INFO][4715] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.66/26] block=192.168.95.64/26 handle="k8s-pod-network.f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.665 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.66/26] handle="k8s-pod-network.f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.665 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:08:04.694430 containerd[1693]: 2025-07-12 00:08:04.665 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.66/26] IPv6=[] ContainerID="f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" HandleID="k8s-pod-network.f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:04.694975 containerd[1693]: 2025-07-12 00:08:04.668 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-g8p92" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0", GenerateName:"calico-apiserver-7d46bcb676-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f4d993b-f232-4951-84c0-c4c4ed832470", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d46bcb676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"", Pod:"calico-apiserver-7d46bcb676-g8p92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2c7917eb20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:04.694975 containerd[1693]: 2025-07-12 00:08:04.668 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.66/32] ContainerID="f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-g8p92" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:04.694975 containerd[1693]: 2025-07-12 00:08:04.668 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2c7917eb20 ContainerID="f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-g8p92" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:04.694975 containerd[1693]: 2025-07-12 00:08:04.677 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-g8p92" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:04.694975 containerd[1693]: 2025-07-12 00:08:04.678 [INFO][4690] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-g8p92" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0", GenerateName:"calico-apiserver-7d46bcb676-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f4d993b-f232-4951-84c0-c4c4ed832470", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d46bcb676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05", Pod:"calico-apiserver-7d46bcb676-g8p92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2c7917eb20", MAC:"02:be:bb:b8:95:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:04.694975 containerd[1693]: 2025-07-12 00:08:04.691 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-g8p92" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:04.723254 containerd[1693]: time="2025-07-12T00:08:04.722574954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:04.723254 containerd[1693]: time="2025-07-12T00:08:04.722622315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:04.723254 containerd[1693]: time="2025-07-12T00:08:04.722632875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:04.723254 containerd[1693]: time="2025-07-12T00:08:04.722702715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:04.741427 systemd[1]: Started cri-containerd-f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05.scope - libcontainer container f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05. 
Jul 12 00:08:04.781120 systemd-networkd[1409]: calidd88b801762: Link UP Jul 12 00:08:04.782766 systemd-networkd[1409]: calidd88b801762: Gained carrier Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.601 [INFO][4700] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0 calico-kube-controllers-6b8d4c7b8b- calico-system e11846b7-c405-4d2c-8bcf-444534f1feee 940 0 2025-07-12 00:07:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b8d4c7b8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-n-ddca76aad7 calico-kube-controllers-6b8d4c7b8b-5jx6w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidd88b801762 [] [] }} ContainerID="a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" Namespace="calico-system" Pod="calico-kube-controllers-6b8d4c7b8b-5jx6w" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.602 [INFO][4700] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" Namespace="calico-system" Pod="calico-kube-controllers-6b8d4c7b8b-5jx6w" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.633 [INFO][4721] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" HandleID="k8s-pod-network.a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.633 [INFO][4721] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" HandleID="k8s-pod-network.a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-n-ddca76aad7", "pod":"calico-kube-controllers-6b8d4c7b8b-5jx6w", "timestamp":"2025-07-12 00:08:04.633023284 +0000 UTC"}, Hostname:"ci-4081.3.4-n-ddca76aad7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.633 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.665 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.665 [INFO][4721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-ddca76aad7' Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.737 [INFO][4721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.744 [INFO][4721] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.750 [INFO][4721] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.752 [INFO][4721] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.754 [INFO][4721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.754 [INFO][4721] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.756 [INFO][4721] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28 Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.764 [INFO][4721] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.773 [INFO][4721] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.67/26] block=192.168.95.64/26 handle="k8s-pod-network.a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.773 [INFO][4721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.67/26] handle="k8s-pod-network.a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.773 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
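The ipam.AutoAssignArgs struct printed verbatim in the request above is the one libcalico-go exposes, and the "About to acquire / Acquired / Released host-wide IPAM lock" triplet that brackets every request is libcalico-go serialising assignments on the node. A minimal sketch of the same call, assuming a recent libcalico-go where AutoAssign returns *ipam.IPAMAssignments and assuming datastore access comes from the environment (handle and attrs copied from the log):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	// Assumes DATASTORE_TYPE etc. are set in the environment.
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	// Handle format as logged: "k8s-pod-network." + container ID.
	handle := "k8s-pod-network.a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28"

	v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
		Num4:     1, // "Auto-assign 1 ipv4, 0 ipv6 addrs", as logged
		Num6:     0,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-system",
			"node":      "ci-4081.3.4-n-ddca76aad7",
			"pod":       "calico-kube-controllers-6b8d4c7b8b-5jx6w",
		},
		Hostname: "ci-4081.3.4-n-ddca76aad7",
	})
	if err != nil {
		log.Fatal(err)
	}
	// With a confirmed affinity for 192.168.95.64/26, this yields an
	// address from that block — 192.168.95.67/26 in the run above.
	fmt.Println(v4.IPs)
}
```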
Jul 12 00:08:04.807825 containerd[1693]: 2025-07-12 00:08:04.773 [INFO][4721] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.67/26] IPv6=[] ContainerID="a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" HandleID="k8s-pod-network.a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:04.809483 containerd[1693]: 2025-07-12 00:08:04.777 [INFO][4700] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" Namespace="calico-system" Pod="calico-kube-controllers-6b8d4c7b8b-5jx6w" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0", GenerateName:"calico-kube-controllers-6b8d4c7b8b-", Namespace:"calico-system", SelfLink:"", UID:"e11846b7-c405-4d2c-8bcf-444534f1feee", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8d4c7b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"", Pod:"calico-kube-controllers-6b8d4c7b8b-5jx6w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd88b801762", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:04.809483 containerd[1693]: 2025-07-12 00:08:04.777 [INFO][4700] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.67/32] ContainerID="a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" Namespace="calico-system" Pod="calico-kube-controllers-6b8d4c7b8b-5jx6w" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:04.809483 containerd[1693]: 2025-07-12 00:08:04.777 [INFO][4700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd88b801762 ContainerID="a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" Namespace="calico-system" Pod="calico-kube-controllers-6b8d4c7b8b-5jx6w" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:04.809483 containerd[1693]: 2025-07-12 00:08:04.784 [INFO][4700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" Namespace="calico-system" Pod="calico-kube-controllers-6b8d4c7b8b-5jx6w" 
WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:04.809483 containerd[1693]: 2025-07-12 00:08:04.785 [INFO][4700] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" Namespace="calico-system" Pod="calico-kube-controllers-6b8d4c7b8b-5jx6w" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0", GenerateName:"calico-kube-controllers-6b8d4c7b8b-", Namespace:"calico-system", SelfLink:"", UID:"e11846b7-c405-4d2c-8bcf-444534f1feee", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8d4c7b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28", Pod:"calico-kube-controllers-6b8d4c7b8b-5jx6w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd88b801762", MAC:"ba:50:d0:6e:f0:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:04.809483 containerd[1693]: 2025-07-12 00:08:04.800 [INFO][4700] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28" Namespace="calico-system" Pod="calico-kube-controllers-6b8d4c7b8b-5jx6w" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:04.810546 containerd[1693]: time="2025-07-12T00:08:04.810387332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d46bcb676-g8p92,Uid:0f4d993b-f232-4951-84c0-c4c4ed832470,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05\"" Jul 12 00:08:04.841900 containerd[1693]: time="2025-07-12T00:08:04.841805407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:04.841900 containerd[1693]: time="2025-07-12T00:08:04.841867008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:04.841900 containerd[1693]: time="2025-07-12T00:08:04.841877648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:04.842170 containerd[1693]: time="2025-07-12T00:08:04.841998249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:04.860407 systemd[1]: Started cri-containerd-a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28.scope - libcontainer container a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28. Jul 12 00:08:04.893731 containerd[1693]: time="2025-07-12T00:08:04.893681796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8d4c7b8b-5jx6w,Uid:e11846b7-c405-4d2c-8bcf-444534f1feee,Namespace:calico-system,Attempt:1,} returns sandbox id \"a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28\"" Jul 12 00:08:05.306458 containerd[1693]: time="2025-07-12T00:08:05.306393966Z" level=info msg="StopPodSandbox for \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\"" Jul 12 00:08:05.306744 containerd[1693]: time="2025-07-12T00:08:05.306565167Z" level=info msg="StopPodSandbox for \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\"" Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.372 [INFO][4849] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.373 [INFO][4849] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" iface="eth0" netns="/var/run/netns/cni-a9a5d05d-c4c6-7fe4-7002-e6a8e598011f" Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.374 [INFO][4849] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" iface="eth0" netns="/var/run/netns/cni-a9a5d05d-c4c6-7fe4-7002-e6a8e598011f" Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.374 [INFO][4849] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" iface="eth0" netns="/var/run/netns/cni-a9a5d05d-c4c6-7fe4-7002-e6a8e598011f" Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.374 [INFO][4849] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.374 [INFO][4849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.403 [INFO][4859] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" HandleID="k8s-pod-network.ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.403 [INFO][4859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.404 [INFO][4859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.414 [WARNING][4859] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" HandleID="k8s-pod-network.ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.414 [INFO][4859] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" HandleID="k8s-pod-network.ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.415 [INFO][4859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:05.428703 containerd[1693]: 2025-07-12 00:08:05.422 [INFO][4849] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:05.430788 containerd[1693]: time="2025-07-12T00:08:05.429309406Z" level=info msg="TearDown network for sandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\" successfully" Jul 12 00:08:05.430788 containerd[1693]: time="2025-07-12T00:08:05.429339967Z" level=info msg="StopPodSandbox for \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\" returns successfully" Jul 12 00:08:05.432904 systemd[1]: run-netns-cni\x2da9a5d05d\x2dc4c6\x2d7fe4\x2d7002\x2de6a8e598011f.mount: Deactivated successfully. Jul 12 00:08:05.438041 containerd[1693]: time="2025-07-12T00:08:05.438009391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d46bcb676-fkjjh,Uid:5154b8c8-3461-45dc-b227-a58fdc2acc43,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.373 [INFO][4842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.374 [INFO][4842] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" iface="eth0" netns="/var/run/netns/cni-271930bc-1488-f2b2-3999-9c91bb12704a" Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.374 [INFO][4842] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" iface="eth0" netns="/var/run/netns/cni-271930bc-1488-f2b2-3999-9c91bb12704a" Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.375 [INFO][4842] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" iface="eth0" netns="/var/run/netns/cni-271930bc-1488-f2b2-3999-9c91bb12704a" Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.375 [INFO][4842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.375 [INFO][4842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.404 [INFO][4861] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" HandleID="k8s-pod-network.9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.405 [INFO][4861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.415 [INFO][4861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.428 [WARNING][4861] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" HandleID="k8s-pod-network.9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.430 [INFO][4861] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" HandleID="k8s-pod-network.9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.432 [INFO][4861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:05.442048 containerd[1693]: 2025-07-12 00:08:05.438 [INFO][4842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:05.444719 containerd[1693]: time="2025-07-12T00:08:05.442261903Z" level=info msg="TearDown network for sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\" successfully" Jul 12 00:08:05.444719 containerd[1693]: time="2025-07-12T00:08:05.442288624Z" level=info msg="StopPodSandbox for \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\" returns successfully" Jul 12 00:08:05.445607 containerd[1693]: time="2025-07-12T00:08:05.445380207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nbw2t,Uid:8d45aa45-e8a2-4c24-b1b6-5285c8ed5896,Namespace:kube-system,Attempt:1,}" Jul 12 00:08:05.446242 systemd[1]: run-netns-cni\x2d271930bc\x2d1488\x2df2b2\x2d3999\x2d9c91bb12704a.mount: Deactivated successfully. 
Jul 12 00:08:05.658467 systemd-networkd[1409]: caliddbf88a466f: Link UP Jul 12 00:08:05.660532 systemd-networkd[1409]: caliddbf88a466f: Gained carrier Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.570 [INFO][4872] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0 calico-apiserver-7d46bcb676- calico-apiserver 5154b8c8-3461-45dc-b227-a58fdc2acc43 955 0 2025-07-12 00:07:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d46bcb676 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-n-ddca76aad7 calico-apiserver-7d46bcb676-fkjjh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliddbf88a466f [] [] }} ContainerID="799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-fkjjh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.570 [INFO][4872] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-fkjjh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.609 [INFO][4896] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" HandleID="k8s-pod-network.799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.609 [INFO][4896] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" HandleID="k8s-pod-network.799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b530), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-n-ddca76aad7", "pod":"calico-apiserver-7d46bcb676-fkjjh", "timestamp":"2025-07-12 00:08:05.609090672 +0000 UTC"}, Hostname:"ci-4081.3.4-n-ddca76aad7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.609 [INFO][4896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.609 [INFO][4896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.609 [INFO][4896] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-ddca76aad7' Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.620 [INFO][4896] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.626 [INFO][4896] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.630 [INFO][4896] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.631 [INFO][4896] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.633 [INFO][4896] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.633 [INFO][4896] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.635 [INFO][4896] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.641 [INFO][4896] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.650 [INFO][4896] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.68/26] block=192.168.95.64/26 handle="k8s-pod-network.799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.650 [INFO][4896] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.68/26] handle="k8s-pod-network.799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.650 [INFO][4896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:08:05.679743 containerd[1693]: 2025-07-12 00:08:05.650 [INFO][4896] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.68/26] IPv6=[] ContainerID="799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" HandleID="k8s-pod-network.799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:05.681411 containerd[1693]: 2025-07-12 00:08:05.653 [INFO][4872] cni-plugin/k8s.go 418: Populated endpoint ContainerID="799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-fkjjh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0", GenerateName:"calico-apiserver-7d46bcb676-", Namespace:"calico-apiserver", SelfLink:"", UID:"5154b8c8-3461-45dc-b227-a58fdc2acc43", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d46bcb676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"", Pod:"calico-apiserver-7d46bcb676-fkjjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliddbf88a466f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:05.681411 containerd[1693]: 2025-07-12 00:08:05.653 [INFO][4872] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.68/32] ContainerID="799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-fkjjh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:05.681411 containerd[1693]: 2025-07-12 00:08:05.653 [INFO][4872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddbf88a466f ContainerID="799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-fkjjh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:05.681411 containerd[1693]: 2025-07-12 00:08:05.659 [INFO][4872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-fkjjh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:05.681411 containerd[1693]: 2025-07-12 00:08:05.661 [INFO][4872] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-fkjjh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0", GenerateName:"calico-apiserver-7d46bcb676-", Namespace:"calico-apiserver", SelfLink:"", UID:"5154b8c8-3461-45dc-b227-a58fdc2acc43", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d46bcb676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c", Pod:"calico-apiserver-7d46bcb676-fkjjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliddbf88a466f", MAC:"d2:d5:65:ff:5f:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:05.681411 containerd[1693]: 2025-07-12 00:08:05.676 [INFO][4872] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c" Namespace="calico-apiserver" Pod="calico-apiserver-7d46bcb676-fkjjh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:05.704679 containerd[1693]: time="2025-07-12T00:08:05.704423546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:05.704679 containerd[1693]: time="2025-07-12T00:08:05.704482707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:05.704679 containerd[1693]: time="2025-07-12T00:08:05.704497747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:05.705152 containerd[1693]: time="2025-07-12T00:08:05.704805669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:05.727927 systemd[1]: Started cri-containerd-799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c.scope - libcontainer container 799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c. 
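Host-side names like caliddbf88a466f ("Setting the host side veth name to ...") are deterministic rather than random: Calico derives them from a hash of the workload identity, using a configurable prefix ("cali" by default) plus the first 11 hex characters of a SHA-1. The exact hash input below is an assumption from memory of the plugin's utils code, not a confirmed detail of this version:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethNameForWorkload approximates how Calico's CNI plugin names the
// host-side veth: default prefix "cali" + first 11 hex chars of a
// SHA-1 over the workload identity. Hash input is an assumption.
func vethNameForWorkload(namespace, pod string) string {
	h := sha1.New()
	h.Write([]byte(fmt.Sprintf("%s.%s", namespace, pod)))
	return "cali" + hex.EncodeToString(h.Sum(nil))[:11]
}

func main() {
	fmt.Println(vethNameForWorkload("calico-apiserver", "calico-apiserver-7d46bcb676-fkjjh"))
}
```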
Jul 12 00:08:05.768499 systemd-networkd[1409]: calidc05d1f703a: Link UP Jul 12 00:08:05.769844 systemd-networkd[1409]: calidc05d1f703a: Gained carrier Jul 12 00:08:05.775851 containerd[1693]: time="2025-07-12T00:08:05.775805281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d46bcb676-fkjjh,Uid:5154b8c8-3461-45dc-b227-a58fdc2acc43,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c\"" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.588 [INFO][4882] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0 coredns-668d6bf9bc- kube-system 8d45aa45-e8a2-4c24-b1b6-5285c8ed5896 954 0 2025-07-12 00:07:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-n-ddca76aad7 coredns-668d6bf9bc-nbw2t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidc05d1f703a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nbw2t" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.589 [INFO][4882] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nbw2t" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.629 [INFO][4901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" HandleID="k8s-pod-network.e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.629 [INFO][4901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" HandleID="k8s-pod-network.e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-n-ddca76aad7", "pod":"coredns-668d6bf9bc-nbw2t", "timestamp":"2025-07-12 00:08:05.629025542 +0000 UTC"}, Hostname:"ci-4081.3.4-n-ddca76aad7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.629 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.651 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.651 [INFO][4901] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-ddca76aad7' Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.722 [INFO][4901] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.729 [INFO][4901] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.735 [INFO][4901] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.737 [INFO][4901] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.739 [INFO][4901] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.739 [INFO][4901] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.742 [INFO][4901] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5 Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.749 [INFO][4901] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.761 [INFO][4901] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.69/26] block=192.168.95.64/26 handle="k8s-pod-network.e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.761 [INFO][4901] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.69/26] handle="k8s-pod-network.e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.761 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:08:05.799324 containerd[1693]: 2025-07-12 00:08:05.761 [INFO][4901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.69/26] IPv6=[] ContainerID="e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" HandleID="k8s-pod-network.e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:05.801051 containerd[1693]: 2025-07-12 00:08:05.765 [INFO][4882] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nbw2t" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8d45aa45-e8a2-4c24-b1b6-5285c8ed5896", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"", Pod:"coredns-668d6bf9bc-nbw2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc05d1f703a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:05.801051 containerd[1693]: 2025-07-12 00:08:05.765 [INFO][4882] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.69/32] ContainerID="e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nbw2t" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:05.801051 containerd[1693]: 2025-07-12 00:08:05.765 [INFO][4882] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc05d1f703a ContainerID="e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nbw2t" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:05.801051 containerd[1693]: 2025-07-12 00:08:05.772 [INFO][4882] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-nbw2t" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:05.801051 containerd[1693]: 2025-07-12 00:08:05.775 [INFO][4882] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nbw2t" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8d45aa45-e8a2-4c24-b1b6-5285c8ed5896", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5", Pod:"coredns-668d6bf9bc-nbw2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc05d1f703a", MAC:"e6:b2:08:2a:ae:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:05.801051 containerd[1693]: 2025-07-12 00:08:05.795 [INFO][4882] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5" Namespace="kube-system" Pod="coredns-668d6bf9bc-nbw2t" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:05.829004 containerd[1693]: time="2025-07-12T00:08:05.828831638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:05.829176 containerd[1693]: time="2025-07-12T00:08:05.828976919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:05.829176 containerd[1693]: time="2025-07-12T00:08:05.829069840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:05.829301 containerd[1693]: time="2025-07-12T00:08:05.829194561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:05.848463 systemd[1]: Started cri-containerd-e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5.scope - libcontainer container e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5. Jul 12 00:08:05.880362 containerd[1693]: time="2025-07-12T00:08:05.880328503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nbw2t,Uid:8d45aa45-e8a2-4c24-b1b6-5285c8ed5896,Namespace:kube-system,Attempt:1,} returns sandbox id \"e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5\"" Jul 12 00:08:05.884258 containerd[1693]: time="2025-07-12T00:08:05.883877890Z" level=info msg="CreateContainer within sandbox \"e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:08:05.934533 containerd[1693]: time="2025-07-12T00:08:05.934420789Z" level=info msg="CreateContainer within sandbox \"e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0bf02773f3fcd3a1b172cca05d651446c788a3f3dcfff2e69684bdbf3b2d0a8f\"" Jul 12 00:08:05.935841 containerd[1693]: time="2025-07-12T00:08:05.935414396Z" level=info msg="StartContainer for \"0bf02773f3fcd3a1b172cca05d651446c788a3f3dcfff2e69684bdbf3b2d0a8f\"" Jul 12 00:08:05.962467 systemd[1]: Started cri-containerd-0bf02773f3fcd3a1b172cca05d651446c788a3f3dcfff2e69684bdbf3b2d0a8f.scope - libcontainer container 0bf02773f3fcd3a1b172cca05d651446c788a3f3dcfff2e69684bdbf3b2d0a8f. Jul 12 00:08:05.990092 containerd[1693]: time="2025-07-12T00:08:05.990030965Z" level=info msg="StartContainer for \"0bf02773f3fcd3a1b172cca05d651446c788a3f3dcfff2e69684bdbf3b2d0a8f\" returns successfully" Jul 12 00:08:06.311455 containerd[1693]: time="2025-07-12T00:08:06.310432164Z" level=info msg="StopPodSandbox for \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\"" Jul 12 00:08:06.321161 systemd-networkd[1409]: calie2c7917eb20: Gained IPv6LL Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.363 [INFO][5056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.364 [INFO][5056] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" iface="eth0" netns="/var/run/netns/cni-9bb1afa5-71fd-8fd1-1222-39b5860cf79d" Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.365 [INFO][5056] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" iface="eth0" netns="/var/run/netns/cni-9bb1afa5-71fd-8fd1-1222-39b5860cf79d" Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.365 [INFO][5056] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" iface="eth0" netns="/var/run/netns/cni-9bb1afa5-71fd-8fd1-1222-39b5860cf79d" Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.365 [INFO][5056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.365 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.386 [INFO][5064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" HandleID="k8s-pod-network.cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.386 [INFO][5064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.386 [INFO][5064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.397 [WARNING][5064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" HandleID="k8s-pod-network.cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.397 [INFO][5064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" HandleID="k8s-pod-network.cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.399 [INFO][5064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:06.405179 containerd[1693]: 2025-07-12 00:08:06.400 [INFO][5056] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:06.405179 containerd[1693]: time="2025-07-12T00:08:06.405229834Z" level=info msg="TearDown network for sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\" successfully" Jul 12 00:08:06.405179 containerd[1693]: time="2025-07-12T00:08:06.405285714Z" level=info msg="StopPodSandbox for \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\" returns successfully" Jul 12 00:08:06.414327 containerd[1693]: time="2025-07-12T00:08:06.413394655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8v7xh,Uid:44b464a5-2b46-4c57-9fe7-32dfead6264d,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:06.431859 systemd[1]: run-netns-cni\x2d9bb1afa5\x2d71fd\x2d8fd1\x2d1222\x2d39b5860cf79d.mount: Deactivated successfully. 
Jul 12 00:08:06.560780 kubelet[3108]: I0712 00:08:06.560264 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nbw2t" podStartSLOduration=42.560248315 podStartE2EDuration="42.560248315s" podCreationTimestamp="2025-07-12 00:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:06.560089673 +0000 UTC m=+48.363303692" watchObservedRunningTime="2025-07-12 00:08:06.560248315 +0000 UTC m=+48.363462374" Jul 12 00:08:06.576773 systemd-networkd[1409]: calidd88b801762: Gained IPv6LL Jul 12 00:08:07.152479 systemd-networkd[1409]: caliddbf88a466f: Gained IPv6LL Jul 12 00:08:07.307305 containerd[1693]: time="2025-07-12T00:08:07.306138820Z" level=info msg="StopPodSandbox for \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\"" Jul 12 00:08:07.505094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681367097.mount: Deactivated successfully. Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.498 [INFO][5093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.498 [INFO][5093] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" iface="eth0" netns="/var/run/netns/cni-8577b277-20c5-7792-496d-aff30fe2cf40" Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.498 [INFO][5093] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" iface="eth0" netns="/var/run/netns/cni-8577b277-20c5-7792-496d-aff30fe2cf40" Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.499 [INFO][5093] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" iface="eth0" netns="/var/run/netns/cni-8577b277-20c5-7792-496d-aff30fe2cf40" Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.499 [INFO][5093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.499 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.523 [INFO][5101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" HandleID="k8s-pod-network.0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.523 [INFO][5101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.523 [INFO][5101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.532 [WARNING][5101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" HandleID="k8s-pod-network.0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.532 [INFO][5101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" HandleID="k8s-pod-network.0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.533 [INFO][5101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:07.538283 containerd[1693]: 2025-07-12 00:08:07.536 [INFO][5093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:07.539849 containerd[1693]: time="2025-07-12T00:08:07.538999843Z" level=info msg="TearDown network for sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\" successfully" Jul 12 00:08:07.539849 containerd[1693]: time="2025-07-12T00:08:07.539029003Z" level=info msg="StopPodSandbox for \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\" returns successfully" Jul 12 00:08:07.542302 containerd[1693]: time="2025-07-12T00:08:07.540443494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9q7pd,Uid:058503a3-83aa-47a4-b834-2e39d5989b2c,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:07.540961 systemd[1]: run-netns-cni\x2d8577b277\x2d20c5\x2d7792\x2d496d\x2daff30fe2cf40.mount: Deactivated successfully. Jul 12 00:08:07.792389 systemd-networkd[1409]: calidc05d1f703a: Gained IPv6LL Jul 12 00:08:08.089464 containerd[1693]: time="2025-07-12T00:08:08.089311044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:08.096634 containerd[1693]: time="2025-07-12T00:08:08.096419417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:08:08.101850 containerd[1693]: time="2025-07-12T00:08:08.101797777Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:08.106752 containerd[1693]: time="2025-07-12T00:08:08.106688254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:08.108240 containerd[1693]: time="2025-07-12T00:08:08.107429340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 4.845883165s" Jul 12 00:08:08.108240 containerd[1693]: time="2025-07-12T00:08:08.107469540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 
12 00:08:08.113467 containerd[1693]: time="2025-07-12T00:08:08.113348384Z" level=info msg="CreateContainer within sandbox \"d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:08:08.113801 containerd[1693]: time="2025-07-12T00:08:08.113670506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:08:08.120785 systemd-networkd[1409]: cali94136ea498e: Link UP Jul 12 00:08:08.121532 systemd-networkd[1409]: cali94136ea498e: Gained carrier Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.017 [INFO][5107] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0 goldmane-768f4c5c69- calico-system 44b464a5-2b46-4c57-9fe7-32dfead6264d 967 0 2025-07-12 00:07:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-n-ddca76aad7 goldmane-768f4c5c69-8v7xh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali94136ea498e [] [] }} ContainerID="bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" Namespace="calico-system" Pod="goldmane-768f4c5c69-8v7xh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.017 [INFO][5107] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" Namespace="calico-system" Pod="goldmane-768f4c5c69-8v7xh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.049 [INFO][5123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" HandleID="k8s-pod-network.bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.049 [INFO][5123] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" HandleID="k8s-pod-network.bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa300), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-n-ddca76aad7", "pod":"goldmane-768f4c5c69-8v7xh", "timestamp":"2025-07-12 00:08:08.049495106 +0000 UTC"}, Hostname:"ci-4081.3.4-n-ddca76aad7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.049 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.049 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.049 [INFO][5123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-ddca76aad7' Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.059 [INFO][5123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.064 [INFO][5123] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.069 [INFO][5123] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.072 [INFO][5123] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.075 [INFO][5123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.076 [INFO][5123] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.079 [INFO][5123] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892 Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.093 [INFO][5123] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.109 [INFO][5123] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.70/26] block=192.168.95.64/26 handle="k8s-pod-network.bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.109 [INFO][5123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.70/26] handle="k8s-pod-network.bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.109 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:08:08.147461 containerd[1693]: 2025-07-12 00:08:08.109 [INFO][5123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.70/26] IPv6=[] ContainerID="bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" HandleID="k8s-pod-network.bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:08.149963 containerd[1693]: 2025-07-12 00:08:08.114 [INFO][5107] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" Namespace="calico-system" Pod="goldmane-768f4c5c69-8v7xh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"44b464a5-2b46-4c57-9fe7-32dfead6264d", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"", Pod:"goldmane-768f4c5c69-8v7xh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94136ea498e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:08.149963 containerd[1693]: 2025-07-12 00:08:08.115 [INFO][5107] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.70/32] ContainerID="bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" Namespace="calico-system" Pod="goldmane-768f4c5c69-8v7xh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:08.149963 containerd[1693]: 2025-07-12 00:08:08.115 [INFO][5107] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94136ea498e ContainerID="bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" Namespace="calico-system" Pod="goldmane-768f4c5c69-8v7xh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:08.149963 containerd[1693]: 2025-07-12 00:08:08.122 [INFO][5107] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" Namespace="calico-system" Pod="goldmane-768f4c5c69-8v7xh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:08.149963 containerd[1693]: 2025-07-12 00:08:08.124 [INFO][5107] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-8v7xh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"44b464a5-2b46-4c57-9fe7-32dfead6264d", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892", Pod:"goldmane-768f4c5c69-8v7xh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94136ea498e", MAC:"2e:2a:2c:e9:3e:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:08.149963 containerd[1693]: 2025-07-12 00:08:08.142 [INFO][5107] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892" Namespace="calico-system" Pod="goldmane-768f4c5c69-8v7xh" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:08.197261 containerd[1693]: time="2025-07-12T00:08:08.197161291Z" level=info msg="CreateContainer within sandbox \"d576f8aac233002f30a28b026b01de0d6edff8700b0bb37080d3afa2f0a187a7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ec23d3f65b18a4f3dbf826676d9d6ebd98bdd1737f2d480f8d7c347dceb4a6ba\"" Jul 12 00:08:08.198855 containerd[1693]: time="2025-07-12T00:08:08.198828864Z" level=info msg="StartContainer for \"ec23d3f65b18a4f3dbf826676d9d6ebd98bdd1737f2d480f8d7c347dceb4a6ba\"" Jul 12 00:08:08.235198 containerd[1693]: time="2025-07-12T00:08:08.235099095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:08.235673 containerd[1693]: time="2025-07-12T00:08:08.235156776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:08.236107 containerd[1693]: time="2025-07-12T00:08:08.236046063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:08.236283 containerd[1693]: time="2025-07-12T00:08:08.236162703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:08.242394 systemd[1]: Started cri-containerd-ec23d3f65b18a4f3dbf826676d9d6ebd98bdd1737f2d480f8d7c347dceb4a6ba.scope - libcontainer container ec23d3f65b18a4f3dbf826676d9d6ebd98bdd1737f2d480f8d7c347dceb4a6ba. Jul 12 00:08:08.255395 systemd[1]: Started cri-containerd-bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892.scope - libcontainer container bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892. Jul 12 00:08:08.266937 systemd-networkd[1409]: cali33085f2df4f: Link UP Jul 12 00:08:08.269513 systemd-networkd[1409]: cali33085f2df4f: Gained carrier Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.123 [INFO][5129] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0 csi-node-driver- calico-system 058503a3-83aa-47a4-b834-2e39d5989b2c 982 0 2025-07-12 00:07:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-n-ddca76aad7 csi-node-driver-9q7pd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali33085f2df4f [] [] }} ContainerID="6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" Namespace="calico-system" Pod="csi-node-driver-9q7pd" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.124 [INFO][5129] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" Namespace="calico-system" Pod="csi-node-driver-9q7pd" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.169 [INFO][5147] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" HandleID="k8s-pod-network.6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.169 [INFO][5147] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" HandleID="k8s-pod-network.6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-n-ddca76aad7", "pod":"csi-node-driver-9q7pd", "timestamp":"2025-07-12 00:08:08.169644165 +0000 UTC"}, Hostname:"ci-4081.3.4-n-ddca76aad7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.169 [INFO][5147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.169 [INFO][5147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.169 [INFO][5147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-ddca76aad7' Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.179 [INFO][5147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.183 [INFO][5147] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.191 [INFO][5147] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.196 [INFO][5147] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.202 [INFO][5147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.202 [INFO][5147] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.205 [INFO][5147] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598 Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.214 [INFO][5147] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.251 [INFO][5147] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.71/26] block=192.168.95.64/26 handle="k8s-pod-network.6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.251 [INFO][5147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.71/26] handle="k8s-pod-network.6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.252 [INFO][5147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:08:08.293061 containerd[1693]: 2025-07-12 00:08:08.252 [INFO][5147] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.71/26] IPv6=[] ContainerID="6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" HandleID="k8s-pod-network.6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:08.294550 containerd[1693]: 2025-07-12 00:08:08.254 [INFO][5129] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" Namespace="calico-system" Pod="csi-node-driver-9q7pd" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"058503a3-83aa-47a4-b834-2e39d5989b2c", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"", Pod:"csi-node-driver-9q7pd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33085f2df4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:08.294550 containerd[1693]: 2025-07-12 00:08:08.255 [INFO][5129] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.71/32] ContainerID="6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" Namespace="calico-system" Pod="csi-node-driver-9q7pd" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:08.294550 containerd[1693]: 2025-07-12 00:08:08.255 [INFO][5129] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33085f2df4f ContainerID="6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" Namespace="calico-system" Pod="csi-node-driver-9q7pd" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:08.294550 containerd[1693]: 2025-07-12 00:08:08.271 [INFO][5129] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" Namespace="calico-system" Pod="csi-node-driver-9q7pd" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:08.294550 containerd[1693]: 2025-07-12 00:08:08.273 [INFO][5129] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" Namespace="calico-system" Pod="csi-node-driver-9q7pd" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"058503a3-83aa-47a4-b834-2e39d5989b2c", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598", Pod:"csi-node-driver-9q7pd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33085f2df4f", MAC:"fe:7b:94:94:9a:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:08.294550 containerd[1693]: 2025-07-12 00:08:08.290 [INFO][5129] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598" Namespace="calico-system" Pod="csi-node-driver-9q7pd" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:08.315788 containerd[1693]: time="2025-07-12T00:08:08.315709579Z" level=info msg="StopPodSandbox for \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\"" Jul 12 00:08:08.335689 containerd[1693]: time="2025-07-12T00:08:08.334590520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:08.336231 containerd[1693]: time="2025-07-12T00:08:08.335980811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:08.336231 containerd[1693]: time="2025-07-12T00:08:08.336000411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:08.336898 containerd[1693]: time="2025-07-12T00:08:08.336719656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:08.342550 containerd[1693]: time="2025-07-12T00:08:08.341457012Z" level=info msg="StartContainer for \"ec23d3f65b18a4f3dbf826676d9d6ebd98bdd1737f2d480f8d7c347dceb4a6ba\" returns successfully" Jul 12 00:08:08.360274 systemd[1]: Started cri-containerd-6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598.scope - libcontainer container 6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598. Jul 12 00:08:08.378916 containerd[1693]: time="2025-07-12T00:08:08.378879532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8v7xh,Uid:44b464a5-2b46-4c57-9fe7-32dfead6264d,Namespace:calico-system,Attempt:1,} returns sandbox id \"bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892\"" Jul 12 00:08:08.409642 containerd[1693]: time="2025-07-12T00:08:08.409569282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9q7pd,Uid:058503a3-83aa-47a4-b834-2e39d5989b2c,Namespace:calico-system,Attempt:1,} returns sandbox id \"6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598\"" Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.442 [INFO][5276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.442 [INFO][5276] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" iface="eth0" netns="/var/run/netns/cni-cfbd8423-e53a-b292-9187-bd60efcb7fe1" Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.444 [INFO][5276] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" iface="eth0" netns="/var/run/netns/cni-cfbd8423-e53a-b292-9187-bd60efcb7fe1" Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.446 [INFO][5276] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" iface="eth0" netns="/var/run/netns/cni-cfbd8423-e53a-b292-9187-bd60efcb7fe1" Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.446 [INFO][5276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.446 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.465 [INFO][5313] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" HandleID="k8s-pod-network.4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.465 [INFO][5313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.465 [INFO][5313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.473 [WARNING][5313] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" HandleID="k8s-pod-network.4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.474 [INFO][5313] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" HandleID="k8s-pod-network.4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.475 [INFO][5313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:08.478871 containerd[1693]: 2025-07-12 00:08:08.477 [INFO][5276] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:08.479598 containerd[1693]: time="2025-07-12T00:08:08.479324444Z" level=info msg="TearDown network for sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\" successfully" Jul 12 00:08:08.479598 containerd[1693]: time="2025-07-12T00:08:08.479358444Z" level=info msg="StopPodSandbox for \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\" returns successfully" Jul 12 00:08:08.480061 containerd[1693]: time="2025-07-12T00:08:08.480029049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r7rxs,Uid:af5f3704-be95-48d7-b031-1bcc62a0d210,Namespace:kube-system,Attempt:1,}" Jul 12 00:08:08.512152 systemd[1]: run-netns-cni\x2dcfbd8423\x2de53a\x2db292\x2d9187\x2dbd60efcb7fe1.mount: Deactivated successfully. Jul 12 00:08:08.567302 kubelet[3108]: I0712 00:08:08.567240 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59866cdff-hwr4f" podStartSLOduration=1.722913974 podStartE2EDuration="8.567220942s" podCreationTimestamp="2025-07-12 00:08:00 +0000 UTC" firstStartedPulling="2025-07-12 00:08:01.266554437 +0000 UTC m=+43.069768456" lastFinishedPulling="2025-07-12 00:08:08.110861405 +0000 UTC m=+49.914075424" observedRunningTime="2025-07-12 00:08:08.566985221 +0000 UTC m=+50.370199240" watchObservedRunningTime="2025-07-12 00:08:08.567220942 +0000 UTC m=+50.370434961" Jul 12 00:08:08.738574 systemd-networkd[1409]: cali772958c4d7b: Link UP Jul 12 00:08:08.739336 systemd-networkd[1409]: cali772958c4d7b: Gained carrier Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.643 [INFO][5324] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0 coredns-668d6bf9bc- kube-system af5f3704-be95-48d7-b031-1bcc62a0d210 995 0 2025-07-12 00:07:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-n-ddca76aad7 coredns-668d6bf9bc-r7rxs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali772958c4d7b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-r7rxs" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.643 
[INFO][5324] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-r7rxs" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.678 [INFO][5336] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" HandleID="k8s-pod-network.96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.678 [INFO][5336] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" HandleID="k8s-pod-network.96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1860), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-n-ddca76aad7", "pod":"coredns-668d6bf9bc-r7rxs", "timestamp":"2025-07-12 00:08:08.678086093 +0000 UTC"}, Hostname:"ci-4081.3.4-n-ddca76aad7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.678 [INFO][5336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.678 [INFO][5336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.678 [INFO][5336] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-ddca76aad7' Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.692 [INFO][5336] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.697 [INFO][5336] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.703 [INFO][5336] ipam/ipam.go 511: Trying affinity for 192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.704 [INFO][5336] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.709 [INFO][5336] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.64/26 host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.709 [INFO][5336] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.64/26 handle="k8s-pod-network.96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.710 [INFO][5336] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.721 [INFO][5336] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.64/26 handle="k8s-pod-network.96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.731 [INFO][5336] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.72/26] block=192.168.95.64/26 handle="k8s-pod-network.96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.731 [INFO][5336] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.72/26] handle="k8s-pod-network.96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" host="ci-4081.3.4-n-ddca76aad7" Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.731 [INFO][5336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:08:08.768891 containerd[1693]: 2025-07-12 00:08:08.731 [INFO][5336] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.72/26] IPv6=[] ContainerID="96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" HandleID="k8s-pod-network.96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:08.769503 containerd[1693]: 2025-07-12 00:08:08.734 [INFO][5324] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-r7rxs" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"af5f3704-be95-48d7-b031-1bcc62a0d210", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"", Pod:"coredns-668d6bf9bc-r7rxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali772958c4d7b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:08.769503 containerd[1693]: 2025-07-12 00:08:08.734 [INFO][5324] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.72/32] ContainerID="96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-r7rxs" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:08.769503 containerd[1693]: 2025-07-12 00:08:08.734 [INFO][5324] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali772958c4d7b ContainerID="96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-r7rxs" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:08.769503 containerd[1693]: 2025-07-12 00:08:08.743 [INFO][5324] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-r7rxs" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:08.769503 containerd[1693]: 2025-07-12 00:08:08.746 [INFO][5324] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-r7rxs" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"af5f3704-be95-48d7-b031-1bcc62a0d210", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d", Pod:"coredns-668d6bf9bc-r7rxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali772958c4d7b", MAC:"92:31:82:5a:0c:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:08.769503 containerd[1693]: 2025-07-12 00:08:08.766 [INFO][5324] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d" Namespace="kube-system" Pod="coredns-668d6bf9bc-r7rxs" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:08.820375 containerd[1693]: time="2025-07-12T00:08:08.820129606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:08.820375 containerd[1693]: time="2025-07-12T00:08:08.820194366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:08.820375 containerd[1693]: time="2025-07-12T00:08:08.820227207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:08.820375 containerd[1693]: time="2025-07-12T00:08:08.820321927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:08.846380 systemd[1]: Started cri-containerd-96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d.scope - libcontainer container 96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d. Jul 12 00:08:08.881759 containerd[1693]: time="2025-07-12T00:08:08.881467223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r7rxs,Uid:af5f3704-be95-48d7-b031-1bcc62a0d210,Namespace:kube-system,Attempt:1,} returns sandbox id \"96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d\"" Jul 12 00:08:08.885781 containerd[1693]: time="2025-07-12T00:08:08.885739489Z" level=info msg="CreateContainer within sandbox \"96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:08:08.938454 containerd[1693]: time="2025-07-12T00:08:08.938328172Z" level=info msg="CreateContainer within sandbox \"96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3eff8f60cc6452278f68950edb3e11eb6b3e98d6bc1a26a47a354ae5b394768\"" Jul 12 00:08:08.939501 containerd[1693]: time="2025-07-12T00:08:08.939420979Z" level=info msg="StartContainer for \"b3eff8f60cc6452278f68950edb3e11eb6b3e98d6bc1a26a47a354ae5b394768\"" Jul 12 00:08:08.964404 systemd[1]: Started cri-containerd-b3eff8f60cc6452278f68950edb3e11eb6b3e98d6bc1a26a47a354ae5b394768.scope - libcontainer container b3eff8f60cc6452278f68950edb3e11eb6b3e98d6bc1a26a47a354ae5b394768. Jul 12 00:08:09.002273 containerd[1693]: time="2025-07-12T00:08:09.001836482Z" level=info msg="StartContainer for \"b3eff8f60cc6452278f68950edb3e11eb6b3e98d6bc1a26a47a354ae5b394768\" returns successfully" Jul 12 00:08:09.569099 kubelet[3108]: I0712 00:08:09.568360 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-r7rxs" podStartSLOduration=45.568341362 podStartE2EDuration="45.568341362s" podCreationTimestamp="2025-07-12 00:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:09.56797816 +0000 UTC m=+51.371192179" watchObservedRunningTime="2025-07-12 00:08:09.568341362 +0000 UTC m=+51.371555341" Jul 12 00:08:09.648398 systemd-networkd[1409]: cali33085f2df4f: Gained IPv6LL Jul 12 00:08:09.905418 systemd-networkd[1409]: cali772958c4d7b: Gained IPv6LL Jul 12 00:08:09.968820 systemd-networkd[1409]: cali94136ea498e: Gained IPv6LL Jul 12 00:08:10.890238 containerd[1693]: time="2025-07-12T00:08:10.887899029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:10.898564 containerd[1693]: time="2025-07-12T00:08:10.898522494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:08:10.910964 containerd[1693]: time="2025-07-12T00:08:10.910921530Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:10.918962 containerd[1693]: time="2025-07-12T00:08:10.917899333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 12 00:08:10.918962 containerd[1693]: time="2025-07-12T00:08:10.918664698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.804600309s" Jul 12 00:08:10.918962 containerd[1693]: time="2025-07-12T00:08:10.918693338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:08:10.920251 containerd[1693]: time="2025-07-12T00:08:10.920223427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:08:10.930376 containerd[1693]: time="2025-07-12T00:08:10.929412524Z" level=info msg="CreateContainer within sandbox \"f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:08:10.983299 containerd[1693]: time="2025-07-12T00:08:10.983246735Z" level=info msg="CreateContainer within sandbox \"f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"77200cbdac90ad0df7fa4bf15e03e308d6dcb85f5578293efbe8a386c57118ef\"" Jul 12 00:08:10.985136 containerd[1693]: time="2025-07-12T00:08:10.985083466Z" level=info msg="StartContainer for \"77200cbdac90ad0df7fa4bf15e03e308d6dcb85f5578293efbe8a386c57118ef\"" Jul 12 00:08:11.023427 systemd[1]: Started cri-containerd-77200cbdac90ad0df7fa4bf15e03e308d6dcb85f5578293efbe8a386c57118ef.scope - libcontainer container 77200cbdac90ad0df7fa4bf15e03e308d6dcb85f5578293efbe8a386c57118ef. 
Jul 12 00:08:11.062177 containerd[1693]: time="2025-07-12T00:08:11.062120059Z" level=info msg="StartContainer for \"77200cbdac90ad0df7fa4bf15e03e308d6dcb85f5578293efbe8a386c57118ef\" returns successfully" Jul 12 00:08:12.556567 kubelet[3108]: I0712 00:08:12.556533 3108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:13.001174 containerd[1693]: time="2025-07-12T00:08:13.001092582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:13.007242 containerd[1693]: time="2025-07-12T00:08:13.006999746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:08:13.023761 containerd[1693]: time="2025-07-12T00:08:13.021388215Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:13.032763 containerd[1693]: time="2025-07-12T00:08:13.032276377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:13.036343 containerd[1693]: time="2025-07-12T00:08:13.035610042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.113165721s" Jul 12 00:08:13.037577 containerd[1693]: time="2025-07-12T00:08:13.037342015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:08:13.063247 containerd[1693]: time="2025-07-12T00:08:13.062371604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:08:13.066742 containerd[1693]: time="2025-07-12T00:08:13.066694956Z" level=info msg="CreateContainer within sandbox \"a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:08:13.153702 containerd[1693]: time="2025-07-12T00:08:13.153636012Z" level=info msg="CreateContainer within sandbox \"a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a15d12eef08e1c65c2e16e6f2005f99323fc04b42fd2d6e67d8c58699d780da1\"" Jul 12 00:08:13.155083 containerd[1693]: time="2025-07-12T00:08:13.154437098Z" level=info msg="StartContainer for \"a15d12eef08e1c65c2e16e6f2005f99323fc04b42fd2d6e67d8c58699d780da1\"" Jul 12 00:08:13.183388 systemd[1]: Started cri-containerd-a15d12eef08e1c65c2e16e6f2005f99323fc04b42fd2d6e67d8c58699d780da1.scope - libcontainer container a15d12eef08e1c65c2e16e6f2005f99323fc04b42fd2d6e67d8c58699d780da1. 
Jul 12 00:08:13.227570 containerd[1693]: time="2025-07-12T00:08:13.227519969Z" level=info msg="StartContainer for \"a15d12eef08e1c65c2e16e6f2005f99323fc04b42fd2d6e67d8c58699d780da1\" returns successfully" Jul 12 00:08:13.578121 kubelet[3108]: I0712 00:08:13.577806 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b8d4c7b8b-5jx6w" podStartSLOduration=24.43006308 podStartE2EDuration="32.57778605s" podCreationTimestamp="2025-07-12 00:07:41 +0000 UTC" firstStartedPulling="2025-07-12 00:08:04.896419336 +0000 UTC m=+46.699633355" lastFinishedPulling="2025-07-12 00:08:13.044142306 +0000 UTC m=+54.847356325" observedRunningTime="2025-07-12 00:08:13.577475527 +0000 UTC m=+55.380689546" watchObservedRunningTime="2025-07-12 00:08:13.57778605 +0000 UTC m=+55.381000109" Jul 12 00:08:13.578121 kubelet[3108]: I0712 00:08:13.577905 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d46bcb676-g8p92" podStartSLOduration=31.472535469 podStartE2EDuration="37.577900731s" podCreationTimestamp="2025-07-12 00:07:36 +0000 UTC" firstStartedPulling="2025-07-12 00:08:04.814504243 +0000 UTC m=+46.617718222" lastFinishedPulling="2025-07-12 00:08:10.919869505 +0000 UTC m=+52.723083484" observedRunningTime="2025-07-12 00:08:11.578160349 +0000 UTC m=+53.381374368" watchObservedRunningTime="2025-07-12 00:08:13.577900731 +0000 UTC m=+55.381114830" Jul 12 00:08:14.113758 containerd[1693]: time="2025-07-12T00:08:14.113700090Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:14.118805 containerd[1693]: time="2025-07-12T00:08:14.118763208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:08:14.120537 containerd[1693]: time="2025-07-12T00:08:14.120477221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.056593326s" Jul 12 00:08:14.120537 containerd[1693]: time="2025-07-12T00:08:14.120530462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:08:14.122144 containerd[1693]: time="2025-07-12T00:08:14.122080513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:08:14.122919 containerd[1693]: time="2025-07-12T00:08:14.122851799Z" level=info msg="CreateContainer within sandbox \"799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:08:14.201586 containerd[1693]: time="2025-07-12T00:08:14.201523392Z" level=info msg="CreateContainer within sandbox \"799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a5721668d5de43ac0fc226efa96888d3632e2ce263c96ad4b11978ad89e8f72f\"" Jul 12 00:08:14.202687 containerd[1693]: time="2025-07-12T00:08:14.202649521Z" level=info msg="StartContainer for \"a5721668d5de43ac0fc226efa96888d3632e2ce263c96ad4b11978ad89e8f72f\"" Jul 12 00:08:14.256394 
systemd[1]: Started cri-containerd-a5721668d5de43ac0fc226efa96888d3632e2ce263c96ad4b11978ad89e8f72f.scope - libcontainer container a5721668d5de43ac0fc226efa96888d3632e2ce263c96ad4b11978ad89e8f72f. Jul 12 00:08:14.293516 containerd[1693]: time="2025-07-12T00:08:14.292630479Z" level=info msg="StartContainer for \"a5721668d5de43ac0fc226efa96888d3632e2ce263c96ad4b11978ad89e8f72f\" returns successfully" Jul 12 00:08:15.567584 kubelet[3108]: I0712 00:08:15.567236 3108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:16.018565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081059674.mount: Deactivated successfully. Jul 12 00:08:17.960314 containerd[1693]: time="2025-07-12T00:08:17.959924267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:17.964435 containerd[1693]: time="2025-07-12T00:08:17.963724605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 00:08:17.969418 containerd[1693]: time="2025-07-12T00:08:17.969357153Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:17.979234 containerd[1693]: time="2025-07-12T00:08:17.978994640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:17.984417 containerd[1693]: time="2025-07-12T00:08:17.981926615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.8598003s" Jul 12 00:08:17.984417 containerd[1693]: time="2025-07-12T00:08:17.981970615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:08:17.988236 containerd[1693]: time="2025-07-12T00:08:17.987598282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:08:17.992450 containerd[1693]: time="2025-07-12T00:08:17.992414746Z" level=info msg="CreateContainer within sandbox \"bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:08:18.033936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1265530674.mount: Deactivated successfully. 
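[editor's note] The podStartSLOduration values kubelet's pod_startup_latency_tracker logs in this section appear to be the end-to-end startup duration with image-pull time excluded; for the calico-kube-controllers line just above, the arithmetic reproduces exactly from the values in the entry. A worked check, every number copied from that log line:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// From the calico-kube-controllers entry: podStartE2EDuration="32.57778605s"
	e2e, _ := time.ParseDuration("32.57778605s")
	// firstStartedPulling and lastFinishedPulling from the same entry.
	first, _ := time.Parse(time.RFC3339Nano, "2025-07-12T00:08:04.896419336Z")
	last, _ := time.Parse(time.RFC3339Nano, "2025-07-12T00:08:13.044142306Z")
	pulling := last.Sub(first) // 8.14772297s spent pulling images
	// SLO duration excludes pull time: 32.57778605s - 8.14772297s
	fmt.Println(e2e - pulling) // 24.43006308s, matching podStartSLOduration=24.43006308
}
```

The same relation holds trivially for the two coredns pods logged earlier, whose pull timestamps are the zero value (0001-01-01 00:00:00), so their SLO and E2E durations coincide.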
Jul 12 00:08:18.062155 containerd[1693]: time="2025-07-12T00:08:18.062104767Z" level=info msg="CreateContainer within sandbox \"bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"65f3b93d27b94b5476d38287bdc084fa9a3943a5835de6747aace1fbb66450d4\"" Jul 12 00:08:18.064024 containerd[1693]: time="2025-07-12T00:08:18.062727530Z" level=info msg="StartContainer for \"65f3b93d27b94b5476d38287bdc084fa9a3943a5835de6747aace1fbb66450d4\"" Jul 12 00:08:18.097375 systemd[1]: Started cri-containerd-65f3b93d27b94b5476d38287bdc084fa9a3943a5835de6747aace1fbb66450d4.scope - libcontainer container 65f3b93d27b94b5476d38287bdc084fa9a3943a5835de6747aace1fbb66450d4. Jul 12 00:08:18.135089 containerd[1693]: time="2025-07-12T00:08:18.135042364Z" level=info msg="StartContainer for \"65f3b93d27b94b5476d38287bdc084fa9a3943a5835de6747aace1fbb66450d4\" returns successfully" Jul 12 00:08:18.297179 containerd[1693]: time="2025-07-12T00:08:18.297136157Z" level=info msg="StopPodSandbox for \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\"" Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.337 [WARNING][5656] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0", GenerateName:"calico-apiserver-7d46bcb676-", Namespace:"calico-apiserver", SelfLink:"", UID:"5154b8c8-3461-45dc-b227-a58fdc2acc43", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d46bcb676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c", Pod:"calico-apiserver-7d46bcb676-fkjjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliddbf88a466f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.337 [INFO][5656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.337 [INFO][5656] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" iface="eth0" netns="" Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.337 [INFO][5656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.337 [INFO][5656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.360 [INFO][5665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" HandleID="k8s-pod-network.ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.360 [INFO][5665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.361 [INFO][5665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.369 [WARNING][5665] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" HandleID="k8s-pod-network.ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.369 [INFO][5665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" HandleID="k8s-pod-network.ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.370 [INFO][5665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:18.374533 containerd[1693]: 2025-07-12 00:08:18.371 [INFO][5656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:18.374533 containerd[1693]: time="2025-07-12T00:08:18.374165494Z" level=info msg="TearDown network for sandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\" successfully" Jul 12 00:08:18.374533 containerd[1693]: time="2025-07-12T00:08:18.374200094Z" level=info msg="StopPodSandbox for \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\" returns successfully" Jul 12 00:08:18.376186 containerd[1693]: time="2025-07-12T00:08:18.375540661Z" level=info msg="RemovePodSandbox for \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\"" Jul 12 00:08:18.376186 containerd[1693]: time="2025-07-12T00:08:18.375576221Z" level=info msg="Forcibly stopping sandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\"" Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.419 [WARNING][5679] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0", GenerateName:"calico-apiserver-7d46bcb676-", Namespace:"calico-apiserver", SelfLink:"", UID:"5154b8c8-3461-45dc-b227-a58fdc2acc43", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d46bcb676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"799331c5bb5bc1a983dda613d8de6a38476ece9cde5bbb9a61a6bf378331d59c", Pod:"calico-apiserver-7d46bcb676-fkjjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliddbf88a466f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.419 [INFO][5679] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.419 [INFO][5679] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" iface="eth0" netns="" Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.419 [INFO][5679] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.419 [INFO][5679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.439 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" HandleID="k8s-pod-network.ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.439 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.439 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.448 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" HandleID="k8s-pod-network.ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.448 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" HandleID="k8s-pod-network.ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--fkjjh-eth0" Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.450 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:18.454579 containerd[1693]: 2025-07-12 00:08:18.451 [INFO][5679] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f" Jul 12 00:08:18.454579 containerd[1693]: time="2025-07-12T00:08:18.454389687Z" level=info msg="TearDown network for sandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\" successfully" Jul 12 00:08:18.483927 containerd[1693]: time="2025-07-12T00:08:18.483825631Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:18.484103 containerd[1693]: time="2025-07-12T00:08:18.483946791Z" level=info msg="RemovePodSandbox \"ff8d9140cb75807ba9661a6d1f301c799bd24698cab855648423b3a907c5bf5f\" returns successfully" Jul 12 00:08:18.485064 containerd[1693]: time="2025-07-12T00:08:18.484740515Z" level=info msg="StopPodSandbox for \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\"" Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.517 [WARNING][5700] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"af5f3704-be95-48d7-b031-1bcc62a0d210", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d", Pod:"coredns-668d6bf9bc-r7rxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali772958c4d7b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.518 [INFO][5700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.518 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" iface="eth0" netns="" Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.518 [INFO][5700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.518 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.538 [INFO][5707] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" HandleID="k8s-pod-network.4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.538 [INFO][5707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.538 [INFO][5707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.547 [WARNING][5707] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" HandleID="k8s-pod-network.4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.547 [INFO][5707] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" HandleID="k8s-pod-network.4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.549 [INFO][5707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:18.553481 containerd[1693]: 2025-07-12 00:08:18.552 [INFO][5700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:18.554532 containerd[1693]: time="2025-07-12T00:08:18.553841373Z" level=info msg="TearDown network for sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\" successfully" Jul 12 00:08:18.554532 containerd[1693]: time="2025-07-12T00:08:18.553877413Z" level=info msg="StopPodSandbox for \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\" returns successfully" Jul 12 00:08:18.556403 containerd[1693]: time="2025-07-12T00:08:18.555529742Z" level=info msg="RemovePodSandbox for \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\"" Jul 12 00:08:18.556403 containerd[1693]: time="2025-07-12T00:08:18.555566942Z" level=info msg="Forcibly stopping sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\"" Jul 12 00:08:18.603883 kubelet[3108]: I0712 00:08:18.602578 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d46bcb676-fkjjh" podStartSLOduration=34.283723185 podStartE2EDuration="42.602563052s" podCreationTimestamp="2025-07-12 00:07:36 +0000 UTC" firstStartedPulling="2025-07-12 00:08:05.80235564 +0000 UTC m=+47.605569659" lastFinishedPulling="2025-07-12 00:08:14.121195507 +0000 UTC m=+55.924409526" observedRunningTime="2025-07-12 00:08:14.585105925 +0000 UTC m=+56.388319944" watchObservedRunningTime="2025-07-12 00:08:18.602563052 +0000 UTC m=+60.405777071" Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.654 [WARNING][5721] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"af5f3704-be95-48d7-b031-1bcc62a0d210", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"96812482c730223fb1c5c69481b78e012139ad3ac15fd8fea673710d5e180f3d", Pod:"coredns-668d6bf9bc-r7rxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali772958c4d7b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.654 [INFO][5721] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.654 [INFO][5721] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" iface="eth0" netns="" Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.654 [INFO][5721] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.654 [INFO][5721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.679 [INFO][5749] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" HandleID="k8s-pod-network.4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.680 [INFO][5749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.680 [INFO][5749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.690 [WARNING][5749] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" HandleID="k8s-pod-network.4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.690 [INFO][5749] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" HandleID="k8s-pod-network.4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--r7rxs-eth0" Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.691 [INFO][5749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:18.695635 containerd[1693]: 2025-07-12 00:08:18.693 [INFO][5721] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302" Jul 12 00:08:18.696157 containerd[1693]: time="2025-07-12T00:08:18.696115830Z" level=info msg="TearDown network for sandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\" successfully" Jul 12 00:08:18.707966 containerd[1693]: time="2025-07-12T00:08:18.707816127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:18.707966 containerd[1693]: time="2025-07-12T00:08:18.707892127Z" level=info msg="RemovePodSandbox \"4ad6be5f95d5f545999eab095f6cbfad4635eabe1166c4316c7188529d469302\" returns successfully" Jul 12 00:08:18.709266 containerd[1693]: time="2025-07-12T00:08:18.708637571Z" level=info msg="StopPodSandbox for \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\"" Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.752 [WARNING][5768] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8d45aa45-e8a2-4c24-b1b6-5285c8ed5896", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5", Pod:"coredns-668d6bf9bc-nbw2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc05d1f703a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.752 [INFO][5768] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.752 [INFO][5768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" iface="eth0" netns="" Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.752 [INFO][5768] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.752 [INFO][5768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.774 [INFO][5775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" HandleID="k8s-pod-network.9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.774 [INFO][5775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.774 [INFO][5775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.784 [WARNING][5775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" HandleID="k8s-pod-network.9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.784 [INFO][5775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" HandleID="k8s-pod-network.9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.786 [INFO][5775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:18.790533 containerd[1693]: 2025-07-12 00:08:18.789 [INFO][5768] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:18.792929 containerd[1693]: time="2025-07-12T00:08:18.790570572Z" level=info msg="TearDown network for sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\" successfully" Jul 12 00:08:18.792929 containerd[1693]: time="2025-07-12T00:08:18.790595332Z" level=info msg="StopPodSandbox for \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\" returns successfully" Jul 12 00:08:18.792929 containerd[1693]: time="2025-07-12T00:08:18.791248935Z" level=info msg="RemovePodSandbox for \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\"" Jul 12 00:08:18.792929 containerd[1693]: time="2025-07-12T00:08:18.791464736Z" level=info msg="Forcibly stopping sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\"" Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.829 [WARNING][5790] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8d45aa45-e8a2-4c24-b1b6-5285c8ed5896", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"e7a0d8224ac73149ad74314337685244efa5a6abb7605ba0dd29f66665cb45f5", Pod:"coredns-668d6bf9bc-nbw2t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc05d1f703a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.830 [INFO][5790] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.830 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" iface="eth0" netns="" Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.830 [INFO][5790] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.830 [INFO][5790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.852 [INFO][5799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" HandleID="k8s-pod-network.9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.852 [INFO][5799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.852 [INFO][5799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.861 [WARNING][5799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" HandleID="k8s-pod-network.9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.861 [INFO][5799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" HandleID="k8s-pod-network.9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Workload="ci--4081.3.4--n--ddca76aad7-k8s-coredns--668d6bf9bc--nbw2t-eth0" Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.864 [INFO][5799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:18.867800 containerd[1693]: 2025-07-12 00:08:18.865 [INFO][5790] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725" Jul 12 00:08:18.867800 containerd[1693]: time="2025-07-12T00:08:18.867677109Z" level=info msg="TearDown network for sandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\" successfully" Jul 12 00:08:18.881015 containerd[1693]: time="2025-07-12T00:08:18.880798053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:18.881015 containerd[1693]: time="2025-07-12T00:08:18.880909054Z" level=info msg="RemovePodSandbox \"9669f98a14ee98b7d96f241417143835d0b70744a7f1b920961435b0cffd6725\" returns successfully" Jul 12 00:08:18.881719 containerd[1693]: time="2025-07-12T00:08:18.881342616Z" level=info msg="StopPodSandbox for \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\"" Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.926 [WARNING][5813] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"058503a3-83aa-47a4-b834-2e39d5989b2c", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598", Pod:"csi-node-driver-9q7pd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33085f2df4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.926 [INFO][5813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.926 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" iface="eth0" netns="" Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.926 [INFO][5813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.926 [INFO][5813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.945 [INFO][5820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" HandleID="k8s-pod-network.0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.945 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.945 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.953 [WARNING][5820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" HandleID="k8s-pod-network.0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.953 [INFO][5820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" HandleID="k8s-pod-network.0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.955 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:18.958500 containerd[1693]: 2025-07-12 00:08:18.956 [INFO][5813] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:18.959083 containerd[1693]: time="2025-07-12T00:08:18.958529274Z" level=info msg="TearDown network for sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\" successfully" Jul 12 00:08:18.959083 containerd[1693]: time="2025-07-12T00:08:18.958561674Z" level=info msg="StopPodSandbox for \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\" returns successfully" Jul 12 00:08:18.959083 containerd[1693]: time="2025-07-12T00:08:18.959031796Z" level=info msg="RemovePodSandbox for \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\"" Jul 12 00:08:18.959083 containerd[1693]: time="2025-07-12T00:08:18.959059796Z" level=info msg="Forcibly stopping sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\"" Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:18.994 [WARNING][5834] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"058503a3-83aa-47a4-b834-2e39d5989b2c", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598", Pod:"csi-node-driver-9q7pd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33085f2df4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:18.994 [INFO][5834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:18.994 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" iface="eth0" netns="" Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:18.994 [INFO][5834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:18.994 [INFO][5834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:19.015 [INFO][5841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" HandleID="k8s-pod-network.0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:19.015 [INFO][5841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:19.015 [INFO][5841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:19.024 [WARNING][5841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" HandleID="k8s-pod-network.0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:19.025 [INFO][5841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" HandleID="k8s-pod-network.0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Workload="ci--4081.3.4--n--ddca76aad7-k8s-csi--node--driver--9q7pd-eth0" Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:19.026 [INFO][5841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:19.030164 containerd[1693]: 2025-07-12 00:08:19.028 [INFO][5834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291" Jul 12 00:08:19.032174 containerd[1693]: time="2025-07-12T00:08:19.030289225Z" level=info msg="TearDown network for sandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\" successfully" Jul 12 00:08:19.070245 containerd[1693]: time="2025-07-12T00:08:19.070049979Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:19.070245 containerd[1693]: time="2025-07-12T00:08:19.070154460Z" level=info msg="RemovePodSandbox \"0d69f421e841d28881f6396064a4a385d37060ffd4ab69426eb59450f8f24291\" returns successfully" Jul 12 00:08:19.070889 containerd[1693]: time="2025-07-12T00:08:19.070844983Z" level=info msg="StopPodSandbox for \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\"" Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.111 [WARNING][5856] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0", GenerateName:"calico-kube-controllers-6b8d4c7b8b-", Namespace:"calico-system", SelfLink:"", UID:"e11846b7-c405-4d2c-8bcf-444534f1feee", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8d4c7b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28", Pod:"calico-kube-controllers-6b8d4c7b8b-5jx6w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd88b801762", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.111 [INFO][5856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.111 [INFO][5856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" iface="eth0" netns="" Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.111 [INFO][5856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.111 [INFO][5856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.132 [INFO][5863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" HandleID="k8s-pod-network.a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.132 [INFO][5863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.132 [INFO][5863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.140 [WARNING][5863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" HandleID="k8s-pod-network.a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.140 [INFO][5863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" HandleID="k8s-pod-network.a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.141 [INFO][5863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:19.145147 containerd[1693]: 2025-07-12 00:08:19.143 [INFO][5856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:19.145147 containerd[1693]: time="2025-07-12T00:08:19.145118147Z" level=info msg="TearDown network for sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\" successfully" Jul 12 00:08:19.145147 containerd[1693]: time="2025-07-12T00:08:19.145143707Z" level=info msg="StopPodSandbox for \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\" returns successfully" Jul 12 00:08:19.146291 containerd[1693]: time="2025-07-12T00:08:19.145673830Z" level=info msg="RemovePodSandbox for \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\"" Jul 12 00:08:19.146291 containerd[1693]: time="2025-07-12T00:08:19.145720910Z" level=info msg="Forcibly stopping sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\"" Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.178 [WARNING][5877] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0", GenerateName:"calico-kube-controllers-6b8d4c7b8b-", Namespace:"calico-system", SelfLink:"", UID:"e11846b7-c405-4d2c-8bcf-444534f1feee", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8d4c7b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"a490083c9e98aa46874098c1a097852f7fbe3a0bc23ffbd9b1bef2f77e17cd28", Pod:"calico-kube-controllers-6b8d4c7b8b-5jx6w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd88b801762", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.178 [INFO][5877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.178 [INFO][5877] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" iface="eth0" netns="" Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.178 [INFO][5877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.178 [INFO][5877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.204 [INFO][5884] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" HandleID="k8s-pod-network.a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.204 [INFO][5884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.204 [INFO][5884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.212 [WARNING][5884] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" HandleID="k8s-pod-network.a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.212 [INFO][5884] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" HandleID="k8s-pod-network.a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--kube--controllers--6b8d4c7b8b--5jx6w-eth0" Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.214 [INFO][5884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:19.219112 containerd[1693]: 2025-07-12 00:08:19.215 [INFO][5877] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284" Jul 12 00:08:19.219591 containerd[1693]: time="2025-07-12T00:08:19.219167869Z" level=info msg="TearDown network for sandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\" successfully" Jul 12 00:08:19.237190 containerd[1693]: time="2025-07-12T00:08:19.237108957Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:19.237313 containerd[1693]: time="2025-07-12T00:08:19.237235918Z" level=info msg="RemovePodSandbox \"a35ed07ccbf508ad209edc9165b95d3840c25c6472303b6cad094a876c6a2284\" returns successfully" Jul 12 00:08:19.237740 containerd[1693]: time="2025-07-12T00:08:19.237705760Z" level=info msg="StopPodSandbox for \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\"" Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.282 [WARNING][5898] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"44b464a5-2b46-4c57-9fe7-32dfead6264d", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892", Pod:"goldmane-768f4c5c69-8v7xh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94136ea498e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.282 [INFO][5898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.282 [INFO][5898] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" iface="eth0" netns="" Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.282 [INFO][5898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.282 [INFO][5898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.325 [INFO][5905] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" HandleID="k8s-pod-network.cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.325 [INFO][5905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.325 [INFO][5905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.333 [WARNING][5905] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" HandleID="k8s-pod-network.cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.333 [INFO][5905] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" HandleID="k8s-pod-network.cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.335 [INFO][5905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:19.338957 containerd[1693]: 2025-07-12 00:08:19.337 [INFO][5898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:19.339657 containerd[1693]: time="2025-07-12T00:08:19.339538538Z" level=info msg="TearDown network for sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\" successfully" Jul 12 00:08:19.339657 containerd[1693]: time="2025-07-12T00:08:19.339567578Z" level=info msg="StopPodSandbox for \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\" returns successfully" Jul 12 00:08:19.340329 containerd[1693]: time="2025-07-12T00:08:19.340299462Z" level=info msg="RemovePodSandbox for \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\"" Jul 12 00:08:19.340381 containerd[1693]: time="2025-07-12T00:08:19.340368582Z" level=info msg="Forcibly stopping sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\"" Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.375 [WARNING][5920] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"44b464a5-2b46-4c57-9fe7-32dfead6264d", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"bee1ff502a35bc6b8634815b3a3c1a3f36cdc95168ee19719c5f44a9a360e892", Pod:"goldmane-768f4c5c69-8v7xh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94136ea498e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.376 [INFO][5920] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.376 [INFO][5920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" iface="eth0" netns="" Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.376 [INFO][5920] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.376 [INFO][5920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.403 [INFO][5927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" HandleID="k8s-pod-network.cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.403 [INFO][5927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.404 [INFO][5927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.415 [WARNING][5927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" HandleID="k8s-pod-network.cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.693 [INFO][5927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" HandleID="k8s-pod-network.cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Workload="ci--4081.3.4--n--ddca76aad7-k8s-goldmane--768f4c5c69--8v7xh-eth0" Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.697 [INFO][5927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:19.702489 containerd[1693]: 2025-07-12 00:08:19.699 [INFO][5920] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294" Jul 12 00:08:19.702489 containerd[1693]: time="2025-07-12T00:08:19.702459095Z" level=info msg="TearDown network for sandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\" successfully" Jul 12 00:08:20.066391 containerd[1693]: time="2025-07-12T00:08:20.066336924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:20.066916 containerd[1693]: time="2025-07-12T00:08:20.066412485Z" level=info msg="RemovePodSandbox \"cbf52aa48739c8c9be98806820276f035ac4f74d0013ebb4ce11eb724dd5f294\" returns successfully" Jul 12 00:08:20.067302 containerd[1693]: time="2025-07-12T00:08:20.067270611Z" level=info msg="StopPodSandbox for \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\"" Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.125 [WARNING][5966] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-whisker--6fd7dd5445--ztqj8-eth0" Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.126 [INFO][5966] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.126 [INFO][5966] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" iface="eth0" netns="" Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.126 [INFO][5966] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.126 [INFO][5966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.151 [INFO][5973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" HandleID="k8s-pod-network.316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--6fd7dd5445--ztqj8-eth0" Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.151 [INFO][5973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.151 [INFO][5973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.162 [WARNING][5973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" HandleID="k8s-pod-network.316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--6fd7dd5445--ztqj8-eth0" Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.162 [INFO][5973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" HandleID="k8s-pod-network.316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--6fd7dd5445--ztqj8-eth0" Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.165 [INFO][5973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:20.170326 containerd[1693]: 2025-07-12 00:08:20.167 [INFO][5966] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:08:20.170950 containerd[1693]: time="2025-07-12T00:08:20.170367659Z" level=info msg="TearDown network for sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\" successfully" Jul 12 00:08:20.170950 containerd[1693]: time="2025-07-12T00:08:20.170398819Z" level=info msg="StopPodSandbox for \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\" returns successfully" Jul 12 00:08:20.170950 containerd[1693]: time="2025-07-12T00:08:20.170811542Z" level=info msg="RemovePodSandbox for \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\"" Jul 12 00:08:20.170950 containerd[1693]: time="2025-07-12T00:08:20.170836662Z" level=info msg="Forcibly stopping sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\"" Jul 12 00:08:20.256449 containerd[1693]: time="2025-07-12T00:08:20.256403779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.226 [WARNING][5987] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" WorkloadEndpoint="ci--4081.3.4--n--ddca76aad7-k8s-whisker--6fd7dd5445--ztqj8-eth0" Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.226 [INFO][5987] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.226 [INFO][5987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" iface="eth0" netns="" Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.226 [INFO][5987] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.226 [INFO][5987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.245 [INFO][5995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" HandleID="k8s-pod-network.316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--6fd7dd5445--ztqj8-eth0" Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.245 [INFO][5995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.245 [INFO][5995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.253 [WARNING][5995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" HandleID="k8s-pod-network.316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--6fd7dd5445--ztqj8-eth0" Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.253 [INFO][5995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" HandleID="k8s-pod-network.316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-whisker--6fd7dd5445--ztqj8-eth0" Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.254 [INFO][5995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:20.259244 containerd[1693]: 2025-07-12 00:08:20.256 [INFO][5987] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b" Jul 12 00:08:20.259244 containerd[1693]: time="2025-07-12T00:08:20.259171480Z" level=info msg="TearDown network for sandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\" successfully" Jul 12 00:08:20.272760 containerd[1693]: time="2025-07-12T00:08:20.272683020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 12 00:08:20.276799 containerd[1693]: time="2025-07-12T00:08:20.276760571Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:20.296159 containerd[1693]: time="2025-07-12T00:08:20.296114075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:08:20.296310 containerd[1693]: time="2025-07-12T00:08:20.296187995Z" level=info msg="RemovePodSandbox \"316ec2ba0bc1a6968a8161f8e9ec9b822e3be257f83f4aaf7854be938c43ed9b\" returns successfully" Jul 12 00:08:20.296765 containerd[1693]: time="2025-07-12T00:08:20.296729839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:20.297078 containerd[1693]: time="2025-07-12T00:08:20.297052602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 2.3094174s" Jul 12 00:08:20.297106 containerd[1693]: time="2025-07-12T00:08:20.297082082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:08:20.298550 containerd[1693]: time="2025-07-12T00:08:20.297926248Z" level=info msg="StopPodSandbox for \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\"" Jul 12 00:08:20.304194 containerd[1693]: time="2025-07-12T00:08:20.304167975Z" level=info msg="CreateContainer within sandbox \"6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:08:20.360087 containerd[1693]: time="2025-07-12T00:08:20.359980310Z" level=info msg="CreateContainer within sandbox \"6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4fb1b438fa5ce1f13d7596256a5b3e054c44655df19e7defbf574e8ea5235c70\"" Jul 12 00:08:20.364258 containerd[1693]: time="2025-07-12T00:08:20.363338255Z" level=info msg="StartContainer for \"4fb1b438fa5ce1f13d7596256a5b3e054c44655df19e7defbf574e8ea5235c70\"" Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.338 [WARNING][6009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0", GenerateName:"calico-apiserver-7d46bcb676-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f4d993b-f232-4951-84c0-c4c4ed832470", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d46bcb676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05", Pod:"calico-apiserver-7d46bcb676-g8p92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2c7917eb20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.338 [INFO][6009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.338 [INFO][6009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" iface="eth0" netns="" Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.338 [INFO][6009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.338 [INFO][6009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.366 [INFO][6016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" HandleID="k8s-pod-network.a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.367 [INFO][6016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.367 [INFO][6016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.376 [WARNING][6016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" HandleID="k8s-pod-network.a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.376 [INFO][6016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" HandleID="k8s-pod-network.a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.378 [INFO][6016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:20.383300 containerd[1693]: 2025-07-12 00:08:20.380 [INFO][6009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:20.383731 containerd[1693]: time="2025-07-12T00:08:20.383336924Z" level=info msg="TearDown network for sandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\" successfully" Jul 12 00:08:20.383731 containerd[1693]: time="2025-07-12T00:08:20.383363684Z" level=info msg="StopPodSandbox for \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\" returns successfully" Jul 12 00:08:20.384970 containerd[1693]: time="2025-07-12T00:08:20.384421812Z" level=info msg="RemovePodSandbox for \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\"" Jul 12 00:08:20.385123 containerd[1693]: time="2025-07-12T00:08:20.385097977Z" level=info msg="Forcibly stopping sandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\"" Jul 12 00:08:20.410409 systemd[1]: Started cri-containerd-4fb1b438fa5ce1f13d7596256a5b3e054c44655df19e7defbf574e8ea5235c70.scope - libcontainer container 4fb1b438fa5ce1f13d7596256a5b3e054c44655df19e7defbf574e8ea5235c70. Jul 12 00:08:20.458773 containerd[1693]: time="2025-07-12T00:08:20.457632317Z" level=info msg="StartContainer for \"4fb1b438fa5ce1f13d7596256a5b3e054c44655df19e7defbf574e8ea5235c70\" returns successfully" Jul 12 00:08:20.461711 containerd[1693]: time="2025-07-12T00:08:20.461109903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.444 [WARNING][6043] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0", GenerateName:"calico-apiserver-7d46bcb676-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f4d993b-f232-4951-84c0-c4c4ed832470", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d46bcb676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-ddca76aad7", ContainerID:"f365f180bb6f7e0b4817970d0fdb70e8885d952f5a676332ac761c4d9d361e05", Pod:"calico-apiserver-7d46bcb676-g8p92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2c7917eb20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.444 [INFO][6043] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.444 [INFO][6043] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" iface="eth0" netns="" Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.444 [INFO][6043] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.444 [INFO][6043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.471 [INFO][6066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" HandleID="k8s-pod-network.a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.471 [INFO][6066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.471 [INFO][6066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.479 [WARNING][6066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" HandleID="k8s-pod-network.a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.479 [INFO][6066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" HandleID="k8s-pod-network.a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Workload="ci--4081.3.4--n--ddca76aad7-k8s-calico--apiserver--7d46bcb676--g8p92-eth0" Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.481 [INFO][6066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:20.484482 containerd[1693]: 2025-07-12 00:08:20.482 [INFO][6043] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b" Jul 12 00:08:20.485161 containerd[1693]: time="2025-07-12T00:08:20.484840000Z" level=info msg="TearDown network for sandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\" successfully" Jul 12 00:08:20.500629 containerd[1693]: time="2025-07-12T00:08:20.500444556Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:20.500629 containerd[1693]: time="2025-07-12T00:08:20.500519796Z" level=info msg="RemovePodSandbox \"a7aafe8475bb6e98f2df8d1b498955726d8c6ed627d4a4b4b720eb86b2b4ec9b\" returns successfully" Jul 12 00:08:21.876011 containerd[1693]: time="2025-07-12T00:08:21.875960156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:21.884575 containerd[1693]: time="2025-07-12T00:08:21.884371699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 00:08:21.893240 containerd[1693]: time="2025-07-12T00:08:21.892976003Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:21.899410 containerd[1693]: time="2025-07-12T00:08:21.899372851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:21.900526 containerd[1693]: time="2025-07-12T00:08:21.899998335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.438849712s" Jul 12 00:08:21.900526 containerd[1693]: time="2025-07-12T00:08:21.900036456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:08:21.902084 containerd[1693]: 
time="2025-07-12T00:08:21.902049391Z" level=info msg="CreateContainer within sandbox \"6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:08:21.960405 containerd[1693]: time="2025-07-12T00:08:21.960356825Z" level=info msg="CreateContainer within sandbox \"6768e0cd2dc443ce6e7b810fb93f6dc5f359c1366e4deb75fbeee37330046598\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c3e5866a5a716031524e8cc35587844f627ee226c3e6719e9f50919060803c0a\"" Jul 12 00:08:21.961316 containerd[1693]: time="2025-07-12T00:08:21.961239871Z" level=info msg="StartContainer for \"c3e5866a5a716031524e8cc35587844f627ee226c3e6719e9f50919060803c0a\"" Jul 12 00:08:21.995325 systemd[1]: run-containerd-runc-k8s.io-c3e5866a5a716031524e8cc35587844f627ee226c3e6719e9f50919060803c0a-runc.EBDgG7.mount: Deactivated successfully. Jul 12 00:08:22.003407 systemd[1]: Started cri-containerd-c3e5866a5a716031524e8cc35587844f627ee226c3e6719e9f50919060803c0a.scope - libcontainer container c3e5866a5a716031524e8cc35587844f627ee226c3e6719e9f50919060803c0a. Jul 12 00:08:22.036467 containerd[1693]: time="2025-07-12T00:08:22.036349270Z" level=info msg="StartContainer for \"c3e5866a5a716031524e8cc35587844f627ee226c3e6719e9f50919060803c0a\" returns successfully" Jul 12 00:08:22.434968 kubelet[3108]: I0712 00:08:22.434928 3108 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:08:22.434968 kubelet[3108]: I0712 00:08:22.434978 3108 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:08:22.626919 kubelet[3108]: I0712 00:08:22.626301 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-8v7xh" podStartSLOduration=33.020894256 podStartE2EDuration="42.626284822s" podCreationTimestamp="2025-07-12 00:07:40 +0000 UTC" firstStartedPulling="2025-07-12 00:08:08.381831874 +0000 UTC m=+50.185045893" lastFinishedPulling="2025-07-12 00:08:17.98722248 +0000 UTC m=+59.790436459" observedRunningTime="2025-07-12 00:08:18.607001953 +0000 UTC m=+60.410215972" watchObservedRunningTime="2025-07-12 00:08:22.626284822 +0000 UTC m=+64.429498841" Jul 12 00:08:30.591215 kubelet[3108]: I0712 00:08:30.589552 3108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9q7pd" podStartSLOduration=36.100343753 podStartE2EDuration="49.589534317s" podCreationTimestamp="2025-07-12 00:07:41 +0000 UTC" firstStartedPulling="2025-07-12 00:08:08.411458856 +0000 UTC m=+50.214672875" lastFinishedPulling="2025-07-12 00:08:21.90064942 +0000 UTC m=+63.703863439" observedRunningTime="2025-07-12 00:08:22.628484439 +0000 UTC m=+64.431698498" watchObservedRunningTime="2025-07-12 00:08:30.589534317 +0000 UTC m=+72.392748336" Jul 12 00:08:43.303654 systemd[1]: run-containerd-runc-k8s.io-65f3b93d27b94b5476d38287bdc084fa9a3943a5835de6747aace1fbb66450d4-runc.snZF10.mount: Deactivated successfully. 
Jul 12 00:08:44.235974 kubelet[3108]: I0712 00:08:44.235202 3108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:50.276527 kubelet[3108]: I0712 00:08:50.275226 3108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:06.519525 systemd[1]: Started sshd@7-10.200.20.43:22-10.200.16.10:56112.service - OpenSSH per-connection server daemon (10.200.16.10:56112). Jul 12 00:09:06.943030 systemd[1]: run-containerd-runc-k8s.io-a15d12eef08e1c65c2e16e6f2005f99323fc04b42fd2d6e67d8c58699d780da1-runc.MX9oYt.mount: Deactivated successfully. Jul 12 00:09:06.980371 sshd[6249]: Accepted publickey for core from 10.200.16.10 port 56112 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:06.983342 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:06.989852 systemd-logind[1659]: New session 10 of user core. Jul 12 00:09:06.994683 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:09:07.420619 sshd[6249]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:07.427440 systemd-logind[1659]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:09:07.428451 systemd[1]: sshd@7-10.200.20.43:22-10.200.16.10:56112.service: Deactivated successfully. Jul 12 00:09:07.432936 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:09:07.436794 systemd-logind[1659]: Removed session 10. Jul 12 00:09:12.507921 systemd[1]: Started sshd@8-10.200.20.43:22-10.200.16.10:52680.service - OpenSSH per-connection server daemon (10.200.16.10:52680). Jul 12 00:09:12.962914 sshd[6286]: Accepted publickey for core from 10.200.16.10 port 52680 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:12.964684 sshd[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:12.969519 systemd-logind[1659]: New session 11 of user core. Jul 12 00:09:12.973369 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:09:13.398801 sshd[6286]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:13.405669 systemd[1]: sshd@8-10.200.20.43:22-10.200.16.10:52680.service: Deactivated successfully. Jul 12 00:09:13.412631 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:09:13.414185 systemd-logind[1659]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:09:13.416150 systemd-logind[1659]: Removed session 11. Jul 12 00:09:18.486019 systemd[1]: Started sshd@9-10.200.20.43:22-10.200.16.10:52688.service - OpenSSH per-connection server daemon (10.200.16.10:52688). Jul 12 00:09:18.981545 sshd[6322]: Accepted publickey for core from 10.200.16.10 port 52688 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:18.982877 sshd[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:18.987336 systemd-logind[1659]: New session 12 of user core. Jul 12 00:09:18.992382 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 00:09:19.400906 sshd[6322]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:19.404051 systemd[1]: sshd@9-10.200.20.43:22-10.200.16.10:52688.service: Deactivated successfully. Jul 12 00:09:19.406158 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:09:19.407644 systemd-logind[1659]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:09:19.409303 systemd-logind[1659]: Removed session 12. 
Jul 12 00:09:19.485804 systemd[1]: Started sshd@10-10.200.20.43:22-10.200.16.10:52692.service - OpenSSH per-connection server daemon (10.200.16.10:52692). Jul 12 00:09:19.933716 sshd[6336]: Accepted publickey for core from 10.200.16.10 port 52692 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:19.934277 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:19.938760 systemd-logind[1659]: New session 13 of user core. Jul 12 00:09:19.942429 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 00:09:20.364274 sshd[6336]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:20.367735 systemd[1]: sshd@10-10.200.20.43:22-10.200.16.10:52692.service: Deactivated successfully. Jul 12 00:09:20.370282 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:09:20.371190 systemd-logind[1659]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:09:20.373140 systemd-logind[1659]: Removed session 13. Jul 12 00:09:20.443602 systemd[1]: Started sshd@11-10.200.20.43:22-10.200.16.10:40172.service - OpenSSH per-connection server daemon (10.200.16.10:40172). Jul 12 00:09:20.872718 sshd[6367]: Accepted publickey for core from 10.200.16.10 port 40172 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:20.874270 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:20.878688 systemd-logind[1659]: New session 14 of user core. Jul 12 00:09:20.883397 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 00:09:21.266919 sshd[6367]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:21.269629 systemd[1]: sshd@11-10.200.20.43:22-10.200.16.10:40172.service: Deactivated successfully. Jul 12 00:09:21.272701 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:09:21.274787 systemd-logind[1659]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:09:21.275880 systemd-logind[1659]: Removed session 14. Jul 12 00:09:26.348887 systemd[1]: Started sshd@12-10.200.20.43:22-10.200.16.10:40180.service - OpenSSH per-connection server daemon (10.200.16.10:40180). Jul 12 00:09:26.781726 sshd[6392]: Accepted publickey for core from 10.200.16.10 port 40180 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:26.783782 sshd[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:26.790416 systemd-logind[1659]: New session 15 of user core. Jul 12 00:09:26.799557 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 00:09:27.182458 sshd[6392]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:27.187499 systemd[1]: sshd@12-10.200.20.43:22-10.200.16.10:40180.service: Deactivated successfully. Jul 12 00:09:27.190470 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:09:27.191554 systemd-logind[1659]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:09:27.194067 systemd-logind[1659]: Removed session 15. Jul 12 00:09:32.274507 systemd[1]: Started sshd@13-10.200.20.43:22-10.200.16.10:38526.service - OpenSSH per-connection server daemon (10.200.16.10:38526). 
Jul 12 00:09:32.719934 sshd[6428]: Accepted publickey for core from 10.200.16.10 port 38526 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:32.721314 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:32.726087 systemd-logind[1659]: New session 16 of user core. Jul 12 00:09:32.733548 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 00:09:33.139808 sshd[6428]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:33.143501 systemd[1]: sshd@13-10.200.20.43:22-10.200.16.10:38526.service: Deactivated successfully. Jul 12 00:09:33.146673 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:09:33.147673 systemd-logind[1659]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:09:33.149435 systemd-logind[1659]: Removed session 16. Jul 12 00:09:38.229793 systemd[1]: Started sshd@14-10.200.20.43:22-10.200.16.10:38540.service - OpenSSH per-connection server daemon (10.200.16.10:38540). Jul 12 00:09:38.686737 sshd[6463]: Accepted publickey for core from 10.200.16.10 port 38540 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:38.688138 sshd[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:38.692096 systemd-logind[1659]: New session 17 of user core. Jul 12 00:09:38.704380 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 12 00:09:39.143257 sshd[6463]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:39.147047 systemd[1]: sshd@14-10.200.20.43:22-10.200.16.10:38540.service: Deactivated successfully. Jul 12 00:09:39.150561 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:09:39.151892 systemd-logind[1659]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:09:39.154095 systemd-logind[1659]: Removed session 17. Jul 12 00:09:44.225499 systemd[1]: Started sshd@15-10.200.20.43:22-10.200.16.10:44832.service - OpenSSH per-connection server daemon (10.200.16.10:44832). Jul 12 00:09:44.676989 sshd[6517]: Accepted publickey for core from 10.200.16.10 port 44832 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:44.679145 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:44.688288 systemd-logind[1659]: New session 18 of user core. Jul 12 00:09:44.689613 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:09:45.096454 sshd[6517]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:45.100571 systemd[1]: sshd@15-10.200.20.43:22-10.200.16.10:44832.service: Deactivated successfully. Jul 12 00:09:45.104117 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:09:45.105099 systemd-logind[1659]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:09:45.106171 systemd-logind[1659]: Removed session 18. Jul 12 00:09:45.173483 systemd[1]: Started sshd@16-10.200.20.43:22-10.200.16.10:44846.service - OpenSSH per-connection server daemon (10.200.16.10:44846). Jul 12 00:09:45.604700 sshd[6530]: Accepted publickey for core from 10.200.16.10 port 44846 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:45.607004 sshd[6530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:45.614969 systemd-logind[1659]: New session 19 of user core. Jul 12 00:09:45.619450 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 12 00:09:46.213528 sshd[6530]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:46.217366 systemd-logind[1659]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:09:46.217552 systemd[1]: sshd@16-10.200.20.43:22-10.200.16.10:44846.service: Deactivated successfully. Jul 12 00:09:46.221351 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:09:46.224169 systemd-logind[1659]: Removed session 19. Jul 12 00:09:46.309600 systemd[1]: Started sshd@17-10.200.20.43:22-10.200.16.10:44854.service - OpenSSH per-connection server daemon (10.200.16.10:44854). Jul 12 00:09:46.804606 sshd[6541]: Accepted publickey for core from 10.200.16.10 port 44854 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:46.807395 sshd[6541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:46.814369 systemd-logind[1659]: New session 20 of user core. Jul 12 00:09:46.820420 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 00:09:48.037570 sshd[6541]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:48.041649 systemd[1]: sshd@17-10.200.20.43:22-10.200.16.10:44854.service: Deactivated successfully. Jul 12 00:09:48.043933 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:09:48.045241 systemd-logind[1659]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:09:48.046125 systemd-logind[1659]: Removed session 20. Jul 12 00:09:48.114272 systemd[1]: Started sshd@18-10.200.20.43:22-10.200.16.10:44866.service - OpenSSH per-connection server daemon (10.200.16.10:44866). Jul 12 00:09:48.544027 sshd[6559]: Accepted publickey for core from 10.200.16.10 port 44866 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:48.545718 sshd[6559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:48.549610 systemd-logind[1659]: New session 21 of user core. Jul 12 00:09:48.559581 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 12 00:09:49.065827 sshd[6559]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:49.070981 systemd[1]: sshd@18-10.200.20.43:22-10.200.16.10:44866.service: Deactivated successfully. Jul 12 00:09:49.073622 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:09:49.074593 systemd-logind[1659]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:09:49.075535 systemd-logind[1659]: Removed session 21. Jul 12 00:09:49.164475 systemd[1]: Started sshd@19-10.200.20.43:22-10.200.16.10:44870.service - OpenSSH per-connection server daemon (10.200.16.10:44870). Jul 12 00:09:49.635473 sshd[6570]: Accepted publickey for core from 10.200.16.10 port 44870 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:49.636587 sshd[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:49.643597 systemd-logind[1659]: New session 22 of user core. Jul 12 00:09:49.650397 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 12 00:09:50.037491 sshd[6570]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:50.041386 systemd[1]: sshd@19-10.200.20.43:22-10.200.16.10:44870.service: Deactivated successfully. Jul 12 00:09:50.043119 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:09:50.043907 systemd-logind[1659]: Session 22 logged out. Waiting for processes to exit. Jul 12 00:09:50.044973 systemd-logind[1659]: Removed session 22. 
Jul 12 00:09:55.116501 systemd[1]: Started sshd@20-10.200.20.43:22-10.200.16.10:39284.service - OpenSSH per-connection server daemon (10.200.16.10:39284). Jul 12 00:09:55.539463 sshd[6606]: Accepted publickey for core from 10.200.16.10 port 39284 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:55.540744 sshd[6606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:55.545289 systemd-logind[1659]: New session 23 of user core. Jul 12 00:09:55.550378 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 12 00:09:55.929468 sshd[6606]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:55.933164 systemd-logind[1659]: Session 23 logged out. Waiting for processes to exit. Jul 12 00:09:55.933829 systemd[1]: sshd@20-10.200.20.43:22-10.200.16.10:39284.service: Deactivated successfully. Jul 12 00:09:55.936044 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:09:55.937484 systemd-logind[1659]: Removed session 23. Jul 12 00:10:01.016662 systemd[1]: Started sshd@21-10.200.20.43:22-10.200.16.10:60552.service - OpenSSH per-connection server daemon (10.200.16.10:60552). Jul 12 00:10:01.486165 sshd[6642]: Accepted publickey for core from 10.200.16.10 port 60552 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:01.487844 sshd[6642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:01.491816 systemd-logind[1659]: New session 24 of user core. Jul 12 00:10:01.499414 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 12 00:10:01.897350 sshd[6642]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:01.901353 systemd-logind[1659]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:10:01.903582 systemd[1]: sshd@21-10.200.20.43:22-10.200.16.10:60552.service: Deactivated successfully. Jul 12 00:10:01.906177 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:10:01.908078 systemd-logind[1659]: Removed session 24. Jul 12 00:10:06.978082 systemd[1]: Started sshd@22-10.200.20.43:22-10.200.16.10:60568.service - OpenSSH per-connection server daemon (10.200.16.10:60568). Jul 12 00:10:07.445720 sshd[6674]: Accepted publickey for core from 10.200.16.10 port 60568 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:07.447111 sshd[6674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:07.451176 systemd-logind[1659]: New session 25 of user core. Jul 12 00:10:07.457383 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 12 00:10:07.850326 sshd[6674]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:07.853969 systemd-logind[1659]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:10:07.854773 systemd[1]: sshd@22-10.200.20.43:22-10.200.16.10:60568.service: Deactivated successfully. Jul 12 00:10:07.857045 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:10:07.859956 systemd-logind[1659]: Removed session 25. Jul 12 00:10:12.947345 systemd[1]: Started sshd@23-10.200.20.43:22-10.200.16.10:49796.service - OpenSSH per-connection server daemon (10.200.16.10:49796). 
Jul 12 00:10:13.439759 sshd[6688]: Accepted publickey for core from 10.200.16.10 port 49796 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:13.442188 sshd[6688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:13.447251 systemd-logind[1659]: New session 26 of user core. Jul 12 00:10:13.453384 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 12 00:10:13.599062 systemd[1]: run-containerd-runc-k8s.io-a15d12eef08e1c65c2e16e6f2005f99323fc04b42fd2d6e67d8c58699d780da1-runc.0HqQTb.mount: Deactivated successfully. Jul 12 00:10:13.875454 sshd[6688]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:13.878581 systemd[1]: sshd@23-10.200.20.43:22-10.200.16.10:49796.service: Deactivated successfully. Jul 12 00:10:13.881937 systemd[1]: session-26.scope: Deactivated successfully. Jul 12 00:10:13.886350 systemd-logind[1659]: Session 26 logged out. Waiting for processes to exit. Jul 12 00:10:13.887713 systemd-logind[1659]: Removed session 26. Jul 12 00:10:18.952418 systemd[1]: Started sshd@24-10.200.20.43:22-10.200.16.10:49806.service - OpenSSH per-connection server daemon (10.200.16.10:49806). Jul 12 00:10:19.383295 sshd[6722]: Accepted publickey for core from 10.200.16.10 port 49806 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:19.385458 sshd[6722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:19.390961 systemd-logind[1659]: New session 27 of user core. Jul 12 00:10:19.396442 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 12 00:10:19.788695 sshd[6722]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:19.791984 systemd-logind[1659]: Session 27 logged out. Waiting for processes to exit. Jul 12 00:10:19.792964 systemd[1]: sshd@24-10.200.20.43:22-10.200.16.10:49806.service: Deactivated successfully. Jul 12 00:10:19.794984 systemd[1]: session-27.scope: Deactivated successfully. Jul 12 00:10:19.796484 systemd-logind[1659]: Removed session 27. Jul 12 00:10:24.874480 systemd[1]: Started sshd@25-10.200.20.43:22-10.200.16.10:39500.service - OpenSSH per-connection server daemon (10.200.16.10:39500). Jul 12 00:10:25.296720 sshd[6758]: Accepted publickey for core from 10.200.16.10 port 39500 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:25.298042 sshd[6758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:25.302427 systemd-logind[1659]: New session 28 of user core. Jul 12 00:10:25.308365 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 12 00:10:25.691832 sshd[6758]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:25.694776 systemd-logind[1659]: Session 28 logged out. Waiting for processes to exit. Jul 12 00:10:25.695471 systemd[1]: sshd@25-10.200.20.43:22-10.200.16.10:39500.service: Deactivated successfully. Jul 12 00:10:25.698403 systemd[1]: session-28.scope: Deactivated successfully. Jul 12 00:10:25.701006 systemd-logind[1659]: Removed session 28.