Apr 30 00:34:29.295102 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 30 00:34:29.295124 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025 Apr 30 00:34:29.295132 kernel: KASLR enabled Apr 30 00:34:29.295138 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Apr 30 00:34:29.295145 kernel: printk: bootconsole [pl11] enabled Apr 30 00:34:29.295151 kernel: efi: EFI v2.7 by EDK II Apr 30 00:34:29.295158 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Apr 30 00:34:29.295164 kernel: random: crng init done Apr 30 00:34:29.295170 kernel: ACPI: Early table checksum verification disabled Apr 30 00:34:29.295175 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Apr 30 00:34:29.295181 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295187 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295195 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 30 00:34:29.295201 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295208 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295214 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295221 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295228 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295235 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295241 
kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Apr 30 00:34:29.295247 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295254 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Apr 30 00:34:29.295260 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Apr 30 00:34:29.295266 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Apr 30 00:34:29.295272 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Apr 30 00:34:29.295278 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Apr 30 00:34:29.295285 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Apr 30 00:34:29.295291 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Apr 30 00:34:29.295299 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Apr 30 00:34:29.295305 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Apr 30 00:34:29.295311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Apr 30 00:34:29.295318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Apr 30 00:34:29.295324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Apr 30 00:34:29.295330 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Apr 30 00:34:29.295336 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Apr 30 00:34:29.295342 kernel: Zone ranges: Apr 30 00:34:29.295348 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Apr 30 00:34:29.295355 kernel: DMA32 empty Apr 30 00:34:29.295361 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Apr 30 00:34:29.295367 kernel: Movable zone start for each node Apr 30 00:34:29.295377 kernel: Early memory node ranges Apr 30 00:34:29.295384 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Apr 30 00:34:29.295391 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Apr 30 00:34:29.295397 kernel: 
node 0: [mem 0x000000003e550000-0x000000003e87ffff] Apr 30 00:34:29.295404 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Apr 30 00:34:29.295412 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Apr 30 00:34:29.295419 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Apr 30 00:34:29.295425 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Apr 30 00:34:29.295432 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Apr 30 00:34:29.295439 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Apr 30 00:34:29.295445 kernel: psci: probing for conduit method from ACPI. Apr 30 00:34:29.295452 kernel: psci: PSCIv1.1 detected in firmware. Apr 30 00:34:29.295459 kernel: psci: Using standard PSCI v0.2 function IDs Apr 30 00:34:29.295465 kernel: psci: MIGRATE_INFO_TYPE not supported. Apr 30 00:34:29.295472 kernel: psci: SMC Calling Convention v1.4 Apr 30 00:34:29.295479 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Apr 30 00:34:29.295485 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Apr 30 00:34:29.295493 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Apr 30 00:34:29.295500 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Apr 30 00:34:29.295507 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 30 00:34:29.295513 kernel: Detected PIPT I-cache on CPU0 Apr 30 00:34:29.295520 kernel: CPU features: detected: GIC system register CPU interface Apr 30 00:34:29.295527 kernel: CPU features: detected: Hardware dirty bit management Apr 30 00:34:29.295533 kernel: CPU features: detected: Spectre-BHB Apr 30 00:34:29.295540 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 30 00:34:29.295547 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 30 00:34:29.295553 kernel: CPU features: detected: ARM erratum 1418040 Apr 30 00:34:29.295560 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Apr 30 00:34:29.295568 kernel: CPU features: 
detected: SSBS not fully self-synchronizing Apr 30 00:34:29.295574 kernel: alternatives: applying boot alternatives Apr 30 00:34:29.295582 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a Apr 30 00:34:29.295590 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 00:34:29.295596 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 00:34:29.295603 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 00:34:29.295610 kernel: Fallback order for Node 0: 0 Apr 30 00:34:29.295616 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Apr 30 00:34:29.295623 kernel: Policy zone: Normal Apr 30 00:34:29.295629 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 00:34:29.295636 kernel: software IO TLB: area num 2. Apr 30 00:34:29.295644 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Apr 30 00:34:29.295651 kernel: Memory: 3982688K/4194160K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 211472K reserved, 0K cma-reserved) Apr 30 00:34:29.295658 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 00:34:29.295665 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 00:34:29.295672 kernel: rcu: RCU event tracing is enabled. Apr 30 00:34:29.297707 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 00:34:29.297736 kernel: Trampoline variant of Tasks RCU enabled. 
Apr 30 00:34:29.297744 kernel: Tracing variant of Tasks RCU enabled. Apr 30 00:34:29.297751 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 00:34:29.297758 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 00:34:29.297765 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 30 00:34:29.297777 kernel: GICv3: 960 SPIs implemented Apr 30 00:34:29.297784 kernel: GICv3: 0 Extended SPIs implemented Apr 30 00:34:29.297791 kernel: Root IRQ handler: gic_handle_irq Apr 30 00:34:29.297797 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Apr 30 00:34:29.297804 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Apr 30 00:34:29.297811 kernel: ITS: No ITS available, not enabling LPIs Apr 30 00:34:29.297818 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 00:34:29.297825 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:34:29.297832 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 30 00:34:29.297839 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 30 00:34:29.297846 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 30 00:34:29.297855 kernel: Console: colour dummy device 80x25 Apr 30 00:34:29.297862 kernel: printk: console [tty1] enabled Apr 30 00:34:29.297869 kernel: ACPI: Core revision 20230628 Apr 30 00:34:29.297876 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 30 00:34:29.297883 kernel: pid_max: default: 32768 minimum: 301 Apr 30 00:34:29.297890 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 00:34:29.297897 kernel: landlock: Up and running. Apr 30 00:34:29.297904 kernel: SELinux: Initializing. 
Apr 30 00:34:29.297911 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:34:29.297919 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:34:29.297927 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:34:29.297934 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:34:29.297941 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Apr 30 00:34:29.297948 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Apr 30 00:34:29.297955 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 30 00:34:29.297962 kernel: rcu: Hierarchical SRCU implementation. Apr 30 00:34:29.297970 kernel: rcu: Max phase no-delay instances is 400. Apr 30 00:34:29.297983 kernel: Remapping and enabling EFI services. Apr 30 00:34:29.297991 kernel: smp: Bringing up secondary CPUs ... Apr 30 00:34:29.297998 kernel: Detected PIPT I-cache on CPU1 Apr 30 00:34:29.298005 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Apr 30 00:34:29.298014 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:34:29.298021 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 30 00:34:29.298028 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 00:34:29.298036 kernel: SMP: Total of 2 processors activated. 
Apr 30 00:34:29.298043 kernel: CPU features: detected: 32-bit EL0 Support Apr 30 00:34:29.298052 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Apr 30 00:34:29.298059 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 30 00:34:29.298067 kernel: CPU features: detected: CRC32 instructions Apr 30 00:34:29.298074 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 30 00:34:29.298081 kernel: CPU features: detected: LSE atomic instructions Apr 30 00:34:29.298089 kernel: CPU features: detected: Privileged Access Never Apr 30 00:34:29.298096 kernel: CPU: All CPU(s) started at EL1 Apr 30 00:34:29.298103 kernel: alternatives: applying system-wide alternatives Apr 30 00:34:29.298110 kernel: devtmpfs: initialized Apr 30 00:34:29.298120 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 00:34:29.298127 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 00:34:29.298135 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 00:34:29.298142 kernel: SMBIOS 3.1.0 present. 
Apr 30 00:34:29.298149 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Apr 30 00:34:29.298157 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 00:34:29.298164 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 30 00:34:29.298172 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 30 00:34:29.298179 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 30 00:34:29.298188 kernel: audit: initializing netlink subsys (disabled) Apr 30 00:34:29.298195 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Apr 30 00:34:29.298203 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 00:34:29.298210 kernel: cpuidle: using governor menu Apr 30 00:34:29.298217 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Apr 30 00:34:29.298225 kernel: ASID allocator initialised with 32768 entries Apr 30 00:34:29.298232 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 00:34:29.298239 kernel: Serial: AMBA PL011 UART driver Apr 30 00:34:29.298247 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 30 00:34:29.298256 kernel: Modules: 0 pages in range for non-PLT usage Apr 30 00:34:29.298263 kernel: Modules: 509024 pages in range for PLT usage Apr 30 00:34:29.298270 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 00:34:29.298278 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 00:34:29.298285 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 30 00:34:29.298292 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 30 00:34:29.298299 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 00:34:29.298307 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 00:34:29.298314 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Apr 30 00:34:29.298324 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 30 00:34:29.298331 kernel: ACPI: Added _OSI(Module Device) Apr 30 00:34:29.298338 kernel: ACPI: Added _OSI(Processor Device) Apr 30 00:34:29.298345 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 00:34:29.298353 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 00:34:29.298360 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 00:34:29.298368 kernel: ACPI: Interpreter enabled Apr 30 00:34:29.298375 kernel: ACPI: Using GIC for interrupt routing Apr 30 00:34:29.298382 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Apr 30 00:34:29.298391 kernel: printk: console [ttyAMA0] enabled Apr 30 00:34:29.298399 kernel: printk: bootconsole [pl11] disabled Apr 30 00:34:29.298406 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Apr 30 00:34:29.298414 kernel: iommu: Default domain type: Translated Apr 30 00:34:29.298421 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 30 00:34:29.298428 kernel: efivars: Registered efivars operations Apr 30 00:34:29.298435 kernel: vgaarb: loaded Apr 30 00:34:29.298443 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 30 00:34:29.298450 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 00:34:29.298461 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 00:34:29.298468 kernel: pnp: PnP ACPI init Apr 30 00:34:29.298475 kernel: pnp: PnP ACPI: found 0 devices Apr 30 00:34:29.298482 kernel: NET: Registered PF_INET protocol family Apr 30 00:34:29.298490 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 00:34:29.298497 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 30 00:34:29.298505 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 
00:34:29.298512 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 00:34:29.298519 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 30 00:34:29.298529 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 30 00:34:29.298536 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:34:29.298543 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:34:29.298551 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 00:34:29.298558 kernel: PCI: CLS 0 bytes, default 64 Apr 30 00:34:29.298565 kernel: kvm [1]: HYP mode not available Apr 30 00:34:29.298572 kernel: Initialise system trusted keyrings Apr 30 00:34:29.298580 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 30 00:34:29.298587 kernel: Key type asymmetric registered Apr 30 00:34:29.298596 kernel: Asymmetric key parser 'x509' registered Apr 30 00:34:29.298603 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 30 00:34:29.298611 kernel: io scheduler mq-deadline registered Apr 30 00:34:29.298618 kernel: io scheduler kyber registered Apr 30 00:34:29.298625 kernel: io scheduler bfq registered Apr 30 00:34:29.298633 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 00:34:29.298640 kernel: thunder_xcv, ver 1.0 Apr 30 00:34:29.298647 kernel: thunder_bgx, ver 1.0 Apr 30 00:34:29.298654 kernel: nicpf, ver 1.0 Apr 30 00:34:29.298662 kernel: nicvf, ver 1.0 Apr 30 00:34:29.298824 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 30 00:34:29.298902 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:34:28 UTC (1745973268) Apr 30 00:34:29.298913 kernel: efifb: probing for efifb Apr 30 00:34:29.298920 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 30 00:34:29.298928 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 30 00:34:29.298935 kernel: efifb: scrolling: 
redraw Apr 30 00:34:29.298943 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 30 00:34:29.298953 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 00:34:29.298960 kernel: fb0: EFI VGA frame buffer device Apr 30 00:34:29.298967 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Apr 30 00:34:29.298975 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 00:34:29.298982 kernel: No ACPI PMU IRQ for CPU0 Apr 30 00:34:29.298989 kernel: No ACPI PMU IRQ for CPU1 Apr 30 00:34:29.298997 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Apr 30 00:34:29.299004 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 30 00:34:29.299012 kernel: watchdog: Hard watchdog permanently disabled Apr 30 00:34:29.299020 kernel: NET: Registered PF_INET6 protocol family Apr 30 00:34:29.299028 kernel: Segment Routing with IPv6 Apr 30 00:34:29.299035 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 00:34:29.299042 kernel: NET: Registered PF_PACKET protocol family Apr 30 00:34:29.299050 kernel: Key type dns_resolver registered Apr 30 00:34:29.299057 kernel: registered taskstats version 1 Apr 30 00:34:29.299065 kernel: Loading compiled-in X.509 certificates Apr 30 00:34:29.299072 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378' Apr 30 00:34:29.299080 kernel: Key type .fscrypt registered Apr 30 00:34:29.299089 kernel: Key type fscrypt-provisioning registered Apr 30 00:34:29.299096 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 30 00:34:29.299103 kernel: ima: Allocated hash algorithm: sha1 Apr 30 00:34:29.299111 kernel: ima: No architecture policies found Apr 30 00:34:29.299118 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 30 00:34:29.299125 kernel: clk: Disabling unused clocks Apr 30 00:34:29.299133 kernel: Freeing unused kernel memory: 39424K Apr 30 00:34:29.299140 kernel: Run /init as init process Apr 30 00:34:29.299147 kernel: with arguments: Apr 30 00:34:29.299157 kernel: /init Apr 30 00:34:29.299164 kernel: with environment: Apr 30 00:34:29.299171 kernel: HOME=/ Apr 30 00:34:29.299178 kernel: TERM=linux Apr 30 00:34:29.299185 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 00:34:29.299194 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:34:29.299204 systemd[1]: Detected virtualization microsoft. Apr 30 00:34:29.299212 systemd[1]: Detected architecture arm64. Apr 30 00:34:29.299221 systemd[1]: Running in initrd. Apr 30 00:34:29.299229 systemd[1]: No hostname configured, using default hostname. Apr 30 00:34:29.299237 systemd[1]: Hostname set to . Apr 30 00:34:29.299245 systemd[1]: Initializing machine ID from random generator. Apr 30 00:34:29.299253 systemd[1]: Queued start job for default target initrd.target. Apr 30 00:34:29.299261 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:34:29.299269 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:34:29.299277 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 30 00:34:29.299287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:34:29.299295 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 00:34:29.299303 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 00:34:29.299313 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 00:34:29.299321 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 00:34:29.299329 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:34:29.299337 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:34:29.299346 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:34:29.299354 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:34:29.299362 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:34:29.299370 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:34:29.299377 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:34:29.299385 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:34:29.299393 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 00:34:29.299401 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 00:34:29.299410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:34:29.299418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:34:29.299426 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:34:29.299434 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 30 00:34:29.299442 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 00:34:29.299450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:34:29.299457 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 00:34:29.299466 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 00:34:29.299474 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:34:29.299483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:34:29.299508 systemd-journald[217]: Collecting audit messages is disabled. Apr 30 00:34:29.299528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:34:29.299537 systemd-journald[217]: Journal started Apr 30 00:34:29.299558 systemd-journald[217]: Runtime Journal (/run/log/journal/8a8310a0b78d4325bd847a0acc3ef377) is 8.0M, max 78.5M, 70.5M free. Apr 30 00:34:29.309129 systemd-modules-load[218]: Inserted module 'overlay' Apr 30 00:34:29.339390 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:34:29.339414 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 00:34:29.347552 systemd-modules-load[218]: Inserted module 'br_netfilter' Apr 30 00:34:29.352797 kernel: Bridge firewalling registered Apr 30 00:34:29.348487 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 00:34:29.358476 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:34:29.369875 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 00:34:29.380439 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:34:29.391887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 30 00:34:29.408995 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:34:29.417842 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:34:29.435699 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:34:29.451850 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:34:29.460566 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:34:29.475710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:34:29.487944 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:34:29.517121 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 00:34:29.525846 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:34:29.539704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:34:29.549750 dracut-cmdline[249]: dracut-dracut-053 Apr 30 00:34:29.557174 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a Apr 30 00:34:29.566525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:34:29.633887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 30 00:34:29.672790 systemd-resolved[271]: Positive Trust Anchors: Apr 30 00:34:29.672805 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:34:29.672836 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:34:29.679341 systemd-resolved[271]: Defaulting to hostname 'linux'. Apr 30 00:34:29.749971 kernel: SCSI subsystem initialized Apr 30 00:34:29.680249 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:34:29.694709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:34:29.765437 kernel: Loading iSCSI transport class v2.0-870. Apr 30 00:34:29.772700 kernel: iscsi: registered transport (tcp) Apr 30 00:34:29.790879 kernel: iscsi: registered transport (qla4xxx) Apr 30 00:34:29.790940 kernel: QLogic iSCSI HBA Driver Apr 30 00:34:29.828103 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 00:34:29.841895 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 00:34:29.868157 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 30 00:34:29.868196 kernel: device-mapper: uevent: version 1.0.3 Apr 30 00:34:29.874502 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 00:34:29.923712 kernel: raid6: neonx8 gen() 15755 MB/s Apr 30 00:34:29.943698 kernel: raid6: neonx4 gen() 15660 MB/s Apr 30 00:34:29.963696 kernel: raid6: neonx2 gen() 13231 MB/s Apr 30 00:34:29.984696 kernel: raid6: neonx1 gen() 10483 MB/s Apr 30 00:34:30.004695 kernel: raid6: int64x8 gen() 6953 MB/s Apr 30 00:34:30.024691 kernel: raid6: int64x4 gen() 7350 MB/s Apr 30 00:34:30.045697 kernel: raid6: int64x2 gen() 6131 MB/s Apr 30 00:34:30.069018 kernel: raid6: int64x1 gen() 5059 MB/s Apr 30 00:34:30.069039 kernel: raid6: using algorithm neonx8 gen() 15755 MB/s Apr 30 00:34:30.093741 kernel: raid6: .... xor() 11934 MB/s, rmw enabled Apr 30 00:34:30.093760 kernel: raid6: using neon recovery algorithm Apr 30 00:34:30.106226 kernel: xor: measuring software checksum speed Apr 30 00:34:30.106258 kernel: 8regs : 19759 MB/sec Apr 30 00:34:30.109710 kernel: 32regs : 19631 MB/sec Apr 30 00:34:30.113067 kernel: arm64_neon : 27007 MB/sec Apr 30 00:34:30.117103 kernel: xor: using function: arm64_neon (27007 MB/sec) Apr 30 00:34:30.168717 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 00:34:30.179351 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:34:30.194854 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:34:30.218654 systemd-udevd[437]: Using default interface naming scheme 'v255'. Apr 30 00:34:30.224371 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:34:30.245999 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 00:34:30.263402 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Apr 30 00:34:30.295496 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 30 00:34:30.309925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:34:30.349447 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:34:30.368940 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 00:34:30.397403 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 00:34:30.407192 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:34:30.425272 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:34:30.456655 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:34:30.475743 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 00:34:30.493939 kernel: hv_vmbus: Vmbus version:5.3 Apr 30 00:34:30.487875 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:34:30.509413 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:34:30.509588 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:34:30.581846 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 00:34:30.581867 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 30 00:34:30.581887 kernel: hv_vmbus: registering driver hid_hyperv Apr 30 00:34:30.581901 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 30 00:34:30.581910 kernel: hv_vmbus: registering driver hv_netvsc Apr 30 00:34:30.581919 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Apr 30 00:34:30.581928 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Apr 30 00:34:30.581937 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 30 00:34:30.582076 kernel: hv_vmbus: registering driver hv_storvsc Apr 30 00:34:30.582086 kernel: PTP clock support registered Apr 30 00:34:30.587857 kernel: scsi host1: storvsc_host_t Apr 30 00:34:30.587915 kernel: scsi host0: storvsc_host_t Apr 30 00:34:30.591310 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:34:30.604849 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 30 00:34:30.617700 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Apr 30 00:34:30.616711 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:34:30.617026 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:34:30.624111 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:34:30.654977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:34:30.686226 kernel: hv_utils: Registering HyperV Utility Driver Apr 30 00:34:30.686251 kernel: hv_vmbus: registering driver hv_utils Apr 30 00:34:30.678784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:34:30.708238 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 30 00:34:31.179595 kernel: hv_netvsc 000d3afb-ef49-000d-3afb-ef49000d3afb eth0: VF slot 1 added Apr 30 00:34:31.179736 kernel: hv_utils: Heartbeat IC version 3.0 Apr 30 00:34:31.179747 kernel: hv_utils: Shutdown IC version 3.2 Apr 30 00:34:31.179757 kernel: hv_utils: TimeSync IC version 4.0 Apr 30 00:34:31.179766 kernel: hv_vmbus: registering driver hv_pci Apr 30 00:34:31.179782 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 00:34:31.179792 kernel: hv_pci 2a2a754e-76f5-450f-bf6e-4b19cf13862e: PCI VMBus probing: Using version 0x10004 Apr 30 00:34:31.290399 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 30 00:34:31.290528 kernel: hv_pci 2a2a754e-76f5-450f-bf6e-4b19cf13862e: PCI host bridge to bus 76f5:00 Apr 30 00:34:31.290614 kernel: pci_bus 76f5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Apr 30 00:34:31.290710 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 30 00:34:31.290806 kernel: pci_bus 76f5:00: No busn resource found for root bus, will use [bus 00-ff] Apr 30 00:34:31.290882 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 00:34:31.290966 kernel: pci 76f5:00:02.0: [15b3:1018] type 00 class 0x020000 Apr 30 00:34:31.291061 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 00:34:31.291145 kernel: pci 76f5:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 30 00:34:31.291228 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 30 00:34:31.291352 kernel: pci 76f5:00:02.0: enabling Extended Tags Apr 30 00:34:31.291445 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 30 00:34:31.291532 kernel: pci 76f5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 76f5:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 30 00:34:31.291618 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:34:31.291630 kernel: pci_bus 76f5:00: busn_res: [bus 00-ff] end is updated to 00
Apr 30 00:34:31.291709 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 00:34:31.291792 kernel: pci 76f5:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 30 00:34:30.678881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:34:30.712946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:34:31.161699 systemd-resolved[271]: Clock change detected. Flushing caches. Apr 30 00:34:31.239374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:34:31.281468 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:34:31.326359 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:34:31.352124 kernel: mlx5_core 76f5:00:02.0: enabling device (0000 -> 0002) Apr 30 00:34:31.645603 kernel: mlx5_core 76f5:00:02.0: firmware version: 16.31.2424 Apr 30 00:34:31.645738 kernel: hv_netvsc 000d3afb-ef49-000d-3afb-ef49000d3afb eth0: VF registering: eth1 Apr 30 00:34:31.645835 kernel: mlx5_core 76f5:00:02.0 eth1: joined to eth0 Apr 30 00:34:31.645936 kernel: mlx5_core 76f5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Apr 30 00:34:31.653292 kernel: mlx5_core 76f5:00:02.0 enP30453s1: renamed from eth1 Apr 30 00:34:31.816666 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 30 00:34:31.926288 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (482) Apr 30 00:34:31.939790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 00:34:31.960512 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 30 00:34:31.995002 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (490) Apr 30 00:34:32.007249 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 30 00:34:32.020287 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 30 00:34:32.041235 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 00:34:32.071308 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:34:32.078291 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:34:33.082243 disk-uuid[603]: The operation has completed successfully. Apr 30 00:34:33.090372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:34:33.136618 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 00:34:33.138297 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 00:34:33.169399 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 00:34:33.181247 sh[716]: Success Apr 30 00:34:33.211496 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 00:34:33.397185 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 00:34:33.405288 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 00:34:33.426550 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 30 00:34:33.457049 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4 Apr 30 00:34:33.457105 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:34:33.463676 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 00:34:33.468438 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 00:34:33.472819 kernel: BTRFS info (device dm-0): using free space tree Apr 30 00:34:33.829493 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 00:34:33.834542 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 00:34:33.854462 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 00:34:33.862441 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 00:34:33.896861 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:34:33.896908 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:34:33.901068 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:34:33.923119 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:34:33.936377 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 00:34:33.941451 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:34:33.947835 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 00:34:33.954541 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:34:33.978540 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 00:34:33.991758 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 30 00:34:34.027370 systemd-networkd[900]: lo: Link UP Apr 30 00:34:34.027379 systemd-networkd[900]: lo: Gained carrier Apr 30 00:34:34.028886 systemd-networkd[900]: Enumeration completed Apr 30 00:34:34.030809 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:34:34.031042 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:34:34.031045 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:34:34.036918 systemd[1]: Reached target network.target - Network. Apr 30 00:34:34.129285 kernel: mlx5_core 76f5:00:02.0 enP30453s1: Link up Apr 30 00:34:34.207310 kernel: hv_netvsc 000d3afb-ef49-000d-3afb-ef49000d3afb eth0: Data path switched to VF: enP30453s1 Apr 30 00:34:34.207081 systemd-networkd[900]: enP30453s1: Link UP Apr 30 00:34:34.207164 systemd-networkd[900]: eth0: Link UP Apr 30 00:34:34.207287 systemd-networkd[900]: eth0: Gained carrier Apr 30 00:34:34.207296 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:34:34.230510 systemd-networkd[900]: enP30453s1: Gained carrier Apr 30 00:34:34.245313 systemd-networkd[900]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 30 00:34:34.887953 ignition[899]: Ignition 2.19.0 Apr 30 00:34:34.891138 ignition[899]: Stage: fetch-offline Apr 30 00:34:34.891185 ignition[899]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:34.895889 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:34:34.891193 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:34.910409 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 00:34:34.891315 ignition[899]: parsed url from cmdline: "" Apr 30 00:34:34.891318 ignition[899]: no config URL provided Apr 30 00:34:34.891323 ignition[899]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:34:34.891330 ignition[899]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:34:34.891335 ignition[899]: failed to fetch config: resource requires networking Apr 30 00:34:34.891513 ignition[899]: Ignition finished successfully Apr 30 00:34:34.931642 ignition[909]: Ignition 2.19.0 Apr 30 00:34:34.931651 ignition[909]: Stage: fetch Apr 30 00:34:34.931820 ignition[909]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:34.931829 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:34.931927 ignition[909]: parsed url from cmdline: "" Apr 30 00:34:34.931931 ignition[909]: no config URL provided Apr 30 00:34:34.931935 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:34:34.931943 ignition[909]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:34:34.931964 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 30 00:34:35.035920 ignition[909]: GET result: OK Apr 30 00:34:35.036020 ignition[909]: config has been read from IMDS userdata Apr 30 00:34:35.036059 ignition[909]: parsing config with SHA512: 11edf2dcabeaa7949618bc60db00df7281d13e78ab1b6806666c2f42b369b8cb3bab5130068c9fbd8c64d359be243c58f6db5f11763111a213c2683954bb6278 Apr 30 00:34:35.039680 unknown[909]: fetched base config from "system" Apr 30 00:34:35.040055 ignition[909]: fetch: fetch complete Apr 30 00:34:35.039687 unknown[909]: fetched base config from "system" Apr 30 00:34:35.040059 ignition[909]: fetch: fetch passed Apr 30 00:34:35.039692 unknown[909]: fetched user config from "azure" Apr 30 00:34:35.040096 ignition[909]: Ignition finished successfully Apr 30 00:34:35.045031 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:34:35.064400 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 00:34:35.091350 ignition[916]: Ignition 2.19.0 Apr 30 00:34:35.091363 ignition[916]: Stage: kargs Apr 30 00:34:35.091523 ignition[916]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:35.097932 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 00:34:35.091533 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:35.092448 ignition[916]: kargs: kargs passed Apr 30 00:34:35.092493 ignition[916]: Ignition finished successfully Apr 30 00:34:35.125411 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 00:34:35.143519 ignition[923]: Ignition 2.19.0 Apr 30 00:34:35.143531 ignition[923]: Stage: disks Apr 30 00:34:35.147966 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 00:34:35.143697 ignition[923]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:35.154187 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 00:34:35.143707 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:35.162288 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:34:35.144585 ignition[923]: disks: disks passed Apr 30 00:34:35.173488 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:34:35.144629 ignition[923]: Ignition finished successfully Apr 30 00:34:35.182949 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:34:35.193515 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:34:35.218461 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 00:34:35.293892 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 30 00:34:35.300850 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:34:35.316468 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 00:34:35.372287 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none. Apr 30 00:34:35.372671 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 00:34:35.377562 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 00:34:35.395351 systemd-networkd[900]: enP30453s1: Gained IPv6LL Apr 30 00:34:35.429377 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:34:35.439363 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 00:34:35.446419 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 00:34:35.456498 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 00:34:35.456532 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:34:35.483423 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 00:34:35.505293 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942) Apr 30 00:34:35.526864 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:34:35.526905 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:34:35.526916 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:34:35.513782 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 00:34:35.539277 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:34:35.542336 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:34:35.907377 systemd-networkd[900]: eth0: Gained IPv6LL Apr 30 00:34:36.039907 coreos-metadata[944]: Apr 30 00:34:36.039 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 00:34:36.049492 coreos-metadata[944]: Apr 30 00:34:36.049 INFO Fetch successful Apr 30 00:34:36.054596 coreos-metadata[944]: Apr 30 00:34:36.054 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 30 00:34:36.067322 coreos-metadata[944]: Apr 30 00:34:36.065 INFO Fetch successful Apr 30 00:34:36.067322 coreos-metadata[944]: Apr 30 00:34:36.065 INFO wrote hostname ci-4081.3.3-a-cee67ba5b3 to /sysroot/etc/hostname Apr 30 00:34:36.066600 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:34:36.358504 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 00:34:36.427744 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Apr 30 00:34:36.451235 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 00:34:36.459780 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 00:34:37.258981 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 00:34:37.274479 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 00:34:37.283454 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 00:34:37.304196 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 00:34:37.313697 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:34:37.331295 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 30 00:34:37.346490 ignition[1061]: INFO : Ignition 2.19.0 Apr 30 00:34:37.346490 ignition[1061]: INFO : Stage: mount Apr 30 00:34:37.346490 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:37.346490 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:37.372619 ignition[1061]: INFO : mount: mount passed Apr 30 00:34:37.372619 ignition[1061]: INFO : Ignition finished successfully Apr 30 00:34:37.351187 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 00:34:37.377503 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 00:34:37.394480 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:34:37.420242 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1072) Apr 30 00:34:37.420292 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:34:37.425776 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:34:37.429761 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:34:37.436279 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:34:37.437488 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:34:37.466302 ignition[1089]: INFO : Ignition 2.19.0 Apr 30 00:34:37.466302 ignition[1089]: INFO : Stage: files Apr 30 00:34:37.466302 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:37.466302 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:37.485783 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Apr 30 00:34:37.499780 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 00:34:37.499780 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 00:34:37.589574 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 00:34:37.596827 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 00:34:37.596827 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 00:34:37.589978 unknown[1089]: wrote ssh authorized keys file for user: core Apr 30 00:34:37.625290 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Apr 30 00:34:37.635137 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Apr 30 00:34:37.673782 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 00:34:37.948966 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Apr 30 00:34:37.948966 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Apr 30 00:34:38.379623 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 00:34:38.584820 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:34:38.584820 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 00:34:38.629839 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:34:38.629839 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:34:38.629839 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 00:34:38.629839 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 00:34:38.629839 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 00:34:38.676448 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:34:38.676448 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:34:38.676448 ignition[1089]: INFO : files: files passed Apr 30 00:34:38.676448 ignition[1089]: INFO : Ignition finished successfully Apr 30 00:34:38.642559 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 00:34:38.686587 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 00:34:38.702437 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:34:38.716131 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 00:34:38.716231 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 00:34:38.779520 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:34:38.779520 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:34:38.796940 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:34:38.799297 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:34:38.811718 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 00:34:38.831581 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 00:34:38.857312 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 00:34:38.857421 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 00:34:38.869294 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 00:34:38.880698 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:34:38.891755 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:34:38.894439 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:34:38.932111 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:34:38.947501 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:34:38.963737 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:34:38.970146 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 30 00:34:38.982776 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 00:34:38.993924 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 00:34:38.994103 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:34:39.010319 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 00:34:39.016301 systemd[1]: Stopped target basic.target - Basic System. Apr 30 00:34:39.027738 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 00:34:39.039538 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:34:39.051098 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 00:34:39.063865 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 00:34:39.074999 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:34:39.087079 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 00:34:39.097514 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 00:34:39.108965 systemd[1]: Stopped target swap.target - Swaps. Apr 30 00:34:39.118128 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 00:34:39.118342 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:34:39.132577 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:34:39.139922 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:34:39.161192 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 00:34:39.169774 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:34:39.180411 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 00:34:39.180652 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 30 00:34:39.198496 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 00:34:39.198673 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:34:39.210050 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 00:34:39.210204 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 00:34:39.222172 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 00:34:39.291194 ignition[1141]: INFO : Ignition 2.19.0 Apr 30 00:34:39.291194 ignition[1141]: INFO : Stage: umount Apr 30 00:34:39.291194 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:39.291194 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:39.291194 ignition[1141]: INFO : umount: umount passed Apr 30 00:34:39.291194 ignition[1141]: INFO : Ignition finished successfully Apr 30 00:34:39.222343 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:34:39.253588 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 00:34:39.263964 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:34:39.264253 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:34:39.274566 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 00:34:39.290658 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 00:34:39.290900 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:34:39.298184 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 00:34:39.298392 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:34:39.313971 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:34:39.314220 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Apr 30 00:34:39.326792 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 00:34:39.326906 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:34:39.338462 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:34:39.338561 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:34:39.351127 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 00:34:39.351187 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 00:34:39.364669 systemd[1]: Stopped target network.target - Network. Apr 30 00:34:39.379554 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 00:34:39.379633 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:34:39.392785 systemd[1]: Stopped target paths.target - Path Units. Apr 30 00:34:39.402456 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:34:39.410621 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:34:39.417866 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:34:39.428121 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 00:34:39.438114 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:34:39.438182 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:34:39.449017 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:34:39.449070 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:34:39.461133 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:34:39.461189 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:34:39.473337 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:34:39.473385 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Apr 30 00:34:39.483709 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 00:34:39.493924 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:34:39.504962 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:34:39.505602 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 00:34:39.505709 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:34:39.508603 systemd-networkd[900]: eth0: DHCPv6 lease lost Apr 30 00:34:39.768518 kernel: hv_netvsc 000d3afb-ef49-000d-3afb-ef49000d3afb eth0: Data path switched from VF: enP30453s1 Apr 30 00:34:39.525004 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:34:39.528478 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:34:39.537766 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:34:39.537859 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:34:39.552520 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:34:39.552578 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:34:39.573482 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:34:39.583112 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:34:39.583190 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:34:39.593941 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:34:39.594001 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:34:39.604940 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:34:39.604980 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:34:39.618680 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Apr 30 00:34:39.618734 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:34:39.625531 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:34:39.664675 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:34:39.664839 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:34:39.676945 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:34:39.679135 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:34:39.687224 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:34:39.687310 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:34:39.697557 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 00:34:39.697599 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:34:39.708369 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 00:34:39.708429 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:34:39.724256 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:34:39.724322 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:34:39.733890 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:34:39.733939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:34:39.752818 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:34:39.752883 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:34:39.787523 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:34:39.803791 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Apr 30 00:34:39.803884 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:34:39.816004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:34:39.816068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:34:39.829387 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:34:39.829497 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:34:39.892014 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:34:39.892159 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:34:39.901948 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:34:39.933558 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:34:40.060552 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Apr 30 00:34:39.951288 systemd[1]: Switching root. Apr 30 00:34:40.064452 systemd-journald[217]: Journal stopped Apr 30 00:34:29.295102 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 30 00:34:29.295124 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025 Apr 30 00:34:29.295132 kernel: KASLR enabled Apr 30 00:34:29.295138 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Apr 30 00:34:29.295145 kernel: printk: bootconsole [pl11] enabled Apr 30 00:34:29.295151 kernel: efi: EFI v2.7 by EDK II Apr 30 00:34:29.295158 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Apr 30 00:34:29.295164 kernel: random: crng init done Apr 30 00:34:29.295170 kernel: ACPI: Early table checksum verification disabled Apr 30 00:34:29.295175 kernel: ACPI: RSDP 0x000000003FD5F018 
000024 (v02 VRTUAL) Apr 30 00:34:29.295181 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295187 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295195 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 30 00:34:29.295201 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295208 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295214 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295221 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295228 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295235 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295241 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Apr 30 00:34:29.295247 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 00:34:29.295254 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Apr 30 00:34:29.295260 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Apr 30 00:34:29.295266 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Apr 30 00:34:29.295272 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Apr 30 00:34:29.295278 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Apr 30 00:34:29.295285 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Apr 30 00:34:29.295291 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Apr 30 00:34:29.295299 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Apr 30 00:34:29.295305 kernel: ACPI: 
SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Apr 30 00:34:29.295311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Apr 30 00:34:29.295318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Apr 30 00:34:29.295324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Apr 30 00:34:29.295330 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Apr 30 00:34:29.295336 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Apr 30 00:34:29.295342 kernel: Zone ranges: Apr 30 00:34:29.295348 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Apr 30 00:34:29.295355 kernel: DMA32 empty Apr 30 00:34:29.295361 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Apr 30 00:34:29.295367 kernel: Movable zone start for each node Apr 30 00:34:29.295377 kernel: Early memory node ranges Apr 30 00:34:29.295384 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Apr 30 00:34:29.295391 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Apr 30 00:34:29.295397 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Apr 30 00:34:29.295404 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Apr 30 00:34:29.295412 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Apr 30 00:34:29.295419 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Apr 30 00:34:29.295425 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Apr 30 00:34:29.295432 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Apr 30 00:34:29.295439 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Apr 30 00:34:29.295445 kernel: psci: probing for conduit method from ACPI. Apr 30 00:34:29.295452 kernel: psci: PSCIv1.1 detected in firmware. Apr 30 00:34:29.295459 kernel: psci: Using standard PSCI v0.2 function IDs Apr 30 00:34:29.295465 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Apr 30 00:34:29.295472 kernel: psci: SMC Calling Convention v1.4 Apr 30 00:34:29.295479 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Apr 30 00:34:29.295485 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Apr 30 00:34:29.295493 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Apr 30 00:34:29.295500 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Apr 30 00:34:29.295507 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 30 00:34:29.295513 kernel: Detected PIPT I-cache on CPU0 Apr 30 00:34:29.295520 kernel: CPU features: detected: GIC system register CPU interface Apr 30 00:34:29.295527 kernel: CPU features: detected: Hardware dirty bit management Apr 30 00:34:29.295533 kernel: CPU features: detected: Spectre-BHB Apr 30 00:34:29.295540 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 30 00:34:29.295547 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 30 00:34:29.295553 kernel: CPU features: detected: ARM erratum 1418040 Apr 30 00:34:29.295560 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Apr 30 00:34:29.295568 kernel: CPU features: detected: SSBS not fully self-synchronizing Apr 30 00:34:29.295574 kernel: alternatives: applying boot alternatives Apr 30 00:34:29.295582 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a Apr 30 00:34:29.295590 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Apr 30 00:34:29.295596 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 00:34:29.295603 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 00:34:29.295610 kernel: Fallback order for Node 0: 0 Apr 30 00:34:29.295616 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Apr 30 00:34:29.295623 kernel: Policy zone: Normal Apr 30 00:34:29.295629 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 00:34:29.295636 kernel: software IO TLB: area num 2. Apr 30 00:34:29.295644 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Apr 30 00:34:29.295651 kernel: Memory: 3982688K/4194160K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 211472K reserved, 0K cma-reserved) Apr 30 00:34:29.295658 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 00:34:29.295665 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 00:34:29.295672 kernel: rcu: RCU event tracing is enabled. Apr 30 00:34:29.297707 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 00:34:29.297736 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 00:34:29.297744 kernel: Tracing variant of Tasks RCU enabled. Apr 30 00:34:29.297751 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 30 00:34:29.297758 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 00:34:29.297765 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 30 00:34:29.297777 kernel: GICv3: 960 SPIs implemented Apr 30 00:34:29.297784 kernel: GICv3: 0 Extended SPIs implemented Apr 30 00:34:29.297791 kernel: Root IRQ handler: gic_handle_irq Apr 30 00:34:29.297797 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Apr 30 00:34:29.297804 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Apr 30 00:34:29.297811 kernel: ITS: No ITS available, not enabling LPIs Apr 30 00:34:29.297818 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 00:34:29.297825 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:34:29.297832 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 30 00:34:29.297839 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 30 00:34:29.297846 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 30 00:34:29.297855 kernel: Console: colour dummy device 80x25 Apr 30 00:34:29.297862 kernel: printk: console [tty1] enabled Apr 30 00:34:29.297869 kernel: ACPI: Core revision 20230628 Apr 30 00:34:29.297876 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 30 00:34:29.297883 kernel: pid_max: default: 32768 minimum: 301 Apr 30 00:34:29.297890 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 00:34:29.297897 kernel: landlock: Up and running. Apr 30 00:34:29.297904 kernel: SELinux: Initializing. 
Apr 30 00:34:29.297911 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:34:29.297919 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:34:29.297927 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:34:29.297934 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:34:29.297941 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Apr 30 00:34:29.297948 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Apr 30 00:34:29.297955 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 30 00:34:29.297962 kernel: rcu: Hierarchical SRCU implementation. Apr 30 00:34:29.297970 kernel: rcu: Max phase no-delay instances is 400. Apr 30 00:34:29.297983 kernel: Remapping and enabling EFI services. Apr 30 00:34:29.297991 kernel: smp: Bringing up secondary CPUs ... Apr 30 00:34:29.297998 kernel: Detected PIPT I-cache on CPU1 Apr 30 00:34:29.298005 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Apr 30 00:34:29.298014 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:34:29.298021 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 30 00:34:29.298028 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 00:34:29.298036 kernel: SMP: Total of 2 processors activated. 
Apr 30 00:34:29.298043 kernel: CPU features: detected: 32-bit EL0 Support Apr 30 00:34:29.298052 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Apr 30 00:34:29.298059 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 30 00:34:29.298067 kernel: CPU features: detected: CRC32 instructions Apr 30 00:34:29.298074 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 30 00:34:29.298081 kernel: CPU features: detected: LSE atomic instructions Apr 30 00:34:29.298089 kernel: CPU features: detected: Privileged Access Never Apr 30 00:34:29.298096 kernel: CPU: All CPU(s) started at EL1 Apr 30 00:34:29.298103 kernel: alternatives: applying system-wide alternatives Apr 30 00:34:29.298110 kernel: devtmpfs: initialized Apr 30 00:34:29.298120 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 00:34:29.298127 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 00:34:29.298135 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 00:34:29.298142 kernel: SMBIOS 3.1.0 present. 
Apr 30 00:34:29.298149 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Apr 30 00:34:29.298157 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 00:34:29.298164 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 30 00:34:29.298172 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 30 00:34:29.298179 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 30 00:34:29.298188 kernel: audit: initializing netlink subsys (disabled) Apr 30 00:34:29.298195 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Apr 30 00:34:29.298203 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 00:34:29.298210 kernel: cpuidle: using governor menu Apr 30 00:34:29.298217 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Apr 30 00:34:29.298225 kernel: ASID allocator initialised with 32768 entries Apr 30 00:34:29.298232 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 00:34:29.298239 kernel: Serial: AMBA PL011 UART driver Apr 30 00:34:29.298247 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 30 00:34:29.298256 kernel: Modules: 0 pages in range for non-PLT usage Apr 30 00:34:29.298263 kernel: Modules: 509024 pages in range for PLT usage Apr 30 00:34:29.298270 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 00:34:29.298278 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 00:34:29.298285 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 30 00:34:29.298292 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 30 00:34:29.298299 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 00:34:29.298307 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 00:34:29.298314 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Apr 30 00:34:29.298324 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 30 00:34:29.298331 kernel: ACPI: Added _OSI(Module Device) Apr 30 00:34:29.298338 kernel: ACPI: Added _OSI(Processor Device) Apr 30 00:34:29.298345 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 00:34:29.298353 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 00:34:29.298360 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 00:34:29.298368 kernel: ACPI: Interpreter enabled Apr 30 00:34:29.298375 kernel: ACPI: Using GIC for interrupt routing Apr 30 00:34:29.298382 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Apr 30 00:34:29.298391 kernel: printk: console [ttyAMA0] enabled Apr 30 00:34:29.298399 kernel: printk: bootconsole [pl11] disabled Apr 30 00:34:29.298406 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Apr 30 00:34:29.298414 kernel: iommu: Default domain type: Translated Apr 30 00:34:29.298421 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 30 00:34:29.298428 kernel: efivars: Registered efivars operations Apr 30 00:34:29.298435 kernel: vgaarb: loaded Apr 30 00:34:29.298443 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 30 00:34:29.298450 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 00:34:29.298461 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 00:34:29.298468 kernel: pnp: PnP ACPI init Apr 30 00:34:29.298475 kernel: pnp: PnP ACPI: found 0 devices Apr 30 00:34:29.298482 kernel: NET: Registered PF_INET protocol family Apr 30 00:34:29.298490 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 00:34:29.298497 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 30 00:34:29.298505 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 
00:34:29.298512 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 00:34:29.298519 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 30 00:34:29.298529 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 30 00:34:29.298536 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:34:29.298543 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:34:29.298551 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 00:34:29.298558 kernel: PCI: CLS 0 bytes, default 64 Apr 30 00:34:29.298565 kernel: kvm [1]: HYP mode not available Apr 30 00:34:29.298572 kernel: Initialise system trusted keyrings Apr 30 00:34:29.298580 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 30 00:34:29.298587 kernel: Key type asymmetric registered Apr 30 00:34:29.298596 kernel: Asymmetric key parser 'x509' registered Apr 30 00:34:29.298603 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 30 00:34:29.298611 kernel: io scheduler mq-deadline registered Apr 30 00:34:29.298618 kernel: io scheduler kyber registered Apr 30 00:34:29.298625 kernel: io scheduler bfq registered Apr 30 00:34:29.298633 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 00:34:29.298640 kernel: thunder_xcv, ver 1.0 Apr 30 00:34:29.298647 kernel: thunder_bgx, ver 1.0 Apr 30 00:34:29.298654 kernel: nicpf, ver 1.0 Apr 30 00:34:29.298662 kernel: nicvf, ver 1.0 Apr 30 00:34:29.298824 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 30 00:34:29.298902 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:34:28 UTC (1745973268) Apr 30 00:34:29.298913 kernel: efifb: probing for efifb Apr 30 00:34:29.298920 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 30 00:34:29.298928 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 30 00:34:29.298935 kernel: efifb: scrolling: 
redraw Apr 30 00:34:29.298943 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 30 00:34:29.298953 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 00:34:29.298960 kernel: fb0: EFI VGA frame buffer device Apr 30 00:34:29.298967 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Apr 30 00:34:29.298975 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 00:34:29.298982 kernel: No ACPI PMU IRQ for CPU0 Apr 30 00:34:29.298989 kernel: No ACPI PMU IRQ for CPU1 Apr 30 00:34:29.298997 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Apr 30 00:34:29.299004 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 30 00:34:29.299012 kernel: watchdog: Hard watchdog permanently disabled Apr 30 00:34:29.299020 kernel: NET: Registered PF_INET6 protocol family Apr 30 00:34:29.299028 kernel: Segment Routing with IPv6 Apr 30 00:34:29.299035 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 00:34:29.299042 kernel: NET: Registered PF_PACKET protocol family Apr 30 00:34:29.299050 kernel: Key type dns_resolver registered Apr 30 00:34:29.299057 kernel: registered taskstats version 1 Apr 30 00:34:29.299065 kernel: Loading compiled-in X.509 certificates Apr 30 00:34:29.299072 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378' Apr 30 00:34:29.299080 kernel: Key type .fscrypt registered Apr 30 00:34:29.299089 kernel: Key type fscrypt-provisioning registered Apr 30 00:34:29.299096 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 30 00:34:29.299103 kernel: ima: Allocated hash algorithm: sha1 Apr 30 00:34:29.299111 kernel: ima: No architecture policies found Apr 30 00:34:29.299118 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 30 00:34:29.299125 kernel: clk: Disabling unused clocks Apr 30 00:34:29.299133 kernel: Freeing unused kernel memory: 39424K Apr 30 00:34:29.299140 kernel: Run /init as init process Apr 30 00:34:29.299147 kernel: with arguments: Apr 30 00:34:29.299157 kernel: /init Apr 30 00:34:29.299164 kernel: with environment: Apr 30 00:34:29.299171 kernel: HOME=/ Apr 30 00:34:29.299178 kernel: TERM=linux Apr 30 00:34:29.299185 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 00:34:29.299194 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:34:29.299204 systemd[1]: Detected virtualization microsoft. Apr 30 00:34:29.299212 systemd[1]: Detected architecture arm64. Apr 30 00:34:29.299221 systemd[1]: Running in initrd. Apr 30 00:34:29.299229 systemd[1]: No hostname configured, using default hostname. Apr 30 00:34:29.299237 systemd[1]: Hostname set to . Apr 30 00:34:29.299245 systemd[1]: Initializing machine ID from random generator. Apr 30 00:34:29.299253 systemd[1]: Queued start job for default target initrd.target. Apr 30 00:34:29.299261 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:34:29.299269 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:34:29.299277 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 30 00:34:29.299287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:34:29.299295 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 00:34:29.299303 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 00:34:29.299313 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 00:34:29.299321 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 00:34:29.299329 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:34:29.299337 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:34:29.299346 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:34:29.299354 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:34:29.299362 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:34:29.299370 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:34:29.299377 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:34:29.299385 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:34:29.299393 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 00:34:29.299401 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 00:34:29.299410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:34:29.299418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:34:29.299426 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:34:29.299434 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 30 00:34:29.299442 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 00:34:29.299450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:34:29.299457 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 00:34:29.299466 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 00:34:29.299474 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:34:29.299483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:34:29.299508 systemd-journald[217]: Collecting audit messages is disabled. Apr 30 00:34:29.299528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:34:29.299537 systemd-journald[217]: Journal started Apr 30 00:34:29.299558 systemd-journald[217]: Runtime Journal (/run/log/journal/8a8310a0b78d4325bd847a0acc3ef377) is 8.0M, max 78.5M, 70.5M free. Apr 30 00:34:29.309129 systemd-modules-load[218]: Inserted module 'overlay' Apr 30 00:34:29.339390 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:34:29.339414 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 00:34:29.347552 systemd-modules-load[218]: Inserted module 'br_netfilter' Apr 30 00:34:29.352797 kernel: Bridge firewalling registered Apr 30 00:34:29.348487 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 00:34:29.358476 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:34:29.369875 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 00:34:29.380439 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:34:29.391887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 30 00:34:29.408995 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:34:29.417842 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:34:29.435699 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:34:29.451850 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:34:29.460566 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:34:29.475710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:34:29.487944 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:34:29.517121 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 00:34:29.525846 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:34:29.539704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:34:29.549750 dracut-cmdline[249]: dracut-dracut-053 Apr 30 00:34:29.557174 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a Apr 30 00:34:29.566525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:34:29.633887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 30 00:34:29.672790 systemd-resolved[271]: Positive Trust Anchors: Apr 30 00:34:29.672805 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:34:29.672836 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:34:29.679341 systemd-resolved[271]: Defaulting to hostname 'linux'. Apr 30 00:34:29.749971 kernel: SCSI subsystem initialized Apr 30 00:34:29.680249 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:34:29.694709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:34:29.765437 kernel: Loading iSCSI transport class v2.0-870. Apr 30 00:34:29.772700 kernel: iscsi: registered transport (tcp) Apr 30 00:34:29.790879 kernel: iscsi: registered transport (qla4xxx) Apr 30 00:34:29.790940 kernel: QLogic iSCSI HBA Driver Apr 30 00:34:29.828103 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 00:34:29.841895 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 00:34:29.868157 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 30 00:34:29.868196 kernel: device-mapper: uevent: version 1.0.3 Apr 30 00:34:29.874502 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 00:34:29.923712 kernel: raid6: neonx8 gen() 15755 MB/s Apr 30 00:34:29.943698 kernel: raid6: neonx4 gen() 15660 MB/s Apr 30 00:34:29.963696 kernel: raid6: neonx2 gen() 13231 MB/s Apr 30 00:34:29.984696 kernel: raid6: neonx1 gen() 10483 MB/s Apr 30 00:34:30.004695 kernel: raid6: int64x8 gen() 6953 MB/s Apr 30 00:34:30.024691 kernel: raid6: int64x4 gen() 7350 MB/s Apr 30 00:34:30.045697 kernel: raid6: int64x2 gen() 6131 MB/s Apr 30 00:34:30.069018 kernel: raid6: int64x1 gen() 5059 MB/s Apr 30 00:34:30.069039 kernel: raid6: using algorithm neonx8 gen() 15755 MB/s Apr 30 00:34:30.093741 kernel: raid6: .... xor() 11934 MB/s, rmw enabled Apr 30 00:34:30.093760 kernel: raid6: using neon recovery algorithm Apr 30 00:34:30.106226 kernel: xor: measuring software checksum speed Apr 30 00:34:30.106258 kernel: 8regs : 19759 MB/sec Apr 30 00:34:30.109710 kernel: 32regs : 19631 MB/sec Apr 30 00:34:30.113067 kernel: arm64_neon : 27007 MB/sec Apr 30 00:34:30.117103 kernel: xor: using function: arm64_neon (27007 MB/sec) Apr 30 00:34:30.168717 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 00:34:30.179351 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:34:30.194854 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:34:30.218654 systemd-udevd[437]: Using default interface naming scheme 'v255'. Apr 30 00:34:30.224371 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:34:30.245999 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 00:34:30.263402 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Apr 30 00:34:30.295496 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 30 00:34:30.309925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:34:30.349447 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:34:30.368940 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 00:34:30.397403 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 00:34:30.407192 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:34:30.425272 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:34:30.456655 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:34:30.475743 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 00:34:30.493939 kernel: hv_vmbus: Vmbus version:5.3 Apr 30 00:34:30.487875 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:34:30.509413 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:34:30.509588 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:34:30.581846 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 00:34:30.581867 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 30 00:34:30.581887 kernel: hv_vmbus: registering driver hid_hyperv Apr 30 00:34:30.581901 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 00:34:30.581910 kernel: hv_vmbus: registering driver hv_netvsc Apr 30 00:34:30.581919 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Apr 30 00:34:30.581928 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Apr 30 00:34:30.581937 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 30 00:34:30.582076 kernel: hv_vmbus: registering driver hv_storvsc Apr 30 00:34:30.582086 kernel: PTP clock support registered Apr 30 00:34:30.587857 kernel: scsi host1: storvsc_host_t Apr 30 00:34:30.587915 kernel: scsi host0: storvsc_host_t Apr 30 00:34:30.591310 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:34:30.604849 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 30 00:34:30.617700 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Apr 30 00:34:30.616711 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:34:30.617026 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:34:30.624111 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:34:30.654977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:34:30.686226 kernel: hv_utils: Registering HyperV Utility Driver Apr 30 00:34:30.686251 kernel: hv_vmbus: registering driver hv_utils Apr 30 00:34:30.678784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 30 00:34:30.708238 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 30 00:34:31.179595 kernel: hv_netvsc 000d3afb-ef49-000d-3afb-ef49000d3afb eth0: VF slot 1 added Apr 30 00:34:31.179736 kernel: hv_utils: Heartbeat IC version 3.0 Apr 30 00:34:31.179747 kernel: hv_utils: Shutdown IC version 3.2 Apr 30 00:34:31.179757 kernel: hv_utils: TimeSync IC version 4.0 Apr 30 00:34:31.179766 kernel: hv_vmbus: registering driver hv_pci Apr 30 00:34:31.179782 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 00:34:31.179792 kernel: hv_pci 2a2a754e-76f5-450f-bf6e-4b19cf13862e: PCI VMBus probing: Using version 0x10004 Apr 30 00:34:31.290399 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 30 00:34:31.290528 kernel: hv_pci 2a2a754e-76f5-450f-bf6e-4b19cf13862e: PCI host bridge to bus 76f5:00 Apr 30 00:34:31.290614 kernel: pci_bus 76f5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Apr 30 00:34:31.290710 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 30 00:34:31.290806 kernel: pci_bus 76f5:00: No busn resource found for root bus, will use [bus 00-ff] Apr 30 00:34:31.290882 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 00:34:31.290966 kernel: pci 76f5:00:02.0: [15b3:1018] type 00 class 0x020000 Apr 30 00:34:31.291061 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 00:34:31.291145 kernel: pci 76f5:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 30 00:34:31.291228 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 30 00:34:31.291352 kernel: pci 76f5:00:02.0: enabling Extended Tags Apr 30 00:34:31.291445 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 30 00:34:31.291532 kernel: pci 76f5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 76f5:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 30 00:34:31.291618 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:34:31.291630 kernel: pci_bus 76f5:00: 
busn_res: [bus 00-ff] end is updated to 00 Apr 30 00:34:31.291709 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 00:34:31.291792 kernel: pci 76f5:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 30 00:34:30.678881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:34:30.712946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:34:31.161699 systemd-resolved[271]: Clock change detected. Flushing caches. Apr 30 00:34:31.239374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:34:31.281468 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:34:31.326359 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:34:31.352124 kernel: mlx5_core 76f5:00:02.0: enabling device (0000 -> 0002) Apr 30 00:34:31.645603 kernel: mlx5_core 76f5:00:02.0: firmware version: 16.31.2424 Apr 30 00:34:31.645738 kernel: hv_netvsc 000d3afb-ef49-000d-3afb-ef49000d3afb eth0: VF registering: eth1 Apr 30 00:34:31.645835 kernel: mlx5_core 76f5:00:02.0 eth1: joined to eth0 Apr 30 00:34:31.645936 kernel: mlx5_core 76f5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Apr 30 00:34:31.653292 kernel: mlx5_core 76f5:00:02.0 enP30453s1: renamed from eth1 Apr 30 00:34:31.816666 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 30 00:34:31.926288 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (482) Apr 30 00:34:31.939790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 00:34:31.960512 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Apr 30 00:34:31.995002 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (490) Apr 30 00:34:32.007249 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 30 00:34:32.020287 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 30 00:34:32.041235 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 00:34:32.071308 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:34:32.078291 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:34:33.082243 disk-uuid[603]: The operation has completed successfully. Apr 30 00:34:33.090372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:34:33.136618 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 00:34:33.138297 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 00:34:33.169399 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 00:34:33.181247 sh[716]: Success Apr 30 00:34:33.211496 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 00:34:33.397185 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 00:34:33.405288 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 00:34:33.426550 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 30 00:34:33.457049 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4 Apr 30 00:34:33.457105 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:34:33.463676 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 00:34:33.468438 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 00:34:33.472819 kernel: BTRFS info (device dm-0): using free space tree Apr 30 00:34:33.829493 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 00:34:33.834542 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 00:34:33.854462 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 00:34:33.862441 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 00:34:33.896861 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:34:33.896908 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:34:33.901068 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:34:33.923119 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:34:33.936377 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 00:34:33.941451 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:34:33.947835 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 00:34:33.954541 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:34:33.978540 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 00:34:33.991758 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 30 00:34:34.027370 systemd-networkd[900]: lo: Link UP Apr 30 00:34:34.027379 systemd-networkd[900]: lo: Gained carrier Apr 30 00:34:34.028886 systemd-networkd[900]: Enumeration completed Apr 30 00:34:34.030809 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:34:34.031042 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:34:34.031045 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:34:34.036918 systemd[1]: Reached target network.target - Network. Apr 30 00:34:34.129285 kernel: mlx5_core 76f5:00:02.0 enP30453s1: Link up Apr 30 00:34:34.207310 kernel: hv_netvsc 000d3afb-ef49-000d-3afb-ef49000d3afb eth0: Data path switched to VF: enP30453s1 Apr 30 00:34:34.207081 systemd-networkd[900]: enP30453s1: Link UP Apr 30 00:34:34.207164 systemd-networkd[900]: eth0: Link UP Apr 30 00:34:34.207287 systemd-networkd[900]: eth0: Gained carrier Apr 30 00:34:34.207296 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:34:34.230510 systemd-networkd[900]: enP30453s1: Gained carrier Apr 30 00:34:34.245313 systemd-networkd[900]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 30 00:34:34.887953 ignition[899]: Ignition 2.19.0 Apr 30 00:34:34.891138 ignition[899]: Stage: fetch-offline Apr 30 00:34:34.891185 ignition[899]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:34.895889 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:34:34.891193 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:34.910409 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 00:34:34.891315 ignition[899]: parsed url from cmdline: "" Apr 30 00:34:34.891318 ignition[899]: no config URL provided Apr 30 00:34:34.891323 ignition[899]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:34:34.891330 ignition[899]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:34:34.891335 ignition[899]: failed to fetch config: resource requires networking Apr 30 00:34:34.891513 ignition[899]: Ignition finished successfully Apr 30 00:34:34.931642 ignition[909]: Ignition 2.19.0 Apr 30 00:34:34.931651 ignition[909]: Stage: fetch Apr 30 00:34:34.931820 ignition[909]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:34.931829 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:34.931927 ignition[909]: parsed url from cmdline: "" Apr 30 00:34:34.931931 ignition[909]: no config URL provided Apr 30 00:34:34.931935 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:34:34.931943 ignition[909]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:34:34.931964 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 30 00:34:35.035920 ignition[909]: GET result: OK Apr 30 00:34:35.036020 ignition[909]: config has been read from IMDS userdata Apr 30 00:34:35.036059 ignition[909]: parsing config with SHA512: 11edf2dcabeaa7949618bc60db00df7281d13e78ab1b6806666c2f42b369b8cb3bab5130068c9fbd8c64d359be243c58f6db5f11763111a213c2683954bb6278 Apr 30 00:34:35.039680 unknown[909]: fetched base config from "system" Apr 30 00:34:35.040055 ignition[909]: fetch: fetch complete Apr 30 00:34:35.039687 unknown[909]: fetched base config from "system" Apr 30 00:34:35.040059 ignition[909]: fetch: fetch passed Apr 30 00:34:35.039692 unknown[909]: fetched user config from "azure" Apr 30 00:34:35.040096 ignition[909]: Ignition finished successfully Apr 30 00:34:35.045031 systemd[1]: Finished ignition-fetch.service - Ignition 
(fetch). Apr 30 00:34:35.064400 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 00:34:35.091350 ignition[916]: Ignition 2.19.0 Apr 30 00:34:35.091363 ignition[916]: Stage: kargs Apr 30 00:34:35.091523 ignition[916]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:35.097932 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 00:34:35.091533 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:35.092448 ignition[916]: kargs: kargs passed Apr 30 00:34:35.092493 ignition[916]: Ignition finished successfully Apr 30 00:34:35.125411 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 00:34:35.143519 ignition[923]: Ignition 2.19.0 Apr 30 00:34:35.143531 ignition[923]: Stage: disks Apr 30 00:34:35.147966 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 00:34:35.143697 ignition[923]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:35.154187 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 00:34:35.143707 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:35.162288 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:34:35.144585 ignition[923]: disks: disks passed Apr 30 00:34:35.173488 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:34:35.144629 ignition[923]: Ignition finished successfully Apr 30 00:34:35.182949 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:34:35.193515 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:34:35.218461 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 00:34:35.293892 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 30 00:34:35.300850 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Apr 30 00:34:35.316468 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 00:34:35.372287 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none. Apr 30 00:34:35.372671 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 00:34:35.377562 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 00:34:35.395351 systemd-networkd[900]: enP30453s1: Gained IPv6LL Apr 30 00:34:35.429377 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:34:35.439363 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 00:34:35.446419 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 00:34:35.456498 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 00:34:35.456532 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:34:35.483423 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 00:34:35.505293 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942) Apr 30 00:34:35.526864 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:34:35.526905 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:34:35.526916 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:34:35.513782 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 00:34:35.539277 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:34:35.542336 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:34:35.907377 systemd-networkd[900]: eth0: Gained IPv6LL Apr 30 00:34:36.039907 coreos-metadata[944]: Apr 30 00:34:36.039 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 00:34:36.049492 coreos-metadata[944]: Apr 30 00:34:36.049 INFO Fetch successful Apr 30 00:34:36.054596 coreos-metadata[944]: Apr 30 00:34:36.054 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 30 00:34:36.067322 coreos-metadata[944]: Apr 30 00:34:36.065 INFO Fetch successful Apr 30 00:34:36.067322 coreos-metadata[944]: Apr 30 00:34:36.065 INFO wrote hostname ci-4081.3.3-a-cee67ba5b3 to /sysroot/etc/hostname Apr 30 00:34:36.066600 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:34:36.358504 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 00:34:36.427744 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Apr 30 00:34:36.451235 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 00:34:36.459780 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 00:34:37.258981 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 00:34:37.274479 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 00:34:37.283454 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 00:34:37.304196 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 00:34:37.313697 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:34:37.331295 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 30 00:34:37.346490 ignition[1061]: INFO : Ignition 2.19.0 Apr 30 00:34:37.346490 ignition[1061]: INFO : Stage: mount Apr 30 00:34:37.346490 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:37.346490 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:37.372619 ignition[1061]: INFO : mount: mount passed Apr 30 00:34:37.372619 ignition[1061]: INFO : Ignition finished successfully Apr 30 00:34:37.351187 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 00:34:37.377503 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 00:34:37.394480 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:34:37.420242 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1072) Apr 30 00:34:37.420292 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:34:37.425776 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:34:37.429761 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:34:37.436279 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:34:37.437488 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:34:37.466302 ignition[1089]: INFO : Ignition 2.19.0 Apr 30 00:34:37.466302 ignition[1089]: INFO : Stage: files Apr 30 00:34:37.466302 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:34:37.466302 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:34:37.485783 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Apr 30 00:34:37.499780 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 00:34:37.499780 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 00:34:37.589574 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 00:34:37.596827 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 00:34:37.596827 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 00:34:37.589978 unknown[1089]: wrote ssh authorized keys file for user: core Apr 30 00:34:37.625290 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Apr 30 00:34:37.635137 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Apr 30 00:34:37.673782 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 00:34:37.948966 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Apr 30 00:34:37.948966 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:34:37.968328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Apr 30 00:34:38.379623 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 00:34:38.584820 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 00:34:38.584820 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 00:34:38.629839 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:34:38.629839 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:34:38.629839 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 00:34:38.629839 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 00:34:38.629839 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 00:34:38.676448 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:34:38.676448 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:34:38.676448 ignition[1089]: INFO : files: files passed Apr 30 00:34:38.676448 ignition[1089]: INFO : Ignition finished successfully Apr 30 00:34:38.642559 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 00:34:38.686587 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 00:34:38.702437 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 30 00:34:38.716131 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 00:34:38.716231 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 00:34:38.779520 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:34:38.779520 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:34:38.796940 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:34:38.799297 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:34:38.811718 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 00:34:38.831581 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 00:34:38.857312 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 00:34:38.857421 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 00:34:38.869294 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 00:34:38.880698 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:34:38.891755 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:34:38.894439 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:34:38.932111 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:34:38.947501 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:34:38.963737 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:34:38.970146 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 30 00:34:38.982776 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:34:38.993924 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:34:38.994103 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:34:39.010319 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:34:39.016301 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:34:39.027738 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:34:39.039538 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:34:39.051098 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:34:39.063865 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:34:39.074999 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:34:39.087079 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:34:39.097514 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:34:39.108965 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:34:39.118128 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:34:39.118342 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:34:39.132577 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:34:39.139922 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:34:39.161192 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:34:39.169774 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:34:39.180411 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:34:39.180652 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:34:39.198496 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:34:39.198673 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:34:39.210050 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:34:39.210204 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:34:39.222172 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 00:34:39.291194 ignition[1141]: INFO : Ignition 2.19.0
Apr 30 00:34:39.291194 ignition[1141]: INFO : Stage: umount
Apr 30 00:34:39.291194 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:34:39.291194 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 30 00:34:39.291194 ignition[1141]: INFO : umount: umount passed
Apr 30 00:34:39.291194 ignition[1141]: INFO : Ignition finished successfully
Apr 30 00:34:39.222343 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:34:39.253588 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:34:39.263964 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:34:39.264253 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:34:39.274566 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:34:39.290658 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:34:39.290900 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:34:39.298184 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:34:39.298392 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:34:39.313971 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:34:39.314220 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:34:39.326792 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:34:39.326906 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:34:39.338462 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:34:39.338561 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:34:39.351127 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 00:34:39.351187 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 00:34:39.364669 systemd[1]: Stopped target network.target - Network.
Apr 30 00:34:39.379554 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:34:39.379633 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:34:39.392785 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:34:39.402456 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:34:39.410621 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:34:39.417866 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:34:39.428121 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:34:39.438114 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:34:39.438182 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:34:39.449017 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:34:39.449070 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:34:39.461133 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:34:39.461189 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:34:39.473337 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:34:39.473385 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:34:39.483709 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:34:39.493924 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:34:39.504962 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:34:39.505602 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:34:39.505709 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:34:39.508603 systemd-networkd[900]: eth0: DHCPv6 lease lost
Apr 30 00:34:39.768518 kernel: hv_netvsc 000d3afb-ef49-000d-3afb-ef49000d3afb eth0: Data path switched from VF: enP30453s1
Apr 30 00:34:39.525004 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:34:39.528478 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:34:39.537766 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:34:39.537859 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:34:39.552520 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:34:39.552578 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:34:39.573482 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:34:39.583112 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:34:39.583190 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:34:39.593941 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:34:39.594001 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:34:39.604940 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:34:39.604980 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:34:39.618680 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:34:39.618734 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:34:39.625531 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:34:39.664675 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:34:39.664839 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:34:39.676945 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:34:39.679135 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:34:39.687224 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:34:39.687310 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:34:39.697557 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:34:39.697599 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:34:39.708369 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:34:39.708429 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:34:39.724256 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:34:39.724322 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:34:39.733890 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:34:39.733939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:34:39.752818 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:34:39.752883 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:34:39.787523 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:34:39.803791 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:34:39.803884 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:34:39.816004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:34:39.816068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:34:39.829387 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:34:39.829497 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:34:39.892014 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:34:39.892159 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:34:39.901948 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:34:39.933558 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:34:40.060552 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:34:39.951288 systemd[1]: Switching root.
Apr 30 00:34:40.064452 systemd-journald[217]: Journal stopped
Apr 30 00:34:44.806860 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:34:44.806885 kernel: SELinux: policy capability open_perms=1
Apr 30 00:34:44.806895 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:34:44.806903 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:34:44.806913 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:34:44.806924 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:34:44.806933 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:34:44.806941 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:34:44.806949 kernel: audit: type=1403 audit(1745973281.415:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:34:44.806959 systemd[1]: Successfully loaded SELinux policy in 199.941ms.
Apr 30 00:34:44.806972 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.345ms.
Apr 30 00:34:44.806982 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:34:44.806991 systemd[1]: Detected virtualization microsoft.
Apr 30 00:34:44.806999 systemd[1]: Detected architecture arm64.
Apr 30 00:34:44.807009 systemd[1]: Detected first boot.
Apr 30 00:34:44.807020 systemd[1]: Hostname set to .
Apr 30 00:34:44.807029 systemd[1]: Initializing machine ID from random generator.
Apr 30 00:34:44.807038 zram_generator::config[1182]: No configuration found.
Apr 30 00:34:44.807048 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:34:44.807056 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 00:34:44.807065 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 00:34:44.807075 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:34:44.807086 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:34:44.807095 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:34:44.807105 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:34:44.807114 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:34:44.807124 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:34:44.807133 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:34:44.807143 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:34:44.807153 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:34:44.807162 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:34:44.807172 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:34:44.807181 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:34:44.807190 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:34:44.807200 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:34:44.807209 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:34:44.807218 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 30 00:34:44.807229 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:34:44.807238 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 00:34:44.807248 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 00:34:44.807273 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:34:44.807285 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:34:44.807295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:34:44.807305 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:34:44.807314 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:34:44.807326 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:34:44.807336 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:34:44.807346 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:34:44.807356 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:34:44.807365 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:34:44.807375 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:34:44.807386 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:34:44.807395 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:34:44.807405 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:34:44.807414 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:34:44.807424 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:34:44.807433 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:34:44.807443 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:34:44.807454 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:34:44.807464 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:34:44.807474 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:34:44.807483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:34:44.807493 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:34:44.807503 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:34:44.807512 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:34:44.807522 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:34:44.807533 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:34:44.807542 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:34:44.807553 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:34:44.807563 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:34:44.807572 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 00:34:44.807582 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 00:34:44.807591 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 00:34:44.807601 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 00:34:44.807612 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:34:44.807621 kernel: loop: module loaded
Apr 30 00:34:44.807630 kernel: fuse: init (API version 7.39)
Apr 30 00:34:44.807639 kernel: ACPI: bus type drm_connector registered
Apr 30 00:34:44.807648 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:34:44.807657 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:34:44.807683 systemd-journald[1285]: Collecting audit messages is disabled.
Apr 30 00:34:44.807709 systemd-journald[1285]: Journal started
Apr 30 00:34:44.807730 systemd-journald[1285]: Runtime Journal (/run/log/journal/bf08a4881690492eb6b250b75131a233) is 8.0M, max 78.5M, 70.5M free.
Apr 30 00:34:43.701610 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:34:43.842044 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 00:34:43.842551 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 00:34:43.842899 systemd[1]: systemd-journald.service: Consumed 3.057s CPU time.
Apr 30 00:34:44.833861 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:34:44.847720 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:34:44.857834 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 00:34:44.857891 systemd[1]: Stopped verity-setup.service.
Apr 30 00:34:44.874277 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:34:44.875209 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:34:44.880932 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:34:44.887062 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:34:44.892499 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:34:44.898481 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:34:44.906024 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:34:44.913859 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:34:44.925309 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:34:44.933083 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:34:44.933457 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:34:44.942405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:34:44.942662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:34:44.948981 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:34:44.949197 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:34:44.955506 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:34:44.955743 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:34:44.962661 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:34:44.962870 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:34:44.968850 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:34:44.969105 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:34:44.975405 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:34:44.981868 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:34:44.989300 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:34:44.997310 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:34:45.011857 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:34:45.023373 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:34:45.030226 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:34:45.036203 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:34:45.036241 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:34:45.042579 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:34:45.058485 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:34:45.066628 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:34:45.072883 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:34:45.111446 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:34:45.118344 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:34:45.125031 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:34:45.127903 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:34:45.136878 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:34:45.139258 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:34:45.148482 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:34:45.156488 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:34:45.177449 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:34:45.188021 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:34:45.194737 systemd-journald[1285]: Time spent on flushing to /var/log/journal/bf08a4881690492eb6b250b75131a233 is 13.079ms for 897 entries.
Apr 30 00:34:45.194737 systemd-journald[1285]: System Journal (/var/log/journal/bf08a4881690492eb6b250b75131a233) is 8.0M, max 2.6G, 2.6G free.
Apr 30 00:34:45.230402 systemd-journald[1285]: Received client request to flush runtime journal.
Apr 30 00:34:45.204027 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:34:45.214629 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:34:45.224138 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:34:45.233059 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:34:45.247587 kernel: loop0: detected capacity change from 0 to 31320
Apr 30 00:34:45.249337 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:34:45.256274 udevadm[1320]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 00:34:45.257336 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:34:45.268430 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:34:45.326358 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:34:45.338542 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:34:45.346063 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:34:45.346962 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:34:45.461525 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Apr 30 00:34:45.461540 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Apr 30 00:34:45.465428 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:34:45.585428 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:34:45.631736 kernel: loop1: detected capacity change from 0 to 201592
Apr 30 00:34:45.692303 kernel: loop2: detected capacity change from 0 to 114328
Apr 30 00:34:46.112288 kernel: loop3: detected capacity change from 0 to 114432
Apr 30 00:34:46.304776 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:34:46.321545 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:34:46.340942 systemd-udevd[1341]: Using default interface naming scheme 'v255'.
Apr 30 00:34:46.366869 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:34:46.383357 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:34:46.439491 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:34:46.456780 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 30 00:34:46.490882 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:34:46.495356 kernel: loop4: detected capacity change from 0 to 31320
Apr 30 00:34:46.512322 kernel: loop5: detected capacity change from 0 to 201592
Apr 30 00:34:46.538349 kernel: loop6: detected capacity change from 0 to 114328
Apr 30 00:34:46.553330 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 00:34:46.553416 kernel: loop7: detected capacity change from 0 to 114432
Apr 30 00:34:46.560395 (sd-merge)[1368]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 30 00:34:46.561135 (sd-merge)[1368]: Merged extensions into '/usr'.
Apr 30 00:34:46.566684 systemd[1]: Reloading requested from client PID 1316 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:34:46.566707 systemd[1]: Reloading...
Apr 30 00:34:46.591288 kernel: hv_vmbus: registering driver hv_balloon
Apr 30 00:34:46.605164 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 30 00:34:46.605249 kernel: hv_balloon: Memory hot add disabled on ARM64
Apr 30 00:34:46.636302 kernel: hv_vmbus: registering driver hyperv_fb
Apr 30 00:34:46.654603 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 30 00:34:46.654700 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 30 00:34:46.669277 kernel: Console: switching to colour dummy device 80x25
Apr 30 00:34:46.669789 systemd-networkd[1346]: lo: Link UP
Apr 30 00:34:46.670611 systemd-networkd[1346]: lo: Gained carrier
Apr 30 00:34:46.675057 systemd-networkd[1346]: Enumeration completed
Apr 30 00:34:46.681748 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 00:34:46.682110 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:34:46.682353 systemd-networkd[1346]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:34:46.717321 zram_generator::config[1439]: No configuration found.
Apr 30 00:34:46.742327 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1349)
Apr 30 00:34:46.761282 kernel: mlx5_core 76f5:00:02.0 enP30453s1: Link up
Apr 30 00:34:46.805298 kernel: hv_netvsc 000d3afb-ef49-000d-3afb-ef49000d3afb eth0: Data path switched to VF: enP30453s1
Apr 30 00:34:46.806579 systemd-networkd[1346]: enP30453s1: Link UP
Apr 30 00:34:46.807046 systemd-networkd[1346]: eth0: Link UP
Apr 30 00:34:46.807139 systemd-networkd[1346]: eth0: Gained carrier
Apr 30 00:34:46.807159 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:34:46.811627 systemd-networkd[1346]: enP30453s1: Gained carrier
Apr 30 00:34:46.826323 systemd-networkd[1346]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16
Apr 30 00:34:46.872840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:34:46.947007 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 30 00:34:46.953830 systemd[1]: Reloading finished in 386 ms.
Apr 30 00:34:46.984991 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:34:46.994130 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:34:47.038637 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:34:47.046619 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 00:34:47.056392 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:34:47.065593 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:34:47.073592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:34:47.088553 systemd[1]: Reloading requested from client PID 1507 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:34:47.088696 systemd[1]: Reloading...
Apr 30 00:34:47.107877 systemd-tmpfiles[1510]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:34:47.108136 systemd-tmpfiles[1510]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:34:47.110525 systemd-tmpfiles[1510]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:34:47.110743 systemd-tmpfiles[1510]: ACLs are not supported, ignoring. Apr 30 00:34:47.111448 systemd-tmpfiles[1510]: ACLs are not supported, ignoring. Apr 30 00:34:47.127908 systemd-tmpfiles[1510]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:34:47.127926 systemd-tmpfiles[1510]: Skipping /boot Apr 30 00:34:47.143799 systemd-tmpfiles[1510]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:34:47.144412 systemd-tmpfiles[1510]: Skipping /boot Apr 30 00:34:47.166285 zram_generator::config[1546]: No configuration found. Apr 30 00:34:47.274886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:34:47.352764 systemd[1]: Reloading finished in 263 ms. Apr 30 00:34:47.366833 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 00:34:47.380349 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:34:47.387861 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:34:47.406570 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:34:47.415521 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 00:34:47.427394 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:34:47.435076 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 00:34:47.446747 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:34:47.454734 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Apr 30 00:34:47.463245 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:34:47.464501 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:34:47.478590 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:34:47.488629 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:34:47.495929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:34:47.498555 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:34:47.505952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:34:47.506093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:34:47.515659 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:34:47.515806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:34:47.528862 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:34:47.529117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:34:47.538485 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 00:34:47.552349 lvm[1616]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:34:47.551505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:34:47.559840 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:34:47.567572 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:34:47.578552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 30 00:34:47.579345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:34:47.579512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:34:47.581951 systemd-resolved[1618]: Positive Trust Anchors: Apr 30 00:34:47.582251 systemd-resolved[1618]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:34:47.582356 systemd-resolved[1618]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:34:47.588069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:34:47.588244 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:34:47.595443 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 00:34:47.606251 systemd-resolved[1618]: Using system hostname 'ci-4081.3.3-a-cee67ba5b3'. Apr 30 00:34:47.608963 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:34:47.615539 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:34:47.622408 systemd[1]: Reached target network.target - Network. Apr 30 00:34:47.627917 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:34:47.634582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:34:47.641538 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Apr 30 00:34:47.651811 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:34:47.653742 lvm[1642]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:34:47.662675 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:34:47.671564 augenrules[1645]: No rules Apr 30 00:34:47.673428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:34:47.682616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:34:47.691634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:34:47.691842 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 00:34:47.700023 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:34:47.708196 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 00:34:47.717341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:34:47.717614 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:34:47.726823 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:34:47.726998 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:34:47.735938 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 00:34:47.744089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:34:47.744229 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:34:47.751531 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:34:47.751649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:34:47.763322 systemd[1]: Finished ensure-sysext.service. 
Apr 30 00:34:47.772124 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:34:47.772200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:34:47.875488 systemd-networkd[1346]: enP30453s1: Gained IPv6LL Apr 30 00:34:48.083791 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 00:34:48.094019 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:34:48.195470 systemd-networkd[1346]: eth0: Gained IPv6LL Apr 30 00:34:48.197938 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:34:48.204903 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:34:51.614508 ldconfig[1311]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 00:34:51.630248 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 00:34:51.642434 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 00:34:51.655629 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 00:34:51.661872 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:34:51.667920 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 00:34:51.674516 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 00:34:51.681595 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Apr 30 00:34:51.687400 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 00:34:51.693948 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 00:34:51.700795 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 00:34:51.700835 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:34:51.705516 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:34:51.739909 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 00:34:51.747342 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 00:34:51.756885 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 00:34:51.762845 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 00:34:51.768517 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:34:51.773371 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:34:51.778307 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:34:51.778335 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:34:51.785361 systemd[1]: Starting chronyd.service - NTP client/server... Apr 30 00:34:51.793410 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 00:34:51.806444 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 00:34:51.812888 (chronyd)[1668]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Apr 30 00:34:51.823105 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 00:34:51.832249 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Apr 30 00:34:51.839564 chronyd[1676]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Apr 30 00:34:51.841572 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 00:34:51.846834 chronyd[1676]: Timezone right/UTC failed leap second check, ignoring Apr 30 00:34:51.847045 chronyd[1676]: Loaded seccomp filter (level 2) Apr 30 00:34:51.847722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 00:34:51.847863 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Apr 30 00:34:51.852331 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Apr 30 00:34:51.858445 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Apr 30 00:34:51.861172 jq[1674]: false Apr 30 00:34:51.867451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:34:51.870604 KVP[1678]: KVP starting; pid is:1678 Apr 30 00:34:51.875447 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 00:34:51.883448 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:34:51.891418 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 00:34:51.899413 kernel: hv_utils: KVP IC version 4.0 Apr 30 00:34:51.899470 KVP[1678]: KVP LIC Version: 3.1 Apr 30 00:34:51.902485 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Apr 30 00:34:51.911304 extend-filesystems[1677]: Found loop4 Apr 30 00:34:51.911304 extend-filesystems[1677]: Found loop5 Apr 30 00:34:51.911304 extend-filesystems[1677]: Found loop6 Apr 30 00:34:51.911304 extend-filesystems[1677]: Found loop7 Apr 30 00:34:51.911304 extend-filesystems[1677]: Found sda Apr 30 00:34:51.911304 extend-filesystems[1677]: Found sda1 Apr 30 00:34:51.911304 extend-filesystems[1677]: Found sda2 Apr 30 00:34:51.911304 extend-filesystems[1677]: Found sda3 Apr 30 00:34:51.911304 extend-filesystems[1677]: Found usr Apr 30 00:34:51.911304 extend-filesystems[1677]: Found sda4 Apr 30 00:34:51.911304 extend-filesystems[1677]: Found sda6 Apr 30 00:34:51.911304 extend-filesystems[1677]: Found sda7 Apr 30 00:34:51.911304 extend-filesystems[1677]: Found sda9 Apr 30 00:34:51.911304 extend-filesystems[1677]: Checking size of /dev/sda9 Apr 30 00:34:52.089327 extend-filesystems[1677]: Old size kept for /dev/sda9 Apr 30 00:34:52.089327 extend-filesystems[1677]: Found sr0 Apr 30 00:34:51.911830 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 00:34:52.013956 dbus-daemon[1671]: [system] SELinux support is enabled Apr 30 00:34:51.937391 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 00:34:51.952351 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 00:34:51.952823 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 00:34:52.129172 update_engine[1699]: I20250430 00:34:52.072482 1699 main.cc:92] Flatcar Update Engine starting Apr 30 00:34:52.129172 update_engine[1699]: I20250430 00:34:52.073726 1699 update_check_scheduler.cc:74] Next update check in 3m22s Apr 30 00:34:51.960561 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 30 00:34:52.129532 jq[1702]: true Apr 30 00:34:51.976521 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 00:34:51.987396 systemd[1]: Started chronyd.service - NTP client/server. Apr 30 00:34:52.010755 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 00:34:52.010909 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 00:34:52.011164 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 00:34:52.011386 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 00:34:52.021704 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 00:34:52.040811 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 00:34:52.042328 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 00:34:52.060824 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 00:34:52.075013 systemd-logind[1694]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 00:34:52.077656 systemd-logind[1694]: New seat seat0. Apr 30 00:34:52.080518 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 00:34:52.127676 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 00:34:52.127848 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 30 00:34:52.140290 coreos-metadata[1670]: Apr 30 00:34:52.139 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 00:34:52.148258 coreos-metadata[1670]: Apr 30 00:34:52.146 INFO Fetch successful Apr 30 00:34:52.148258 coreos-metadata[1670]: Apr 30 00:34:52.146 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 30 00:34:52.148258 coreos-metadata[1670]: Apr 30 00:34:52.148 INFO Fetch successful Apr 30 00:34:52.148258 coreos-metadata[1670]: Apr 30 00:34:52.148 INFO Fetching http://168.63.129.16/machine/2314dbb9-689c-441e-9c08-6ed5d8dcfdd5/7e165ae4%2D2e03%2D421e%2Dab74%2D2c0929257eff.%5Fci%2D4081.3.3%2Da%2Dcee67ba5b3?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 30 00:34:52.154340 coreos-metadata[1670]: Apr 30 00:34:52.149 INFO Fetch successful Apr 30 00:34:52.154340 coreos-metadata[1670]: Apr 30 00:34:52.150 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 30 00:34:52.164567 (ntainerd)[1728]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 00:34:52.165385 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 00:34:52.165879 dbus-daemon[1671]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 00:34:52.165422 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 00:34:52.172817 coreos-metadata[1670]: Apr 30 00:34:52.172 INFO Fetch successful Apr 30 00:34:52.179271 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Apr 30 00:34:52.179296 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 00:34:52.193298 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1722) Apr 30 00:34:52.208399 jq[1727]: true Apr 30 00:34:52.223707 systemd[1]: Started update-engine.service - Update Engine. Apr 30 00:34:52.224862 tar[1725]: linux-arm64/LICENSE Apr 30 00:34:52.226183 tar[1725]: linux-arm64/helm Apr 30 00:34:52.250302 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 00:34:52.275314 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 00:34:52.286426 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 00:34:52.391536 bash[1782]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:34:52.393634 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 00:34:52.410965 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 00:34:52.653379 locksmithd[1756]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 00:34:53.120281 containerd[1728]: time="2025-04-30T00:34:53.118559900Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 00:34:53.141611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:34:53.149478 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:34:53.156810 tar[1725]: linux-arm64/README.md Apr 30 00:34:53.170327 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 00:34:53.195504 containerd[1728]: time="2025-04-30T00:34:53.195445620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 00:34:53.197190 containerd[1728]: time="2025-04-30T00:34:53.197144260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:34:53.197190 containerd[1728]: time="2025-04-30T00:34:53.197181780Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 00:34:53.197326 containerd[1728]: time="2025-04-30T00:34:53.197198380Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 00:34:53.198359 containerd[1728]: time="2025-04-30T00:34:53.198331380Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 00:34:53.198391 containerd[1728]: time="2025-04-30T00:34:53.198370620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 00:34:53.198458 containerd[1728]: time="2025-04-30T00:34:53.198437780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:34:53.198486 containerd[1728]: time="2025-04-30T00:34:53.198455980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:34:53.199368 containerd[1728]: time="2025-04-30T00:34:53.199341940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:34:53.199400 containerd[1728]: time="2025-04-30T00:34:53.199368460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 00:34:53.199429 containerd[1728]: time="2025-04-30T00:34:53.199405820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:34:53.199429 containerd[1728]: time="2025-04-30T00:34:53.199419220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 00:34:53.199532 containerd[1728]: time="2025-04-30T00:34:53.199510380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:34:53.199743 containerd[1728]: time="2025-04-30T00:34:53.199719900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:34:53.199855 containerd[1728]: time="2025-04-30T00:34:53.199833940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:34:53.199855 containerd[1728]: time="2025-04-30T00:34:53.199852860Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 00:34:53.199960 containerd[1728]: time="2025-04-30T00:34:53.199940940Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 30 00:34:53.200008 containerd[1728]: time="2025-04-30T00:34:53.199992180Z" level=info msg="metadata content store policy set" policy=shared Apr 30 00:34:53.219218 containerd[1728]: time="2025-04-30T00:34:53.219170540Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 00:34:53.219319 containerd[1728]: time="2025-04-30T00:34:53.219280300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 00:34:53.219319 containerd[1728]: time="2025-04-30T00:34:53.219301620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 00:34:53.219372 containerd[1728]: time="2025-04-30T00:34:53.219321780Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 00:34:53.219372 containerd[1728]: time="2025-04-30T00:34:53.219339340Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 00:34:53.219631 containerd[1728]: time="2025-04-30T00:34:53.219510300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.219766580Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.219870660Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.219887460Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.219904020Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.219917340Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.219930220Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.219943580Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.219958100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.219973500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.219985140Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.219997180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.220009420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.220028580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220286 containerd[1728]: time="2025-04-30T00:34:53.220042900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220055180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220070620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220082620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220095820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220107260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220123060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220135460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220151460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220163220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220178460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220190580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220210940Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220239180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220252300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220620 containerd[1728]: time="2025-04-30T00:34:53.220286660Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 00:34:53.220871 containerd[1728]: time="2025-04-30T00:34:53.220348260Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 00:34:53.220871 containerd[1728]: time="2025-04-30T00:34:53.220367380Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 00:34:53.220871 containerd[1728]: time="2025-04-30T00:34:53.220378460Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 00:34:53.220871 containerd[1728]: time="2025-04-30T00:34:53.220391940Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:34:53.220871 containerd[1728]: time="2025-04-30T00:34:53.220401580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.220871 containerd[1728]: time="2025-04-30T00:34:53.220413540Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 30 00:34:53.220871 containerd[1728]: time="2025-04-30T00:34:53.220422980Z" level=info msg="NRI interface is disabled by configuration." Apr 30 00:34:53.220871 containerd[1728]: time="2025-04-30T00:34:53.220433060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 00:34:53.221011 containerd[1728]: time="2025-04-30T00:34:53.220708340Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:34:53.221011 containerd[1728]: time="2025-04-30T00:34:53.220768140Z" level=info msg="Connect containerd service" Apr 30 00:34:53.221011 containerd[1728]: time="2025-04-30T00:34:53.220792220Z" level=info msg="using legacy CRI server" Apr 30 00:34:53.221011 containerd[1728]: time="2025-04-30T00:34:53.220798660Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:34:53.221011 containerd[1728]: time="2025-04-30T00:34:53.220892300Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:34:53.222756 containerd[1728]: time="2025-04-30T00:34:53.222718180Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:34:53.223184 containerd[1728]: time="2025-04-30T00:34:53.222861780Z" level=info msg="Start subscribing containerd event" Apr 30 
00:34:53.223184 containerd[1728]: time="2025-04-30T00:34:53.222914740Z" level=info msg="Start recovering state" Apr 30 00:34:53.223184 containerd[1728]: time="2025-04-30T00:34:53.222979540Z" level=info msg="Start event monitor" Apr 30 00:34:53.223184 containerd[1728]: time="2025-04-30T00:34:53.222999860Z" level=info msg="Start snapshots syncer" Apr 30 00:34:53.223184 containerd[1728]: time="2025-04-30T00:34:53.223009580Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:34:53.223184 containerd[1728]: time="2025-04-30T00:34:53.223017620Z" level=info msg="Start streaming server" Apr 30 00:34:53.223662 containerd[1728]: time="2025-04-30T00:34:53.223578940Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:34:53.225093 containerd[1728]: time="2025-04-30T00:34:53.223676540Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:34:53.223829 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:34:53.230696 containerd[1728]: time="2025-04-30T00:34:53.230658580Z" level=info msg="containerd successfully booted in 0.114960s" Apr 30 00:34:53.597726 kubelet[1803]: E0430 00:34:53.597611 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:34:53.600192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:34:53.600344 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:34:54.337062 sshd_keygen[1703]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:34:54.356204 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:34:54.367617 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Apr 30 00:34:54.375554 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 30 00:34:54.381697 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:34:54.383303 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:34:54.397719 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:34:54.404247 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 30 00:34:54.476985 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:34:54.490583 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:34:54.496993 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 00:34:54.504316 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:34:54.509551 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:34:54.515313 systemd[1]: Startup finished in 665ms (kernel) + 12.035s (initrd) + 13.297s (userspace) = 25.999s. Apr 30 00:34:55.477894 login[1836]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:34:55.484225 login[1837]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:34:55.490220 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:34:55.496525 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:34:55.499336 systemd-logind[1694]: New session 2 of user core. Apr 30 00:34:55.502850 systemd-logind[1694]: New session 1 of user core. Apr 30 00:34:55.509126 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:34:55.515729 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 30 00:34:55.518828 (systemd)[1844]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:34:55.649623 systemd[1844]: Queued start job for default target default.target. Apr 30 00:34:55.657222 systemd[1844]: Created slice app.slice - User Application Slice. Apr 30 00:34:55.657255 systemd[1844]: Reached target paths.target - Paths. Apr 30 00:34:55.657282 systemd[1844]: Reached target timers.target - Timers. Apr 30 00:34:55.658477 systemd[1844]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:34:55.668233 systemd[1844]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:34:55.668324 systemd[1844]: Reached target sockets.target - Sockets. Apr 30 00:34:55.668337 systemd[1844]: Reached target basic.target - Basic System. Apr 30 00:34:55.668380 systemd[1844]: Reached target default.target - Main User Target. Apr 30 00:34:55.668407 systemd[1844]: Startup finished in 143ms. Apr 30 00:34:55.668710 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:34:55.679485 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:34:55.680210 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 30 00:35:00.402282 waagent[1833]: 2025-04-30T00:35:00.402019Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 30 00:35:00.407708 waagent[1833]: 2025-04-30T00:35:00.407634Z INFO Daemon Daemon OS: flatcar 4081.3.3 Apr 30 00:35:00.412083 waagent[1833]: 2025-04-30T00:35:00.412029Z INFO Daemon Daemon Python: 3.11.9 Apr 30 00:35:00.416703 waagent[1833]: 2025-04-30T00:35:00.416511Z INFO Daemon Daemon Run daemon Apr 30 00:35:00.420732 waagent[1833]: 2025-04-30T00:35:00.420671Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.3' Apr 30 00:35:00.430255 waagent[1833]: 2025-04-30T00:35:00.430189Z INFO Daemon Daemon Using waagent for provisioning Apr 30 00:35:00.435405 waagent[1833]: 2025-04-30T00:35:00.435356Z INFO Daemon Daemon Activate resource disk Apr 30 00:35:00.440062 waagent[1833]: 2025-04-30T00:35:00.440009Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 30 00:35:00.451701 waagent[1833]: 2025-04-30T00:35:00.451635Z INFO Daemon Daemon Found device: None Apr 30 00:35:00.455891 waagent[1833]: 2025-04-30T00:35:00.455833Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 30 00:35:00.464067 waagent[1833]: 2025-04-30T00:35:00.464001Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 30 00:35:00.476921 waagent[1833]: 2025-04-30T00:35:00.476855Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 00:35:00.482590 waagent[1833]: 2025-04-30T00:35:00.482532Z INFO Daemon Daemon Running default provisioning handler Apr 30 00:35:00.494416 waagent[1833]: 2025-04-30T00:35:00.494342Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Apr 30 00:35:00.507419 waagent[1833]: 2025-04-30T00:35:00.507357Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 30 00:35:00.516899 waagent[1833]: 2025-04-30T00:35:00.516829Z INFO Daemon Daemon cloud-init is enabled: False Apr 30 00:35:00.522206 waagent[1833]: 2025-04-30T00:35:00.522132Z INFO Daemon Daemon Copying ovf-env.xml Apr 30 00:35:01.890004 waagent[1833]: 2025-04-30T00:35:01.889899Z INFO Daemon Daemon Successfully mounted dvd Apr 30 00:35:01.929773 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 30 00:35:01.931717 waagent[1833]: 2025-04-30T00:35:01.931558Z INFO Daemon Daemon Detect protocol endpoint Apr 30 00:35:01.937161 waagent[1833]: 2025-04-30T00:35:01.937097Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 00:35:01.943583 waagent[1833]: 2025-04-30T00:35:01.943522Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 30 00:35:01.949933 waagent[1833]: 2025-04-30T00:35:01.949879Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 30 00:35:01.955134 waagent[1833]: 2025-04-30T00:35:01.955083Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 30 00:35:01.959868 waagent[1833]: 2025-04-30T00:35:01.959821Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 30 00:35:02.081487 waagent[1833]: 2025-04-30T00:35:02.081436Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 30 00:35:02.087827 waagent[1833]: 2025-04-30T00:35:02.087791Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 30 00:35:02.092970 waagent[1833]: 2025-04-30T00:35:02.092923Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 30 00:35:02.489427 waagent[1833]: 2025-04-30T00:35:02.489322Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 30 00:35:02.495502 waagent[1833]: 2025-04-30T00:35:02.495437Z INFO Daemon Daemon Forcing an update of the goal state. 
Apr 30 00:35:02.504929 waagent[1833]: 2025-04-30T00:35:02.504878Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 00:35:02.555255 waagent[1833]: 2025-04-30T00:35:02.555204Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Apr 30 00:35:02.560812 waagent[1833]: 2025-04-30T00:35:02.560763Z INFO Daemon Apr 30 00:35:02.563527 waagent[1833]: 2025-04-30T00:35:02.563483Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 13be9ab6-68b9-4d66-9e6c-cb2c44bc9b7e eTag: 12896034219073881606 source: Fabric] Apr 30 00:35:02.574372 waagent[1833]: 2025-04-30T00:35:02.574321Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Apr 30 00:35:02.580896 waagent[1833]: 2025-04-30T00:35:02.580848Z INFO Daemon Apr 30 00:35:02.583659 waagent[1833]: 2025-04-30T00:35:02.583610Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 30 00:35:02.596528 waagent[1833]: 2025-04-30T00:35:02.596493Z INFO Daemon Daemon Downloading artifacts profile blob Apr 30 00:35:02.698128 waagent[1833]: 2025-04-30T00:35:02.698039Z INFO Daemon Downloaded certificate {'thumbprint': '292B70BB830BC9829A5491DB3DCA4DF7401346BA', 'hasPrivateKey': False} Apr 30 00:35:02.707931 waagent[1833]: 2025-04-30T00:35:02.707881Z INFO Daemon Downloaded certificate {'thumbprint': '245F3B677B442752676A2EB8E5B9DC365247CA6B', 'hasPrivateKey': True} Apr 30 00:35:02.717349 waagent[1833]: 2025-04-30T00:35:02.717302Z INFO Daemon Fetch goal state completed Apr 30 00:35:02.728456 waagent[1833]: 2025-04-30T00:35:02.728413Z INFO Daemon Daemon Starting provisioning Apr 30 00:35:02.733393 waagent[1833]: 2025-04-30T00:35:02.733331Z INFO Daemon Daemon Handle ovf-env.xml. 
Apr 30 00:35:02.737828 waagent[1833]: 2025-04-30T00:35:02.737782Z INFO Daemon Daemon Set hostname [ci-4081.3.3-a-cee67ba5b3] Apr 30 00:35:03.034281 waagent[1833]: 2025-04-30T00:35:03.032891Z INFO Daemon Daemon Publish hostname [ci-4081.3.3-a-cee67ba5b3] Apr 30 00:35:03.039598 waagent[1833]: 2025-04-30T00:35:03.039538Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 30 00:35:03.045548 waagent[1833]: 2025-04-30T00:35:03.045501Z INFO Daemon Daemon Primary interface is [eth0] Apr 30 00:35:03.079234 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:35:03.079282 systemd-networkd[1346]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:35:03.079312 systemd-networkd[1346]: eth0: DHCP lease lost Apr 30 00:35:03.080991 waagent[1833]: 2025-04-30T00:35:03.080849Z INFO Daemon Daemon Create user account if not exists Apr 30 00:35:03.087298 waagent[1833]: 2025-04-30T00:35:03.087222Z INFO Daemon Daemon User core already exists, skip useradd Apr 30 00:35:03.092931 waagent[1833]: 2025-04-30T00:35:03.092870Z INFO Daemon Daemon Configure sudoer Apr 30 00:35:03.093035 systemd-networkd[1346]: eth0: DHCPv6 lease lost Apr 30 00:35:03.097625 waagent[1833]: 2025-04-30T00:35:03.097561Z INFO Daemon Daemon Configure sshd Apr 30 00:35:03.102195 waagent[1833]: 2025-04-30T00:35:03.102132Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 30 00:35:03.115668 waagent[1833]: 2025-04-30T00:35:03.115543Z INFO Daemon Daemon Deploy ssh public key. Apr 30 00:35:03.122340 systemd-networkd[1346]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 30 00:35:03.850741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 30 00:35:03.861472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:35:04.059496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:35:04.069691 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:35:04.128971 kubelet[1910]: E0430 00:35:04.128846 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:35:04.131949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:35:04.132092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:35:04.458247 waagent[1833]: 2025-04-30T00:35:04.458142Z INFO Daemon Daemon Provisioning complete Apr 30 00:35:04.476719 waagent[1833]: 2025-04-30T00:35:04.476669Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 30 00:35:04.483065 waagent[1833]: 2025-04-30T00:35:04.483009Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Apr 30 00:35:04.493126 waagent[1833]: 2025-04-30T00:35:04.493074Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 30 00:35:04.624116 waagent[1917]: 2025-04-30T00:35:04.623475Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 30 00:35:04.624116 waagent[1917]: 2025-04-30T00:35:04.623625Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.3 Apr 30 00:35:04.624116 waagent[1917]: 2025-04-30T00:35:04.623677Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 30 00:35:07.880526 waagent[1917]: 2025-04-30T00:35:07.880244Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 30 00:35:07.880871 waagent[1917]: 2025-04-30T00:35:07.880567Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 00:35:07.880871 waagent[1917]: 2025-04-30T00:35:07.880641Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 00:35:07.889650 waagent[1917]: 2025-04-30T00:35:07.889574Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 00:35:07.899712 waagent[1917]: 2025-04-30T00:35:07.899662Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Apr 30 00:35:07.900273 waagent[1917]: 2025-04-30T00:35:07.900225Z INFO ExtHandler Apr 30 00:35:07.900371 waagent[1917]: 2025-04-30T00:35:07.900336Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 09b1547a-8a7d-4e32-87ae-8fb139b343af eTag: 12896034219073881606 source: Fabric] Apr 30 00:35:07.900686 waagent[1917]: 2025-04-30T00:35:07.900644Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Apr 30 00:35:07.989420 waagent[1917]: 2025-04-30T00:35:07.989318Z INFO ExtHandler Apr 30 00:35:07.989525 waagent[1917]: 2025-04-30T00:35:07.989496Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 30 00:35:07.993988 waagent[1917]: 2025-04-30T00:35:07.993945Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 30 00:35:08.305671 waagent[1917]: 2025-04-30T00:35:08.305518Z INFO ExtHandler Downloaded certificate {'thumbprint': '292B70BB830BC9829A5491DB3DCA4DF7401346BA', 'hasPrivateKey': False} Apr 30 00:35:08.306077 waagent[1917]: 2025-04-30T00:35:08.306030Z INFO ExtHandler Downloaded certificate {'thumbprint': '245F3B677B442752676A2EB8E5B9DC365247CA6B', 'hasPrivateKey': True} Apr 30 00:35:08.306548 waagent[1917]: 2025-04-30T00:35:08.306495Z INFO ExtHandler Fetch goal state completed Apr 30 00:35:08.322982 waagent[1917]: 2025-04-30T00:35:08.322911Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1917 Apr 30 00:35:08.323155 waagent[1917]: 2025-04-30T00:35:08.323117Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 30 00:35:08.324928 waagent[1917]: 2025-04-30T00:35:08.324877Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.3', '', 'Flatcar Container Linux by Kinvolk'] Apr 30 00:35:08.325346 waagent[1917]: 2025-04-30T00:35:08.325309Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 30 00:35:08.379322 waagent[1917]: 2025-04-30T00:35:08.379275Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 30 00:35:08.379535 waagent[1917]: 2025-04-30T00:35:08.379494Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 30 00:35:08.386165 waagent[1917]: 2025-04-30T00:35:08.385647Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Apr 30 00:35:08.392743 systemd[1]: Reloading requested from client PID 1935 ('systemctl') (unit waagent.service)... Apr 30 00:35:08.393038 systemd[1]: Reloading... Apr 30 00:35:08.469642 zram_generator::config[1969]: No configuration found. Apr 30 00:35:08.579201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:35:08.654569 systemd[1]: Reloading finished in 261 ms. Apr 30 00:35:08.671992 waagent[1917]: 2025-04-30T00:35:08.671467Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 30 00:35:08.677700 systemd[1]: Reloading requested from client PID 2023 ('systemctl') (unit waagent.service)... Apr 30 00:35:08.677717 systemd[1]: Reloading... Apr 30 00:35:08.764322 zram_generator::config[2063]: No configuration found. Apr 30 00:35:08.847218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:35:08.922996 systemd[1]: Reloading finished in 244 ms. Apr 30 00:35:08.946313 waagent[1917]: 2025-04-30T00:35:08.945554Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 30 00:35:08.946313 waagent[1917]: 2025-04-30T00:35:08.945784Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 30 00:35:09.732302 waagent[1917]: 2025-04-30T00:35:09.731900Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 30 00:35:09.732612 waagent[1917]: 2025-04-30T00:35:09.732558Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 30 00:35:09.733475 waagent[1917]: 2025-04-30T00:35:09.733383Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 30 00:35:09.733948 waagent[1917]: 2025-04-30T00:35:09.733851Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 30 00:35:09.734956 waagent[1917]: 2025-04-30T00:35:09.734185Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 00:35:09.734956 waagent[1917]: 2025-04-30T00:35:09.734291Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 00:35:09.734956 waagent[1917]: 2025-04-30T00:35:09.734512Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 30 00:35:09.734956 waagent[1917]: 2025-04-30T00:35:09.734687Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 30 00:35:09.734956 waagent[1917]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 30 00:35:09.734956 waagent[1917]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Apr 30 00:35:09.734956 waagent[1917]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 30 00:35:09.734956 waagent[1917]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 30 00:35:09.734956 waagent[1917]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 00:35:09.734956 waagent[1917]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 00:35:09.735358 waagent[1917]: 2025-04-30T00:35:09.735283Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 30 00:35:09.735492 waagent[1917]: 2025-04-30T00:35:09.735445Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Apr 30 00:35:09.736010 waagent[1917]: 2025-04-30T00:35:09.735937Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 30 00:35:09.736162 waagent[1917]: 2025-04-30T00:35:09.736118Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Apr 30 00:35:09.736326 waagent[1917]: 2025-04-30T00:35:09.736284Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 00:35:09.736443 waagent[1917]: 2025-04-30T00:35:09.736395Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 30 00:35:09.736757 waagent[1917]: 2025-04-30T00:35:09.736715Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 00:35:09.737144 waagent[1917]: 2025-04-30T00:35:09.737089Z INFO EnvHandler ExtHandler Configure routes Apr 30 00:35:09.738158 waagent[1917]: 2025-04-30T00:35:09.738059Z INFO EnvHandler ExtHandler Gateway:None Apr 30 00:35:09.738642 waagent[1917]: 2025-04-30T00:35:09.738592Z INFO EnvHandler ExtHandler Routes:None Apr 30 00:35:09.744323 waagent[1917]: 2025-04-30T00:35:09.744255Z INFO ExtHandler ExtHandler Apr 30 00:35:09.744836 waagent[1917]: 2025-04-30T00:35:09.744785Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7ef26331-5aed-4468-858f-666ec78654a7 correlation 8cda0e5a-9414-4ea2-ba47-266b079a2f2f created: 2025-04-30T00:33:39.270195Z] Apr 30 00:35:09.745646 waagent[1917]: 2025-04-30T00:35:09.745604Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Apr 30 00:35:09.747029 waagent[1917]: 2025-04-30T00:35:09.746291Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Apr 30 00:35:09.786105 waagent[1917]: 2025-04-30T00:35:09.786031Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B22CA707-9F1A-43BB-83BE-AB451CBF206A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Apr 30 00:35:09.999030 waagent[1917]: 2025-04-30T00:35:09.998505Z INFO MonitorHandler ExtHandler Network interfaces: Apr 30 00:35:09.999030 waagent[1917]: Executing ['ip', '-a', '-o', 'link']: Apr 30 00:35:09.999030 waagent[1917]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 30 00:35:09.999030 waagent[1917]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:ef:49 brd ff:ff:ff:ff:ff:ff Apr 30 00:35:09.999030 waagent[1917]: 3: enP30453s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:ef:49 brd ff:ff:ff:ff:ff:ff\ altname enP30453p0s2 Apr 30 00:35:09.999030 waagent[1917]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 30 00:35:09.999030 waagent[1917]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 30 00:35:09.999030 waagent[1917]: 2: eth0 inet 10.200.20.34/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 30 00:35:09.999030 waagent[1917]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 30 00:35:09.999030 waagent[1917]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 30 00:35:09.999030 waagent[1917]: 2: eth0 inet6 fe80::20d:3aff:fefb:ef49/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 30 00:35:09.999030 waagent[1917]: 3: enP30453s1 inet6 fe80::20d:3aff:fefb:ef49/64 scope link proto 
kernel_ll \ valid_lft forever preferred_lft forever Apr 30 00:35:10.332349 waagent[1917]: 2025-04-30T00:35:10.332028Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Apr 30 00:35:10.332349 waagent[1917]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:35:10.332349 waagent[1917]: pkts bytes target prot opt in out source destination Apr 30 00:35:10.332349 waagent[1917]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:35:10.332349 waagent[1917]: pkts bytes target prot opt in out source destination Apr 30 00:35:10.332349 waagent[1917]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:35:10.332349 waagent[1917]: pkts bytes target prot opt in out source destination Apr 30 00:35:10.332349 waagent[1917]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 30 00:35:10.332349 waagent[1917]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 30 00:35:10.332349 waagent[1917]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 30 00:35:10.335353 waagent[1917]: 2025-04-30T00:35:10.335237Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 30 00:35:10.335353 waagent[1917]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:35:10.335353 waagent[1917]: pkts bytes target prot opt in out source destination Apr 30 00:35:10.335353 waagent[1917]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:35:10.335353 waagent[1917]: pkts bytes target prot opt in out source destination Apr 30 00:35:10.335353 waagent[1917]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:35:10.335353 waagent[1917]: pkts bytes target prot opt in out source destination Apr 30 00:35:10.335353 waagent[1917]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 30 00:35:10.335353 waagent[1917]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 30 00:35:10.335353 waagent[1917]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 
30 00:35:10.335624 waagent[1917]: 2025-04-30T00:35:10.335591Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 30 00:35:14.153041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 00:35:14.160471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:35:14.265216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:35:14.269552 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:35:14.377750 kubelet[2152]: E0430 00:35:14.377692 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:35:14.380349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:35:14.380496 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:35:15.635774 chronyd[1676]: Selected source PHC0 Apr 30 00:35:24.403125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 00:35:24.411448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:35:24.741082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:35:24.754617 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:35:24.796313 kubelet[2168]: E0430 00:35:24.796242 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:35:24.798912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:35:24.799060 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:35:31.470245 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:35:31.476508 systemd[1]: Started sshd@0-10.200.20.34:22-10.200.16.10:40048.service - OpenSSH per-connection server daemon (10.200.16.10:40048). Apr 30 00:35:32.030580 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 40048 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:35:32.031913 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:35:32.036033 systemd-logind[1694]: New session 3 of user core. Apr 30 00:35:32.047414 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:35:32.441465 systemd[1]: Started sshd@1-10.200.20.34:22-10.200.16.10:40064.service - OpenSSH per-connection server daemon (10.200.16.10:40064). Apr 30 00:35:32.889983 sshd[2181]: Accepted publickey for core from 10.200.16.10 port 40064 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:35:32.892462 sshd[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:35:32.896148 systemd-logind[1694]: New session 4 of user core. Apr 30 00:35:32.903415 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 30 00:35:33.216770 sshd[2181]: pam_unix(sshd:session): session closed for user core
Apr 30 00:35:33.220026 systemd-logind[1694]: Session 4 logged out. Waiting for processes to exit.
Apr 30 00:35:33.220899 systemd[1]: sshd@1-10.200.20.34:22-10.200.16.10:40064.service: Deactivated successfully.
Apr 30 00:35:33.223495 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 00:35:33.224278 systemd-logind[1694]: Removed session 4.
Apr 30 00:35:33.304677 systemd[1]: Started sshd@2-10.200.20.34:22-10.200.16.10:40070.service - OpenSSH per-connection server daemon (10.200.16.10:40070).
Apr 30 00:35:33.754091 sshd[2188]: Accepted publickey for core from 10.200.16.10 port 40070 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:35:33.755439 sshd[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:35:33.758960 systemd-logind[1694]: New session 5 of user core.
Apr 30 00:35:33.769476 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 00:35:34.078331 sshd[2188]: pam_unix(sshd:session): session closed for user core
Apr 30 00:35:34.082014 systemd-logind[1694]: Session 5 logged out. Waiting for processes to exit.
Apr 30 00:35:34.082228 systemd[1]: sshd@2-10.200.20.34:22-10.200.16.10:40070.service: Deactivated successfully.
Apr 30 00:35:34.083903 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 00:35:34.086294 systemd-logind[1694]: Removed session 5.
Apr 30 00:35:34.159425 systemd[1]: Started sshd@3-10.200.20.34:22-10.200.16.10:40072.service - OpenSSH per-connection server daemon (10.200.16.10:40072).
Apr 30 00:35:34.603427 sshd[2195]: Accepted publickey for core from 10.200.16.10 port 40072 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:35:34.604968 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:35:34.609019 systemd-logind[1694]: New session 6 of user core.
Apr 30 00:35:34.615427 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 00:35:34.703965 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Apr 30 00:35:34.858087 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 30 00:35:34.861434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:35:34.929507 sshd[2195]: pam_unix(sshd:session): session closed for user core
Apr 30 00:35:34.932298 systemd-logind[1694]: Session 6 logged out. Waiting for processes to exit.
Apr 30 00:35:34.932720 systemd[1]: sshd@3-10.200.20.34:22-10.200.16.10:40072.service: Deactivated successfully.
Apr 30 00:35:34.934761 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 00:35:34.936327 systemd-logind[1694]: Removed session 6.
Apr 30 00:35:35.014727 systemd[1]: Started sshd@4-10.200.20.34:22-10.200.16.10:40078.service - OpenSSH per-connection server daemon (10.200.16.10:40078).
Apr 30 00:35:35.303455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:35:35.312637 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:35:35.349966 kubelet[2212]: E0430 00:35:35.349897 2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:35:35.352555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:35:35.352809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:35:35.457111 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 40078 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:35:35.458465 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:35:35.463295 systemd-logind[1694]: New session 7 of user core.
Apr 30 00:35:35.468456 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 00:35:37.519860 update_engine[1699]: I20250430 00:35:37.519248 1699 update_attempter.cc:509] Updating boot flags...
Apr 30 00:35:38.120500 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2233)
Apr 30 00:35:38.162687 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 00:35:38.163010 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:35:38.200461 sudo[2220]: pam_unix(sudo:session): session closed for user root
Apr 30 00:35:38.271113 sshd[2205]: pam_unix(sshd:session): session closed for user core
Apr 30 00:35:38.275226 systemd[1]: sshd@4-10.200.20.34:22-10.200.16.10:40078.service: Deactivated successfully.
Apr 30 00:35:38.276768 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 00:35:38.277506 systemd-logind[1694]: Session 7 logged out. Waiting for processes to exit.
Apr 30 00:35:38.278410 systemd-logind[1694]: Removed session 7.
Apr 30 00:35:38.362892 systemd[1]: Started sshd@5-10.200.20.34:22-10.200.16.10:40092.service - OpenSSH per-connection server daemon (10.200.16.10:40092).
Apr 30 00:35:38.836958 sshd[2264]: Accepted publickey for core from 10.200.16.10 port 40092 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:35:38.838374 sshd[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:35:38.842990 systemd-logind[1694]: New session 8 of user core.
Apr 30 00:35:38.848472 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 00:35:39.106766 sudo[2268]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 00:35:39.107033 sudo[2268]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:35:39.110552 sudo[2268]: pam_unix(sudo:session): session closed for user root
Apr 30 00:35:39.114965 sudo[2267]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 00:35:39.115214 sudo[2267]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:35:39.129608 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 00:35:39.130660 auditctl[2271]: No rules
Apr 30 00:35:39.130960 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:35:39.131123 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 00:35:39.133678 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:35:39.156978 augenrules[2289]: No rules
Apr 30 00:35:39.158399 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:35:39.159687 sudo[2267]: pam_unix(sudo:session): session closed for user root
Apr 30 00:35:39.240758 sshd[2264]: pam_unix(sshd:session): session closed for user core
Apr 30 00:35:39.244517 systemd[1]: sshd@5-10.200.20.34:22-10.200.16.10:40092.service: Deactivated successfully.
Apr 30 00:35:39.246036 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 00:35:39.246750 systemd-logind[1694]: Session 8 logged out. Waiting for processes to exit.
Apr 30 00:35:39.247710 systemd-logind[1694]: Removed session 8.
Apr 30 00:35:39.325567 systemd[1]: Started sshd@6-10.200.20.34:22-10.200.16.10:58614.service - OpenSSH per-connection server daemon (10.200.16.10:58614).
Apr 30 00:35:39.798612 sshd[2297]: Accepted publickey for core from 10.200.16.10 port 58614 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:35:39.800041 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:35:39.804299 systemd-logind[1694]: New session 9 of user core.
Apr 30 00:35:39.811510 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 00:35:40.067287 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 00:35:40.067567 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:35:41.068763 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 00:35:41.069725 (dockerd)[2315]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 00:35:41.873302 dockerd[2315]: time="2025-04-30T00:35:41.872961255Z" level=info msg="Starting up"
Apr 30 00:35:42.270209 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3843255921-merged.mount: Deactivated successfully.
Apr 30 00:35:42.506412 dockerd[2315]: time="2025-04-30T00:35:42.506355552Z" level=info msg="Loading containers: start."
Apr 30 00:35:42.713388 kernel: Initializing XFRM netlink socket
Apr 30 00:35:42.866577 systemd-networkd[1346]: docker0: Link UP
Apr 30 00:35:42.897454 dockerd[2315]: time="2025-04-30T00:35:42.897407601Z" level=info msg="Loading containers: done."
Apr 30 00:35:43.267252 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2797876203-merged.mount: Deactivated successfully.
Apr 30 00:35:43.313294 dockerd[2315]: time="2025-04-30T00:35:43.313218585Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 00:35:43.313389 dockerd[2315]: time="2025-04-30T00:35:43.313356665Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 00:35:43.313525 dockerd[2315]: time="2025-04-30T00:35:43.313491466Z" level=info msg="Daemon has completed initialization"
Apr 30 00:35:43.566487 dockerd[2315]: time="2025-04-30T00:35:43.566305895Z" level=info msg="API listen on /run/docker.sock"
Apr 30 00:35:43.566707 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 00:35:44.404465 containerd[1728]: time="2025-04-30T00:35:44.404423997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
Apr 30 00:35:45.402951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 30 00:35:45.408493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:35:45.507333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:35:45.518598 (kubelet)[2460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:35:45.578063 kubelet[2460]: E0430 00:35:45.578006 2460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:35:45.580180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:35:45.580342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:35:48.531149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560461661.mount: Deactivated successfully.
Apr 30 00:35:50.424417 containerd[1728]: time="2025-04-30T00:35:50.424358979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:50.427285 containerd[1728]: time="2025-04-30T00:35:50.427222225Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118"
Apr 30 00:35:50.430375 containerd[1728]: time="2025-04-30T00:35:50.430334352Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:50.435660 containerd[1728]: time="2025-04-30T00:35:50.435602922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:50.436909 containerd[1728]: time="2025-04-30T00:35:50.436734085Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 6.032269888s"
Apr 30 00:35:50.436909 containerd[1728]: time="2025-04-30T00:35:50.436771685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
Apr 30 00:35:50.437532 containerd[1728]: time="2025-04-30T00:35:50.437365806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
Apr 30 00:35:52.451820 containerd[1728]: time="2025-04-30T00:35:52.451764304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:52.456037 containerd[1728]: time="2025-04-30T00:35:52.455801672Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571"
Apr 30 00:35:52.460354 containerd[1728]: time="2025-04-30T00:35:52.460328162Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:52.469965 containerd[1728]: time="2025-04-30T00:35:52.469916021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:52.471289 containerd[1728]: time="2025-04-30T00:35:52.471146464Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 2.033748178s"
Apr 30 00:35:52.471289 containerd[1728]: time="2025-04-30T00:35:52.471180904Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
Apr 30 00:35:52.471929 containerd[1728]: time="2025-04-30T00:35:52.471848985Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
Apr 30 00:35:54.616757 containerd[1728]: time="2025-04-30T00:35:54.616698629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:54.625115 containerd[1728]: time="2025-04-30T00:35:54.625076766Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173"
Apr 30 00:35:54.633170 containerd[1728]: time="2025-04-30T00:35:54.633117622Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:54.641480 containerd[1728]: time="2025-04-30T00:35:54.641414519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:54.642535 containerd[1728]: time="2025-04-30T00:35:54.642492201Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 2.170608056s"
Apr 30 00:35:54.642535 containerd[1728]: time="2025-04-30T00:35:54.642532521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
Apr 30 00:35:54.643711 containerd[1728]: time="2025-04-30T00:35:54.643547083Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
Apr 30 00:35:55.652977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 30 00:35:55.661464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:35:55.764195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:35:55.775552 (kubelet)[2537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:35:55.814516 kubelet[2537]: E0430 00:35:55.814440 2537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:35:55.816009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:35:55.816253 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:35:56.579586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005726266.mount: Deactivated successfully.
Apr 30 00:35:56.988993 containerd[1728]: time="2025-04-30T00:35:56.988317122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:56.993063 containerd[1728]: time="2025-04-30T00:35:56.993024691Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351"
Apr 30 00:35:56.998825 containerd[1728]: time="2025-04-30T00:35:56.998782661Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:57.007688 containerd[1728]: time="2025-04-30T00:35:57.007625078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:57.008851 containerd[1728]: time="2025-04-30T00:35:57.008470040Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 2.364892796s"
Apr 30 00:35:57.008851 containerd[1728]: time="2025-04-30T00:35:57.008501760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
Apr 30 00:35:57.008944 containerd[1728]: time="2025-04-30T00:35:57.008889040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Apr 30 00:35:57.881039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2347134691.mount: Deactivated successfully.
Apr 30 00:35:59.727665 containerd[1728]: time="2025-04-30T00:35:59.727602807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:59.734530 containerd[1728]: time="2025-04-30T00:35:59.734469580Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Apr 30 00:35:59.735671 containerd[1728]: time="2025-04-30T00:35:59.735616062Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:59.741719 containerd[1728]: time="2025-04-30T00:35:59.741656193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:35:59.742981 containerd[1728]: time="2025-04-30T00:35:59.742934596Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.734020156s"
Apr 30 00:35:59.743043 containerd[1728]: time="2025-04-30T00:35:59.742982076Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Apr 30 00:35:59.743759 containerd[1728]: time="2025-04-30T00:35:59.743459957Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 30 00:36:00.439500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3993503526.mount: Deactivated successfully.
Apr 30 00:36:00.477462 containerd[1728]: time="2025-04-30T00:36:00.477387810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:36:00.490286 containerd[1728]: time="2025-04-30T00:36:00.490231714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Apr 30 00:36:00.496375 containerd[1728]: time="2025-04-30T00:36:00.496325045Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:36:00.502544 containerd[1728]: time="2025-04-30T00:36:00.502483537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:36:00.503321 containerd[1728]: time="2025-04-30T00:36:00.503141898Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 759.650981ms"
Apr 30 00:36:00.503321 containerd[1728]: time="2025-04-30T00:36:00.503177138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 30 00:36:00.504178 containerd[1728]: time="2025-04-30T00:36:00.503783179Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Apr 30 00:36:01.354783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1289503277.mount: Deactivated successfully.
Apr 30 00:36:05.803347 containerd[1728]: time="2025-04-30T00:36:05.803284601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:36:05.807535 containerd[1728]: time="2025-04-30T00:36:05.807171009Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469"
Apr 30 00:36:05.816773 containerd[1728]: time="2025-04-30T00:36:05.816697868Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:36:05.828807 containerd[1728]: time="2025-04-30T00:36:05.828737093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:36:05.830168 containerd[1728]: time="2025-04-30T00:36:05.830014895Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 5.326172876s"
Apr 30 00:36:05.830168 containerd[1728]: time="2025-04-30T00:36:05.830059455Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Apr 30 00:36:05.903075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 30 00:36:05.909500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:36:06.044955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:06.056565 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:36:06.098580 kubelet[2677]: E0430 00:36:06.098497 2677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:36:06.101415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:36:06.101709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:36:12.150901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:12.157550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:36:12.193203 systemd[1]: Reloading requested from client PID 2704 ('systemctl') (unit session-9.scope)...
Apr 30 00:36:12.193230 systemd[1]: Reloading...
Apr 30 00:36:12.313351 zram_generator::config[2744]: No configuration found.
Apr 30 00:36:12.433322 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:36:12.512641 systemd[1]: Reloading finished in 318 ms.
Apr 30 00:36:12.561287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:12.566500 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:36:12.568525 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 00:36:12.568862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:12.576529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:36:12.695065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:12.702887 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:36:12.742745 kubelet[2813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:36:12.743077 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:36:12.743120 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:36:12.743294 kubelet[2813]: I0430 00:36:12.743248 2813 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:36:13.586105 kubelet[2813]: I0430 00:36:13.586058 2813 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Apr 30 00:36:13.587310 kubelet[2813]: I0430 00:36:13.586257 2813 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:36:13.587310 kubelet[2813]: I0430 00:36:13.586566 2813 server.go:954] "Client rotation is on, will bootstrap in background"
Apr 30 00:36:13.606186 kubelet[2813]: E0430 00:36:13.606126 2813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:36:13.608621 kubelet[2813]: I0430 00:36:13.608407 2813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:36:13.618150 kubelet[2813]: E0430 00:36:13.618086 2813 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 00:36:13.618150 kubelet[2813]: I0430 00:36:13.618143 2813 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 00:36:13.622762 kubelet[2813]: I0430 00:36:13.622714 2813 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:36:13.623780 kubelet[2813]: I0430 00:36:13.623719 2813 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:36:13.624016 kubelet[2813]: I0430 00:36:13.623782 2813 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-cee67ba5b3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 00:36:13.624154 kubelet[2813]: I0430 00:36:13.624033 2813 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:36:13.624154 kubelet[2813]: I0430 00:36:13.624043 2813 container_manager_linux.go:304] "Creating device plugin manager"
Apr 30 00:36:13.624273 kubelet[2813]: I0430 00:36:13.624222 2813 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:36:13.627491 kubelet[2813]: I0430 00:36:13.627454 2813 kubelet.go:446] "Attempting to sync node with API server"
Apr 30 00:36:13.627491 kubelet[2813]: I0430 00:36:13.627493 2813 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:36:13.627602 kubelet[2813]: I0430 00:36:13.627523 2813 kubelet.go:352] "Adding apiserver pod source"
Apr 30 00:36:13.627602 kubelet[2813]: I0430 00:36:13.627535 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:36:13.634170 kubelet[2813]: W0430 00:36:13.633524 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Apr 30 00:36:13.634170 kubelet[2813]: E0430 00:36:13.633622 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:36:13.634170 kubelet[2813]: W0430 00:36:13.634084 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-cee67ba5b3&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Apr 30 00:36:13.634170 kubelet[2813]: E0430 00:36:13.634122 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-cee67ba5b3&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:36:13.635124 kubelet[2813]: I0430 00:36:13.635097 2813 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 00:36:13.636052 kubelet[2813]: I0430 00:36:13.636017 2813 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:36:13.636405 kubelet[2813]: W0430 00:36:13.636370 2813 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 00:36:13.637780 kubelet[2813]: I0430 00:36:13.637569 2813 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 30 00:36:13.637780 kubelet[2813]: I0430 00:36:13.637621 2813 server.go:1287] "Started kubelet"
Apr 30 00:36:13.640332 kubelet[2813]: I0430 00:36:13.640252 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:36:13.641783 kubelet[2813]: I0430 00:36:13.641105 2813 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:36:13.642302 kubelet[2813]: I0430 00:36:13.642250 2813 server.go:490] "Adding debug handlers to kubelet server"
Apr 30 00:36:13.643946 kubelet[2813]: I0430 00:36:13.643841 2813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:36:13.644227 kubelet[2813]: I0430 00:36:13.644194 2813 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:36:13.647587 kubelet[2813]: I0430 00:36:13.647556 2813 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 30 00:36:13.647860 kubelet[2813]: I0430 00:36:13.647788 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 00:36:13.648178 kubelet[2813]: E0430 00:36:13.648148 2813 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found"
Apr 30 00:36:13.653220 kubelet[2813]: I0430
00:36:13.653124 2813 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:36:13.654569 kubelet[2813]: I0430 00:36:13.653510 2813 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:36:13.654569 kubelet[2813]: E0430 00:36:13.654366 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-cee67ba5b3?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="200ms" Apr 30 00:36:13.654792 kubelet[2813]: E0430 00:36:13.654496 2813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-a-cee67ba5b3.183af18862f588bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-cee67ba5b3,UID:ci-4081.3.3-a-cee67ba5b3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-cee67ba5b3,},FirstTimestamp:2025-04-30 00:36:13.637593275 +0000 UTC m=+0.930876073,LastTimestamp:2025-04-30 00:36:13.637593275 +0000 UTC m=+0.930876073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-cee67ba5b3,}" Apr 30 00:36:13.655092 kubelet[2813]: I0430 00:36:13.655059 2813 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:36:13.655378 kubelet[2813]: I0430 00:36:13.655352 2813 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:36:13.658329 kubelet[2813]: W0430 00:36:13.658158 2813 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Apr 30 00:36:13.658490 kubelet[2813]: E0430 00:36:13.658332 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:36:13.658661 kubelet[2813]: I0430 00:36:13.658557 2813 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:36:13.735011 kubelet[2813]: I0430 00:36:13.734804 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:36:13.737526 kubelet[2813]: I0430 00:36:13.737256 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:36:13.738093 kubelet[2813]: I0430 00:36:13.737690 2813 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 00:36:13.738093 kubelet[2813]: I0430 00:36:13.737726 2813 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 00:36:13.738093 kubelet[2813]: I0430 00:36:13.737745 2813 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 00:36:13.738093 kubelet[2813]: E0430 00:36:13.737803 2813 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:36:13.740242 kubelet[2813]: W0430 00:36:13.740198 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Apr 30 00:36:13.742522 kubelet[2813]: E0430 00:36:13.742483 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:36:13.748520 kubelet[2813]: E0430 00:36:13.748472 2813 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found" Apr 30 00:36:13.839317 kubelet[2813]: E0430 00:36:13.838222 2813 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 00:36:13.849438 kubelet[2813]: E0430 00:36:13.849397 2813 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found" Apr 30 00:36:13.854894 kubelet[2813]: E0430 00:36:13.854853 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-cee67ba5b3?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="400ms" Apr 30 00:36:13.950316 kubelet[2813]: E0430 
00:36:13.950277 2813 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found" Apr 30 00:36:13.967558 kubelet[2813]: I0430 00:36:13.967524 2813 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 00:36:13.967558 kubelet[2813]: I0430 00:36:13.967545 2813 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 00:36:13.967558 kubelet[2813]: I0430 00:36:13.967566 2813 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:36:13.976468 kubelet[2813]: I0430 00:36:13.976425 2813 policy_none.go:49] "None policy: Start" Apr 30 00:36:13.976468 kubelet[2813]: I0430 00:36:13.976467 2813 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 00:36:13.976468 kubelet[2813]: I0430 00:36:13.976480 2813 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:36:13.991899 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 00:36:14.005349 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 00:36:14.008790 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 00:36:14.017748 kubelet[2813]: I0430 00:36:14.017165 2813 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:36:14.017748 kubelet[2813]: I0430 00:36:14.017414 2813 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:36:14.017748 kubelet[2813]: I0430 00:36:14.017427 2813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:36:14.017748 kubelet[2813]: I0430 00:36:14.017676 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:36:14.020585 kubelet[2813]: E0430 00:36:14.020542 2813 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 00:36:14.020728 kubelet[2813]: E0430 00:36:14.020596 2813 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-a-cee67ba5b3\" not found" Apr 30 00:36:14.049583 systemd[1]: Created slice kubepods-burstable-pod96cb00233e7f1ec7a23faacd7d3d51de.slice - libcontainer container kubepods-burstable-pod96cb00233e7f1ec7a23faacd7d3d51de.slice. 
Apr 30 00:36:14.057238 kubelet[2813]: I0430 00:36:14.057188 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96cb00233e7f1ec7a23faacd7d3d51de-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-cee67ba5b3\" (UID: \"96cb00233e7f1ec7a23faacd7d3d51de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.057238 kubelet[2813]: I0430 00:36:14.057237 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96cb00233e7f1ec7a23faacd7d3d51de-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-cee67ba5b3\" (UID: \"96cb00233e7f1ec7a23faacd7d3d51de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.057238 kubelet[2813]: I0430 00:36:14.057284 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96cb00233e7f1ec7a23faacd7d3d51de-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-cee67ba5b3\" (UID: \"96cb00233e7f1ec7a23faacd7d3d51de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.057238 kubelet[2813]: I0430 00:36:14.057305 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/921d381d150cadc6989097be34d26eea-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-cee67ba5b3\" (UID: \"921d381d150cadc6989097be34d26eea\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.057238 kubelet[2813]: I0430 00:36:14.057356 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/921d381d150cadc6989097be34d26eea-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-cee67ba5b3\" (UID: \"921d381d150cadc6989097be34d26eea\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.057707 kubelet[2813]: I0430 00:36:14.057392 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/96cb00233e7f1ec7a23faacd7d3d51de-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-cee67ba5b3\" (UID: \"96cb00233e7f1ec7a23faacd7d3d51de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.057707 kubelet[2813]: I0430 00:36:14.057414 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48aadcb5d516940428cbc66e559dd25b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-cee67ba5b3\" (UID: \"48aadcb5d516940428cbc66e559dd25b\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.057707 kubelet[2813]: I0430 00:36:14.057431 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/921d381d150cadc6989097be34d26eea-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-cee67ba5b3\" (UID: \"921d381d150cadc6989097be34d26eea\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.057707 kubelet[2813]: I0430 00:36:14.057448 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96cb00233e7f1ec7a23faacd7d3d51de-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-cee67ba5b3\" (UID: \"96cb00233e7f1ec7a23faacd7d3d51de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.068853 kubelet[2813]: E0430 00:36:14.068720 2813 
kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found" node="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.072499 systemd[1]: Created slice kubepods-burstable-pod48aadcb5d516940428cbc66e559dd25b.slice - libcontainer container kubepods-burstable-pod48aadcb5d516940428cbc66e559dd25b.slice. Apr 30 00:36:14.082087 kubelet[2813]: E0430 00:36:14.082009 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found" node="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.085375 systemd[1]: Created slice kubepods-burstable-pod921d381d150cadc6989097be34d26eea.slice - libcontainer container kubepods-burstable-pod921d381d150cadc6989097be34d26eea.slice. Apr 30 00:36:14.087849 kubelet[2813]: E0430 00:36:14.087812 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found" node="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.120148 kubelet[2813]: I0430 00:36:14.120031 2813 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.121430 kubelet[2813]: E0430 00:36:14.121378 2813 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.255849 kubelet[2813]: E0430 00:36:14.255798 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-cee67ba5b3?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="800ms" Apr 30 00:36:14.323822 kubelet[2813]: I0430 00:36:14.323765 2813 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-cee67ba5b3" Apr 30 
00:36:14.324176 kubelet[2813]: E0430 00:36:14.324148 2813 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.370602 containerd[1728]: time="2025-04-30T00:36:14.370424754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-cee67ba5b3,Uid:96cb00233e7f1ec7a23faacd7d3d51de,Namespace:kube-system,Attempt:0,}" Apr 30 00:36:14.383454 containerd[1728]: time="2025-04-30T00:36:14.383120614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-cee67ba5b3,Uid:48aadcb5d516940428cbc66e559dd25b,Namespace:kube-system,Attempt:0,}" Apr 30 00:36:14.389027 containerd[1728]: time="2025-04-30T00:36:14.388803663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-cee67ba5b3,Uid:921d381d150cadc6989097be34d26eea,Namespace:kube-system,Attempt:0,}" Apr 30 00:36:14.668776 kubelet[2813]: W0430 00:36:14.668694 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Apr 30 00:36:14.668983 kubelet[2813]: E0430 00:36:14.668798 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:36:14.727106 kubelet[2813]: I0430 00:36:14.727058 2813 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.727619 kubelet[2813]: E0430 00:36:14.727564 2813 kubelet_node_status.go:108] "Unable to register node 
with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:14.950953 kubelet[2813]: W0430 00:36:14.950744 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-cee67ba5b3&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Apr 30 00:36:14.950953 kubelet[2813]: E0430 00:36:14.950818 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-cee67ba5b3&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:36:15.006789 kubelet[2813]: W0430 00:36:15.006730 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Apr 30 00:36:15.006926 kubelet[2813]: E0430 00:36:15.006799 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:36:15.057053 kubelet[2813]: E0430 00:36:15.056989 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-cee67ba5b3?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="1.6s" Apr 30 
00:36:15.283354 kubelet[2813]: W0430 00:36:15.283155 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Apr 30 00:36:15.283354 kubelet[2813]: E0430 00:36:15.283211 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:36:15.530126 kubelet[2813]: I0430 00:36:15.530081 2813 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:15.530560 kubelet[2813]: E0430 00:36:15.530534 2813 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:36:15.565861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695824435.mount: Deactivated successfully. 
Apr 30 00:36:15.616303 containerd[1728]: time="2025-04-30T00:36:15.615479162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:36:15.619085 containerd[1728]: time="2025-04-30T00:36:15.619038088Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 30 00:36:15.622333 containerd[1728]: time="2025-04-30T00:36:15.622295093Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:36:15.626805 containerd[1728]: time="2025-04-30T00:36:15.626051819Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:36:15.630234 containerd[1728]: time="2025-04-30T00:36:15.630192706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:36:15.635757 containerd[1728]: time="2025-04-30T00:36:15.634837393Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:36:15.641384 containerd[1728]: time="2025-04-30T00:36:15.641336443Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:36:15.649409 containerd[1728]: time="2025-04-30T00:36:15.649358936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:36:15.650664 
containerd[1728]: time="2025-04-30T00:36:15.650182537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.261298314s" Apr 30 00:36:15.651945 containerd[1728]: time="2025-04-30T00:36:15.651909940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.281399066s" Apr 30 00:36:15.669424 containerd[1728]: time="2025-04-30T00:36:15.669373008Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.286170674s" Apr 30 00:36:15.745712 kubelet[2813]: E0430 00:36:15.745646 2813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:36:16.354051 containerd[1728]: time="2025-04-30T00:36:16.353933810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:36:16.354402 containerd[1728]: time="2025-04-30T00:36:16.354115770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:36:16.355745 containerd[1728]: time="2025-04-30T00:36:16.355616213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:36:16.358370 containerd[1728]: time="2025-04-30T00:36:16.356642774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:36:16.361178 containerd[1728]: time="2025-04-30T00:36:16.360688301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:36:16.361178 containerd[1728]: time="2025-04-30T00:36:16.360765701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:36:16.361439 containerd[1728]: time="2025-04-30T00:36:16.360994141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:36:16.361439 containerd[1728]: time="2025-04-30T00:36:16.361031261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:36:16.361439 containerd[1728]: time="2025-04-30T00:36:16.361047981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:36:16.361439 containerd[1728]: time="2025-04-30T00:36:16.361116821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:36:16.361571 containerd[1728]: time="2025-04-30T00:36:16.360781781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:36:16.361571 containerd[1728]: time="2025-04-30T00:36:16.360997421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:36:16.392578 systemd[1]: Started cri-containerd-63799d24a4d926becfcbe8a8d0bc33b1d4940a4360e95efe679a9a110e40e519.scope - libcontainer container 63799d24a4d926becfcbe8a8d0bc33b1d4940a4360e95efe679a9a110e40e519. Apr 30 00:36:16.395004 systemd[1]: Started cri-containerd-fa4bcba142440ce5057401ab2471ba1a06dc3bc180009ccf7841ea6c5e0384cf.scope - libcontainer container fa4bcba142440ce5057401ab2471ba1a06dc3bc180009ccf7841ea6c5e0384cf. Apr 30 00:36:16.400447 systemd[1]: Started cri-containerd-8920d93b931381c3647b62c1a9c2ad345df8a200c66e6b3c57e616f52ea48c4c.scope - libcontainer container 8920d93b931381c3647b62c1a9c2ad345df8a200c66e6b3c57e616f52ea48c4c. 
Apr 30 00:36:16.455066 containerd[1728]: time="2025-04-30T00:36:16.454567089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-cee67ba5b3,Uid:921d381d150cadc6989097be34d26eea,Namespace:kube-system,Attempt:0,} returns sandbox id \"63799d24a4d926becfcbe8a8d0bc33b1d4940a4360e95efe679a9a110e40e519\""
Apr 30 00:36:16.459723 containerd[1728]: time="2025-04-30T00:36:16.459576017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-cee67ba5b3,Uid:96cb00233e7f1ec7a23faacd7d3d51de,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa4bcba142440ce5057401ab2471ba1a06dc3bc180009ccf7841ea6c5e0384cf\""
Apr 30 00:36:16.460333 containerd[1728]: time="2025-04-30T00:36:16.460279258Z" level=info msg="CreateContainer within sandbox \"63799d24a4d926becfcbe8a8d0bc33b1d4940a4360e95efe679a9a110e40e519\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 30 00:36:16.463130 containerd[1728]: time="2025-04-30T00:36:16.463006822Z" level=info msg="CreateContainer within sandbox \"fa4bcba142440ce5057401ab2471ba1a06dc3bc180009ccf7841ea6c5e0384cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 30 00:36:16.464441 containerd[1728]: time="2025-04-30T00:36:16.464363305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-cee67ba5b3,Uid:48aadcb5d516940428cbc66e559dd25b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8920d93b931381c3647b62c1a9c2ad345df8a200c66e6b3c57e616f52ea48c4c\""
Apr 30 00:36:16.467962 containerd[1728]: time="2025-04-30T00:36:16.467865670Z" level=info msg="CreateContainer within sandbox \"8920d93b931381c3647b62c1a9c2ad345df8a200c66e6b3c57e616f52ea48c4c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 30 00:36:16.538980 containerd[1728]: time="2025-04-30T00:36:16.538924223Z" level=info msg="CreateContainer within sandbox \"63799d24a4d926becfcbe8a8d0bc33b1d4940a4360e95efe679a9a110e40e519\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f4f2a40c09cbb961ba8a78bbeaf93acf46c04dfc5e913c36299452a1e684a4aa\""
Apr 30 00:36:16.539791 containerd[1728]: time="2025-04-30T00:36:16.539730864Z" level=info msg="StartContainer for \"f4f2a40c09cbb961ba8a78bbeaf93acf46c04dfc5e913c36299452a1e684a4aa\""
Apr 30 00:36:16.559072 containerd[1728]: time="2025-04-30T00:36:16.558996334Z" level=info msg="CreateContainer within sandbox \"8920d93b931381c3647b62c1a9c2ad345df8a200c66e6b3c57e616f52ea48c4c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0cc635158e6f8eab292cab99a0d2d40de611be58f410fb9a008e5e87fb4fb65a\""
Apr 30 00:36:16.561606 containerd[1728]: time="2025-04-30T00:36:16.561463098Z" level=info msg="CreateContainer within sandbox \"fa4bcba142440ce5057401ab2471ba1a06dc3bc180009ccf7841ea6c5e0384cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6d17f55832d14ea1c9f57531967ba17787251fe3c4972bb2c7cd3b003c24488f\""
Apr 30 00:36:16.564401 containerd[1728]: time="2025-04-30T00:36:16.562593180Z" level=info msg="StartContainer for \"6d17f55832d14ea1c9f57531967ba17787251fe3c4972bb2c7cd3b003c24488f\""
Apr 30 00:36:16.564401 containerd[1728]: time="2025-04-30T00:36:16.562625700Z" level=info msg="StartContainer for \"0cc635158e6f8eab292cab99a0d2d40de611be58f410fb9a008e5e87fb4fb65a\""
Apr 30 00:36:16.573533 systemd[1]: Started cri-containerd-f4f2a40c09cbb961ba8a78bbeaf93acf46c04dfc5e913c36299452a1e684a4aa.scope - libcontainer container f4f2a40c09cbb961ba8a78bbeaf93acf46c04dfc5e913c36299452a1e684a4aa.
Apr 30 00:36:16.615459 systemd[1]: Started cri-containerd-6d17f55832d14ea1c9f57531967ba17787251fe3c4972bb2c7cd3b003c24488f.scope - libcontainer container 6d17f55832d14ea1c9f57531967ba17787251fe3c4972bb2c7cd3b003c24488f.
Apr 30 00:36:16.625599 systemd[1]: Started cri-containerd-0cc635158e6f8eab292cab99a0d2d40de611be58f410fb9a008e5e87fb4fb65a.scope - libcontainer container 0cc635158e6f8eab292cab99a0d2d40de611be58f410fb9a008e5e87fb4fb65a.
Apr 30 00:36:16.650802 containerd[1728]: time="2025-04-30T00:36:16.650758959Z" level=info msg="StartContainer for \"f4f2a40c09cbb961ba8a78bbeaf93acf46c04dfc5e913c36299452a1e684a4aa\" returns successfully"
Apr 30 00:36:16.658491 kubelet[2813]: E0430 00:36:16.658447 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-cee67ba5b3?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="3.2s"
Apr 30 00:36:16.692429 containerd[1728]: time="2025-04-30T00:36:16.692208905Z" level=info msg="StartContainer for \"6d17f55832d14ea1c9f57531967ba17787251fe3c4972bb2c7cd3b003c24488f\" returns successfully"
Apr 30 00:36:16.692429 containerd[1728]: time="2025-04-30T00:36:16.692208985Z" level=info msg="StartContainer for \"0cc635158e6f8eab292cab99a0d2d40de611be58f410fb9a008e5e87fb4fb65a\" returns successfully"
Apr 30 00:36:16.755602 kubelet[2813]: E0430 00:36:16.754087 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found" node="ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:16.759394 kubelet[2813]: E0430 00:36:16.759365 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found" node="ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:16.761995 kubelet[2813]: E0430 00:36:16.761852 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found" node="ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:17.135340 kubelet[2813]: I0430 00:36:17.133695 2813 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:17.765666 kubelet[2813]: E0430 00:36:17.765155 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found" node="ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:17.765666 kubelet[2813]: E0430 00:36:17.765524 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found" node="ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:19.226943 kubelet[2813]: I0430 00:36:19.226838 2813 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:19.226943 kubelet[2813]: E0430 00:36:19.226882 2813 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4081.3.3-a-cee67ba5b3\": node \"ci-4081.3.3-a-cee67ba5b3\" not found"
Apr 30 00:36:19.250670 kubelet[2813]: I0430 00:36:19.250403 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:19.283089 kubelet[2813]: E0430 00:36:19.282798 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-a-cee67ba5b3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:19.283089 kubelet[2813]: I0430 00:36:19.282836 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:19.287794 kubelet[2813]: E0430 00:36:19.287742 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-a-cee67ba5b3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:19.287794 kubelet[2813]: I0430 00:36:19.287781 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:19.290719 kubelet[2813]: E0430 00:36:19.290677 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-a-cee67ba5b3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:19.595078 kubelet[2813]: I0430 00:36:19.594227 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:19.598900 kubelet[2813]: E0430 00:36:19.598677 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-a-cee67ba5b3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:19.633356 kubelet[2813]: I0430 00:36:19.633322 2813 apiserver.go:52] "Watching apiserver"
Apr 30 00:36:19.655658 kubelet[2813]: I0430 00:36:19.655607 2813 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 00:36:21.705752 systemd[1]: Reloading requested from client PID 3084 ('systemctl') (unit session-9.scope)...
Apr 30 00:36:22.819380 zram_generator::config[3125]: No configuration found.
Apr 30 00:36:22.819539 kubelet[2813]: I0430 00:36:21.965190 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:22.819539 kubelet[2813]: W0430 00:36:21.972879 2813 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:36:21.705773 systemd[1]: Reloading...
Apr 30 00:36:21.914339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:36:22.011171 systemd[1]: Reloading finished in 305 ms.
Apr 30 00:36:22.046197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:36:22.055840 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 00:36:22.056044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:22.056100 systemd[1]: kubelet.service: Consumed 1.353s CPU time, 121.9M memory peak, 0B memory swap peak.
Apr 30 00:36:22.065695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:36:22.910582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:22.923744 (kubelet)[3188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:36:22.964834 kubelet[3188]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:36:22.964834 kubelet[3188]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:36:22.964834 kubelet[3188]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:36:22.965317 kubelet[3188]: I0430 00:36:22.965219 3188 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:36:22.976169 kubelet[3188]: I0430 00:36:22.976109 3188 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Apr 30 00:36:22.976474 kubelet[3188]: I0430 00:36:22.976257 3188 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:36:22.978414 kubelet[3188]: I0430 00:36:22.978380 3188 server.go:954] "Client rotation is on, will bootstrap in background"
Apr 30 00:36:22.979796 kubelet[3188]: I0430 00:36:22.979769 3188 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 30 00:36:22.985637 kubelet[3188]: I0430 00:36:22.985176 3188 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:36:22.988204 kubelet[3188]: E0430 00:36:22.988166 3188 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 00:36:22.988204 kubelet[3188]: I0430 00:36:22.988204 3188 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 00:36:22.992635 kubelet[3188]: I0430 00:36:22.992595 3188 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:36:22.993077 kubelet[3188]: I0430 00:36:22.993018 3188 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:36:22.993316 kubelet[3188]: I0430 00:36:22.993056 3188 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-cee67ba5b3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 00:36:22.993439 kubelet[3188]: I0430 00:36:22.993327 3188 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:36:22.993439 kubelet[3188]: I0430 00:36:22.993337 3188 container_manager_linux.go:304] "Creating device plugin manager"
Apr 30 00:36:22.993439 kubelet[3188]: I0430 00:36:22.993383 3188 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:36:22.993539 kubelet[3188]: I0430 00:36:22.993521 3188 kubelet.go:446] "Attempting to sync node with API server"
Apr 30 00:36:22.993566 kubelet[3188]: I0430 00:36:22.993539 3188 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:36:22.994001 kubelet[3188]: I0430 00:36:22.993982 3188 kubelet.go:352] "Adding apiserver pod source"
Apr 30 00:36:22.994001 kubelet[3188]: I0430 00:36:22.994003 3188 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:36:23.004340 kubelet[3188]: I0430 00:36:22.999527 3188 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 00:36:23.004340 kubelet[3188]: I0430 00:36:23.000144 3188 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:36:23.004340 kubelet[3188]: I0430 00:36:23.001181 3188 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 30 00:36:23.004340 kubelet[3188]: I0430 00:36:23.001254 3188 server.go:1287] "Started kubelet"
Apr 30 00:36:23.004537 kubelet[3188]: I0430 00:36:23.004355 3188 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:36:23.010096 kubelet[3188]: I0430 00:36:23.009954 3188 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:36:23.013277 kubelet[3188]: I0430 00:36:23.011097 3188 server.go:490] "Adding debug handlers to kubelet server"
Apr 30 00:36:23.014503 kubelet[3188]: I0430 00:36:23.014457 3188 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:36:23.014788 kubelet[3188]: I0430 00:36:23.014774 3188 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:36:23.015091 kubelet[3188]: I0430 00:36:23.015075 3188 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 00:36:23.018269 kubelet[3188]: I0430 00:36:23.016208 3188 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 30 00:36:23.018657 kubelet[3188]: E0430 00:36:23.018637 3188 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-cee67ba5b3\" not found"
Apr 30 00:36:23.034466 kubelet[3188]: I0430 00:36:23.033720 3188 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:36:23.034770 kubelet[3188]: I0430 00:36:23.034755 3188 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:36:23.045411 kubelet[3188]: I0430 00:36:23.045359 3188 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 00:36:23.047412 kubelet[3188]: I0430 00:36:23.047381 3188 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 00:36:23.047577 kubelet[3188]: I0430 00:36:23.047566 3188 status_manager.go:227] "Starting to sync pod status with apiserver"
Apr 30 00:36:23.047637 kubelet[3188]: I0430 00:36:23.047629 3188 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 30 00:36:23.047725 kubelet[3188]: I0430 00:36:23.047715 3188 kubelet.go:2388] "Starting kubelet main sync loop"
Apr 30 00:36:23.047846 kubelet[3188]: E0430 00:36:23.047826 3188 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:36:23.069479 kubelet[3188]: I0430 00:36:23.069437 3188 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:36:23.070977 kubelet[3188]: I0430 00:36:23.069810 3188 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:36:23.071560 kubelet[3188]: I0430 00:36:23.071372 3188 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:36:23.091866 kubelet[3188]: E0430 00:36:23.071748 3188 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 00:36:23.126281 kubelet[3188]: I0430 00:36:23.126207 3188 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 30 00:36:23.126463 kubelet[3188]: I0430 00:36:23.126249 3188 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 30 00:36:23.126463 kubelet[3188]: I0430 00:36:23.126353 3188 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:36:23.126762 kubelet[3188]: I0430 00:36:23.126727 3188 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 00:36:23.126801 kubelet[3188]: I0430 00:36:23.126757 3188 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 00:36:23.126801 kubelet[3188]: I0430 00:36:23.126782 3188 policy_none.go:49] "None policy: Start"
Apr 30 00:36:23.126801 kubelet[3188]: I0430 00:36:23.126795 3188 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 30 00:36:23.126924 kubelet[3188]: I0430 00:36:23.126805 3188 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:36:23.126957 kubelet[3188]: I0430 00:36:23.126931 3188 state_mem.go:75] "Updated machine memory state"
Apr 30 00:36:23.132145 kubelet[3188]: I0430 00:36:23.132107 3188 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:36:23.132367 kubelet[3188]: I0430 00:36:23.132344 3188 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 30 00:36:23.132414 kubelet[3188]: I0430 00:36:23.132365 3188 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:36:23.134333 kubelet[3188]: I0430 00:36:23.133117 3188 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:36:23.137168 kubelet[3188]: E0430 00:36:23.135167 3188 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 30 00:36:23.148897 kubelet[3188]: I0430 00:36:23.148806 3188 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.149601 kubelet[3188]: I0430 00:36:23.149479 3188 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.150333 kubelet[3188]: I0430 00:36:23.150300 3188 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.162853 kubelet[3188]: W0430 00:36:23.162802 3188 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:36:23.163070 kubelet[3188]: W0430 00:36:23.162883 3188 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:36:23.164790 kubelet[3188]: W0430 00:36:23.164114 3188 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:36:23.164790 kubelet[3188]: E0430 00:36:23.164187 3188 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-a-cee67ba5b3\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.235676 kubelet[3188]: I0430 00:36:23.235460 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/96cb00233e7f1ec7a23faacd7d3d51de-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-cee67ba5b3\" (UID: \"96cb00233e7f1ec7a23faacd7d3d51de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.235676 kubelet[3188]: I0430 00:36:23.235494 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96cb00233e7f1ec7a23faacd7d3d51de-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-cee67ba5b3\" (UID: \"96cb00233e7f1ec7a23faacd7d3d51de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.235676 kubelet[3188]: I0430 00:36:23.235515 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96cb00233e7f1ec7a23faacd7d3d51de-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-cee67ba5b3\" (UID: \"96cb00233e7f1ec7a23faacd7d3d51de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.235676 kubelet[3188]: I0430 00:36:23.235532 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/921d381d150cadc6989097be34d26eea-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-cee67ba5b3\" (UID: \"921d381d150cadc6989097be34d26eea\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.235676 kubelet[3188]: I0430 00:36:23.235548 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/921d381d150cadc6989097be34d26eea-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-cee67ba5b3\" (UID: \"921d381d150cadc6989097be34d26eea\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.235907 kubelet[3188]: I0430 00:36:23.235563 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96cb00233e7f1ec7a23faacd7d3d51de-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-cee67ba5b3\" (UID: \"96cb00233e7f1ec7a23faacd7d3d51de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.235907 kubelet[3188]: I0430 00:36:23.235578 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/96cb00233e7f1ec7a23faacd7d3d51de-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-cee67ba5b3\" (UID: \"96cb00233e7f1ec7a23faacd7d3d51de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.235907 kubelet[3188]: I0430 00:36:23.235592 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48aadcb5d516940428cbc66e559dd25b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-cee67ba5b3\" (UID: \"48aadcb5d516940428cbc66e559dd25b\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.235907 kubelet[3188]: I0430 00:36:23.235608 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/921d381d150cadc6989097be34d26eea-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-cee67ba5b3\" (UID: \"921d381d150cadc6989097be34d26eea\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.243302 kubelet[3188]: I0430 00:36:23.243020 3188 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.254681 kubelet[3188]: I0430 00:36:23.254414 3188 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.254681 kubelet[3188]: I0430 00:36:23.254504 3188 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:23.999060 kubelet[3188]: I0430 00:36:23.998818 3188 apiserver.go:52] "Watching apiserver"
Apr 30 00:36:24.035373 kubelet[3188]: I0430 00:36:24.035338 3188 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 00:36:24.105801 kubelet[3188]: I0430 00:36:24.105331 3188 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:24.114897 kubelet[3188]: W0430 00:36:24.114549 3188 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:36:24.114897 kubelet[3188]: E0430 00:36:24.114690 3188 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-a-cee67ba5b3\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3"
Apr 30 00:36:24.125514 kubelet[3188]: I0430 00:36:24.125422 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-a-cee67ba5b3" podStartSLOduration=3.12537408 podStartE2EDuration="3.12537408s" podCreationTimestamp="2025-04-30 00:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:36:24.12536644 +0000 UTC m=+1.197440027" watchObservedRunningTime="2025-04-30 00:36:24.12537408 +0000 UTC m=+1.197447667"
Apr 30 00:36:24.147323 kubelet[3188]: I0430 00:36:24.146942 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-a-cee67ba5b3" podStartSLOduration=1.146920985 podStartE2EDuration="1.146920985s" podCreationTimestamp="2025-04-30 00:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:36:24.136412293 +0000 UTC m=+1.208485920" watchObservedRunningTime="2025-04-30 00:36:24.146920985 +0000 UTC m=+1.218994572"
Apr 30 00:36:24.159560 kubelet[3188]: I0430 00:36:24.159498 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-cee67ba5b3" podStartSLOduration=1.159482559 podStartE2EDuration="1.159482559s" podCreationTimestamp="2025-04-30 00:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:36:24.147535105 +0000 UTC m=+1.219608692" watchObservedRunningTime="2025-04-30 00:36:24.159482559 +0000 UTC m=+1.231556106"
Apr 30 00:36:28.951700 kubelet[3188]: I0430 00:36:28.951497 3188 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 00:36:28.952183 containerd[1728]: time="2025-04-30T00:36:28.951905993Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 00:36:28.956297 kubelet[3188]: I0430 00:36:28.954935 3188 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 00:36:29.420924 systemd[1]: Created slice kubepods-besteffort-pod8a969d8b_bce4_40ab_b74b_b750c83fe3ca.slice - libcontainer container kubepods-besteffort-pod8a969d8b_bce4_40ab_b74b_b750c83fe3ca.slice.
Apr 30 00:36:29.478358 kubelet[3188]: I0430 00:36:29.477869 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a969d8b-bce4-40ab-b74b-b750c83fe3ca-kube-proxy\") pod \"kube-proxy-rsf7q\" (UID: \"8a969d8b-bce4-40ab-b74b-b750c83fe3ca\") " pod="kube-system/kube-proxy-rsf7q"
Apr 30 00:36:29.478358 kubelet[3188]: I0430 00:36:29.478015 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a969d8b-bce4-40ab-b74b-b750c83fe3ca-lib-modules\") pod \"kube-proxy-rsf7q\" (UID: \"8a969d8b-bce4-40ab-b74b-b750c83fe3ca\") " pod="kube-system/kube-proxy-rsf7q"
Apr 30 00:36:29.478358 kubelet[3188]: I0430 00:36:29.478043 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8spz\" (UniqueName: \"kubernetes.io/projected/8a969d8b-bce4-40ab-b74b-b750c83fe3ca-kube-api-access-l8spz\") pod \"kube-proxy-rsf7q\" (UID: \"8a969d8b-bce4-40ab-b74b-b750c83fe3ca\") " pod="kube-system/kube-proxy-rsf7q"
Apr 30 00:36:29.478358 kubelet[3188]: I0430 00:36:29.478071 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a969d8b-bce4-40ab-b74b-b750c83fe3ca-xtables-lock\") pod \"kube-proxy-rsf7q\" (UID: \"8a969d8b-bce4-40ab-b74b-b750c83fe3ca\") " pod="kube-system/kube-proxy-rsf7q"
Apr 30 00:36:29.588578 kubelet[3188]: E0430 00:36:29.588533 3188 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 30 00:36:29.588578 kubelet[3188]: E0430 00:36:29.588576 3188 projected.go:194] Error preparing data for projected volume kube-api-access-l8spz for pod kube-system/kube-proxy-rsf7q: configmap "kube-root-ca.crt" not found
Apr 30 00:36:29.588756 kubelet[3188]: E0430 00:36:29.588650 3188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a969d8b-bce4-40ab-b74b-b750c83fe3ca-kube-api-access-l8spz podName:8a969d8b-bce4-40ab-b74b-b750c83fe3ca nodeName:}" failed. No retries permitted until 2025-04-30 00:36:30.088628549 +0000 UTC m=+7.160702136 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l8spz" (UniqueName: "kubernetes.io/projected/8a969d8b-bce4-40ab-b74b-b750c83fe3ca-kube-api-access-l8spz") pod "kube-proxy-rsf7q" (UID: "8a969d8b-bce4-40ab-b74b-b750c83fe3ca") : configmap "kube-root-ca.crt" not found
Apr 30 00:36:30.038980 systemd[1]: Created slice kubepods-besteffort-pod62e7897b_3cc4_4659_a409_e2c9e206342d.slice - libcontainer container kubepods-besteffort-pod62e7897b_3cc4_4659_a409_e2c9e206342d.slice.
Apr 30 00:36:30.083554 kubelet[3188]: I0430 00:36:30.083505 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/62e7897b-3cc4-4659-a409-e2c9e206342d-var-lib-calico\") pod \"tigera-operator-789496d6f5-pjbpn\" (UID: \"62e7897b-3cc4-4659-a409-e2c9e206342d\") " pod="tigera-operator/tigera-operator-789496d6f5-pjbpn"
Apr 30 00:36:30.084742 kubelet[3188]: I0430 00:36:30.084112 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chkx4\" (UniqueName: \"kubernetes.io/projected/62e7897b-3cc4-4659-a409-e2c9e206342d-kube-api-access-chkx4\") pod \"tigera-operator-789496d6f5-pjbpn\" (UID: \"62e7897b-3cc4-4659-a409-e2c9e206342d\") " pod="tigera-operator/tigera-operator-789496d6f5-pjbpn"
Apr 30 00:36:30.329250 containerd[1728]: time="2025-04-30T00:36:30.328844734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rsf7q,Uid:8a969d8b-bce4-40ab-b74b-b750c83fe3ca,Namespace:kube-system,Attempt:0,}"
Apr 30 00:36:30.352023 containerd[1728]: time="2025-04-30T00:36:30.351667495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-pjbpn,Uid:62e7897b-3cc4-4659-a409-e2c9e206342d,Namespace:tigera-operator,Attempt:0,}"
Apr 30 00:36:30.433560 containerd[1728]: time="2025-04-30T00:36:30.433198763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:36:30.433560 containerd[1728]: time="2025-04-30T00:36:30.433256003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:36:30.433560 containerd[1728]: time="2025-04-30T00:36:30.433282483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:36:30.433560 containerd[1728]: time="2025-04-30T00:36:30.433364003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:36:30.465180 systemd[1]: Started cri-containerd-224768d4c86670d89dd707e57fed9d4c1315c4f8f96bab28d956d18534f64313.scope - libcontainer container 224768d4c86670d89dd707e57fed9d4c1315c4f8f96bab28d956d18534f64313.
Apr 30 00:36:30.486949 containerd[1728]: time="2025-04-30T00:36:30.486511940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:36:30.487283 containerd[1728]: time="2025-04-30T00:36:30.486580060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:36:30.487283 containerd[1728]: time="2025-04-30T00:36:30.486993981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:36:30.487283 containerd[1728]: time="2025-04-30T00:36:30.487189861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:36:30.495150 containerd[1728]: time="2025-04-30T00:36:30.495083475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rsf7q,Uid:8a969d8b-bce4-40ab-b74b-b750c83fe3ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"224768d4c86670d89dd707e57fed9d4c1315c4f8f96bab28d956d18534f64313\""
Apr 30 00:36:30.500419 containerd[1728]: time="2025-04-30T00:36:30.500374005Z" level=info msg="CreateContainer within sandbox \"224768d4c86670d89dd707e57fed9d4c1315c4f8f96bab28d956d18534f64313\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 00:36:30.514478 systemd[1]: Started cri-containerd-ffcde2036851921ec9b8b585574083a88a38d03ab944c2eec1245874f14e2845.scope - libcontainer container ffcde2036851921ec9b8b585574083a88a38d03ab944c2eec1245874f14e2845.
Apr 30 00:36:30.546632 containerd[1728]: time="2025-04-30T00:36:30.546582249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-pjbpn,Uid:62e7897b-3cc4-4659-a409-e2c9e206342d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ffcde2036851921ec9b8b585574083a88a38d03ab944c2eec1245874f14e2845\""
Apr 30 00:36:30.549642 containerd[1728]: time="2025-04-30T00:36:30.549587814Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
Apr 30 00:36:30.571382 containerd[1728]: time="2025-04-30T00:36:30.571330814Z" level=info msg="CreateContainer within sandbox \"224768d4c86670d89dd707e57fed9d4c1315c4f8f96bab28d956d18534f64313\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0bf14615d67ff8169210d54d46bc86ad3ed98b29b635f50d48ee2494e3f321b\""
Apr 30 00:36:30.572983 containerd[1728]: time="2025-04-30T00:36:30.571897935Z" level=info msg="StartContainer for \"f0bf14615d67ff8169210d54d46bc86ad3ed98b29b635f50d48ee2494e3f321b\""
Apr 30 00:36:30.595537 systemd[1]: Started cri-containerd-f0bf14615d67ff8169210d54d46bc86ad3ed98b29b635f50d48ee2494e3f321b.scope - libcontainer
container f0bf14615d67ff8169210d54d46bc86ad3ed98b29b635f50d48ee2494e3f321b. Apr 30 00:36:30.628708 containerd[1728]: time="2025-04-30T00:36:30.628546118Z" level=info msg="StartContainer for \"f0bf14615d67ff8169210d54d46bc86ad3ed98b29b635f50d48ee2494e3f321b\" returns successfully" Apr 30 00:36:31.160454 kubelet[3188]: I0430 00:36:31.160385 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rsf7q" podStartSLOduration=2.160364404 podStartE2EDuration="2.160364404s" podCreationTimestamp="2025-04-30 00:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:36:31.142347331 +0000 UTC m=+8.214420918" watchObservedRunningTime="2025-04-30 00:36:31.160364404 +0000 UTC m=+8.232437991" Apr 30 00:36:31.538294 sudo[2300]: pam_unix(sudo:session): session closed for user root Apr 30 00:36:31.619448 sshd[2297]: pam_unix(sshd:session): session closed for user core Apr 30 00:36:31.624373 systemd[1]: sshd@6-10.200.20.34:22-10.200.16.10:58614.service: Deactivated successfully. Apr 30 00:36:31.627934 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:36:31.628508 systemd[1]: session-9.scope: Consumed 7.699s CPU time, 147.9M memory peak, 0B memory swap peak. Apr 30 00:36:31.629884 systemd-logind[1694]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:36:31.631529 systemd-logind[1694]: Removed session 9. Apr 30 00:36:36.252520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount220747469.mount: Deactivated successfully. 
Apr 30 00:36:36.782536 containerd[1728]: time="2025-04-30T00:36:36.782480788Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:36:36.786003 containerd[1728]: time="2025-04-30T00:36:36.785824434Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
Apr 30 00:36:36.791285 containerd[1728]: time="2025-04-30T00:36:36.791196843Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:36:36.799292 containerd[1728]: time="2025-04-30T00:36:36.799206456Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:36:36.800126 containerd[1728]: time="2025-04-30T00:36:36.799989617Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 6.250236642s"
Apr 30 00:36:36.800126 containerd[1728]: time="2025-04-30T00:36:36.800026178Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
Apr 30 00:36:36.804822 containerd[1728]: time="2025-04-30T00:36:36.804689105Z" level=info msg="CreateContainer within sandbox \"ffcde2036851921ec9b8b585574083a88a38d03ab944c2eec1245874f14e2845\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 30 00:36:36.852313 containerd[1728]: time="2025-04-30T00:36:36.852236385Z" level=info msg="CreateContainer within sandbox \"ffcde2036851921ec9b8b585574083a88a38d03ab944c2eec1245874f14e2845\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"10b0ca3f2e0b887a373303da2de9b8bdf55bb248879baa3ef07bb24c292f458e\""
Apr 30 00:36:36.853457 containerd[1728]: time="2025-04-30T00:36:36.853422747Z" level=info msg="StartContainer for \"10b0ca3f2e0b887a373303da2de9b8bdf55bb248879baa3ef07bb24c292f458e\""
Apr 30 00:36:36.884521 systemd[1]: Started cri-containerd-10b0ca3f2e0b887a373303da2de9b8bdf55bb248879baa3ef07bb24c292f458e.scope - libcontainer container 10b0ca3f2e0b887a373303da2de9b8bdf55bb248879baa3ef07bb24c292f458e.
Apr 30 00:36:36.919836 containerd[1728]: time="2025-04-30T00:36:36.919703938Z" level=info msg="StartContainer for \"10b0ca3f2e0b887a373303da2de9b8bdf55bb248879baa3ef07bb24c292f458e\" returns successfully"
Apr 30 00:36:37.231910 systemd[1]: run-containerd-runc-k8s.io-10b0ca3f2e0b887a373303da2de9b8bdf55bb248879baa3ef07bb24c292f458e-runc.t1NEak.mount: Deactivated successfully.
Apr 30 00:36:40.311227 kubelet[3188]: I0430 00:36:40.309807 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-pjbpn" podStartSLOduration=5.05653289 podStartE2EDuration="11.309783138s" podCreationTimestamp="2025-04-30 00:36:29 +0000 UTC" firstStartedPulling="2025-04-30 00:36:30.548214412 +0000 UTC m=+7.620287999" lastFinishedPulling="2025-04-30 00:36:36.80146466 +0000 UTC m=+13.873538247" observedRunningTime="2025-04-30 00:36:37.143825074 +0000 UTC m=+14.215898661" watchObservedRunningTime="2025-04-30 00:36:40.309783138 +0000 UTC m=+17.381856725"
Apr 30 00:36:40.321215 systemd[1]: Created slice kubepods-besteffort-podb026fe82_d7b8_4624_9574_97f33f3ed5ea.slice - libcontainer container kubepods-besteffort-podb026fe82_d7b8_4624_9574_97f33f3ed5ea.slice.
Apr 30 00:36:40.356295 kubelet[3188]: I0430 00:36:40.355850 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b026fe82-d7b8-4624-9574-97f33f3ed5ea-typha-certs\") pod \"calico-typha-76b9bb856d-2bll2\" (UID: \"b026fe82-d7b8-4624-9574-97f33f3ed5ea\") " pod="calico-system/calico-typha-76b9bb856d-2bll2"
Apr 30 00:36:40.356295 kubelet[3188]: I0430 00:36:40.355912 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b026fe82-d7b8-4624-9574-97f33f3ed5ea-tigera-ca-bundle\") pod \"calico-typha-76b9bb856d-2bll2\" (UID: \"b026fe82-d7b8-4624-9574-97f33f3ed5ea\") " pod="calico-system/calico-typha-76b9bb856d-2bll2"
Apr 30 00:36:40.356295 kubelet[3188]: I0430 00:36:40.355939 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvr4l\" (UniqueName: \"kubernetes.io/projected/b026fe82-d7b8-4624-9574-97f33f3ed5ea-kube-api-access-bvr4l\") pod \"calico-typha-76b9bb856d-2bll2\" (UID: \"b026fe82-d7b8-4624-9574-97f33f3ed5ea\") " pod="calico-system/calico-typha-76b9bb856d-2bll2"
Apr 30 00:36:40.532372 systemd[1]: Created slice kubepods-besteffort-podcc2e8170_49d5_4133_a6d4_51ce14ea39ab.slice - libcontainer container kubepods-besteffort-podcc2e8170_49d5_4133_a6d4_51ce14ea39ab.slice.
Apr 30 00:36:40.557538 kubelet[3188]: I0430 00:36:40.557482 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-node-certs\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.557538 kubelet[3188]: I0430 00:36:40.557534 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-flexvol-driver-host\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.557711 kubelet[3188]: I0430 00:36:40.557560 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-cni-log-dir\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.557711 kubelet[3188]: I0430 00:36:40.557581 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pf2l\" (UniqueName: \"kubernetes.io/projected/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-kube-api-access-8pf2l\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.557711 kubelet[3188]: I0430 00:36:40.557599 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-lib-modules\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.557711 kubelet[3188]: I0430 00:36:40.557615 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-var-run-calico\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.557711 kubelet[3188]: I0430 00:36:40.557630 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-cni-bin-dir\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.557828 kubelet[3188]: I0430 00:36:40.557645 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-xtables-lock\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.557828 kubelet[3188]: I0430 00:36:40.557661 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-policysync\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.557828 kubelet[3188]: I0430 00:36:40.557677 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-cni-net-dir\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.557828 kubelet[3188]: I0430 00:36:40.557696 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-tigera-ca-bundle\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.557828 kubelet[3188]: I0430 00:36:40.557714 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cc2e8170-49d5-4133-a6d4-51ce14ea39ab-var-lib-calico\") pod \"calico-node-lvlsj\" (UID: \"cc2e8170-49d5-4133-a6d4-51ce14ea39ab\") " pod="calico-system/calico-node-lvlsj"
Apr 30 00:36:40.626629 containerd[1728]: time="2025-04-30T00:36:40.626254908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76b9bb856d-2bll2,Uid:b026fe82-d7b8-4624-9574-97f33f3ed5ea,Namespace:calico-system,Attempt:0,}"
Apr 30 00:36:40.651064 kubelet[3188]: E0430 00:36:40.650673 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srbdg" podUID="7decfb7d-0a1b-482d-a161-616634f85838"
Apr 30 00:36:40.659372 kubelet[3188]: I0430 00:36:40.657884 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7decfb7d-0a1b-482d-a161-616634f85838-varrun\") pod \"csi-node-driver-srbdg\" (UID: \"7decfb7d-0a1b-482d-a161-616634f85838\") " pod="calico-system/csi-node-driver-srbdg"
Apr 30 00:36:40.659372 kubelet[3188]: I0430 00:36:40.657939 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7decfb7d-0a1b-482d-a161-616634f85838-socket-dir\") pod \"csi-node-driver-srbdg\" (UID: \"7decfb7d-0a1b-482d-a161-616634f85838\") " pod="calico-system/csi-node-driver-srbdg"
Apr 30 00:36:40.659372 kubelet[3188]: I0430 00:36:40.657960 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7decfb7d-0a1b-482d-a161-616634f85838-kubelet-dir\") pod \"csi-node-driver-srbdg\" (UID: \"7decfb7d-0a1b-482d-a161-616634f85838\") " pod="calico-system/csi-node-driver-srbdg"
Apr 30 00:36:40.659372 kubelet[3188]: I0430 00:36:40.657975 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7decfb7d-0a1b-482d-a161-616634f85838-registration-dir\") pod \"csi-node-driver-srbdg\" (UID: \"7decfb7d-0a1b-482d-a161-616634f85838\") " pod="calico-system/csi-node-driver-srbdg"
Apr 30 00:36:40.659372 kubelet[3188]: I0430 00:36:40.657991 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9gl\" (UniqueName: \"kubernetes.io/projected/7decfb7d-0a1b-482d-a161-616634f85838-kube-api-access-5x9gl\") pod \"csi-node-driver-srbdg\" (UID: \"7decfb7d-0a1b-482d-a161-616634f85838\") " pod="calico-system/csi-node-driver-srbdg"
Apr 30 00:36:40.662151 kubelet[3188]: E0430 00:36:40.662123 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.662437 kubelet[3188]: W0430 00:36:40.662417 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.662673 kubelet[3188]: E0430 00:36:40.662622 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.663772 kubelet[3188]: E0430 00:36:40.663731 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.664343 kubelet[3188]: W0430 00:36:40.663930 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.664343 kubelet[3188]: E0430 00:36:40.663960 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.665425 kubelet[3188]: E0430 00:36:40.665291 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.665425 kubelet[3188]: W0430 00:36:40.665310 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.665894 kubelet[3188]: E0430 00:36:40.665760 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.666803 kubelet[3188]: E0430 00:36:40.666727 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.666803 kubelet[3188]: W0430 00:36:40.666746 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.667348 kubelet[3188]: E0430 00:36:40.666778 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.668834 kubelet[3188]: E0430 00:36:40.668687 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.668834 kubelet[3188]: W0430 00:36:40.668812 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.669151 kubelet[3188]: E0430 00:36:40.668989 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.669871 kubelet[3188]: E0430 00:36:40.669705 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.669871 kubelet[3188]: W0430 00:36:40.669726 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.670172 kubelet[3188]: E0430 00:36:40.670015 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.671674 kubelet[3188]: E0430 00:36:40.671541 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.671674 kubelet[3188]: W0430 00:36:40.671557 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.673027 kubelet[3188]: E0430 00:36:40.672798 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.673027 kubelet[3188]: W0430 00:36:40.672814 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.673581 kubelet[3188]: E0430 00:36:40.673258 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.673581 kubelet[3188]: E0430 00:36:40.673320 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.674606 kubelet[3188]: E0430 00:36:40.674210 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.674606 kubelet[3188]: W0430 00:36:40.674424 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.678402 kubelet[3188]: E0430 00:36:40.677434 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.678402 kubelet[3188]: E0430 00:36:40.677649 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.678402 kubelet[3188]: W0430 00:36:40.677661 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.678402 kubelet[3188]: E0430 00:36:40.677864 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.678402 kubelet[3188]: E0430 00:36:40.678125 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.678402 kubelet[3188]: W0430 00:36:40.678137 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.678402 kubelet[3188]: E0430 00:36:40.678235 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.679165 kubelet[3188]: E0430 00:36:40.679147 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.679759 kubelet[3188]: W0430 00:36:40.679448 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.679759 kubelet[3188]: E0430 00:36:40.679732 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.680945 kubelet[3188]: E0430 00:36:40.680840 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.680945 kubelet[3188]: W0430 00:36:40.680902 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.681569 kubelet[3188]: E0430 00:36:40.681336 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.682143 kubelet[3188]: E0430 00:36:40.681970 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.682143 kubelet[3188]: W0430 00:36:40.681984 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.682143 kubelet[3188]: E0430 00:36:40.682114 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.683024 kubelet[3188]: E0430 00:36:40.682837 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.683024 kubelet[3188]: W0430 00:36:40.682851 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.683024 kubelet[3188]: E0430 00:36:40.683002 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.684027 kubelet[3188]: E0430 00:36:40.683750 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.684027 kubelet[3188]: W0430 00:36:40.683869 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.684027 kubelet[3188]: E0430 00:36:40.683950 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.685064 kubelet[3188]: E0430 00:36:40.684685 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.685064 kubelet[3188]: W0430 00:36:40.684711 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.685064 kubelet[3188]: E0430 00:36:40.685028 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.686373 kubelet[3188]: E0430 00:36:40.686171 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.686373 kubelet[3188]: W0430 00:36:40.686188 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.686373 kubelet[3188]: E0430 00:36:40.686324 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.686932 kubelet[3188]: E0430 00:36:40.686786 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.686932 kubelet[3188]: W0430 00:36:40.686805 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.687325 kubelet[3188]: E0430 00:36:40.687214 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.688063 kubelet[3188]: E0430 00:36:40.687962 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.688063 kubelet[3188]: W0430 00:36:40.687977 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.689506 kubelet[3188]: E0430 00:36:40.689340 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.690140 kubelet[3188]: E0430 00:36:40.690121 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.690579 kubelet[3188]: W0430 00:36:40.690284 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.691423 kubelet[3188]: E0430 00:36:40.691376 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.692129 kubelet[3188]: E0430 00:36:40.691878 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.692129 kubelet[3188]: W0430 00:36:40.691896 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.693002 kubelet[3188]: E0430 00:36:40.692890 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.694247 kubelet[3188]: E0430 00:36:40.693294 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.694247 kubelet[3188]: W0430 00:36:40.693372 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.694535 kubelet[3188]: E0430 00:36:40.694439 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.698150 kubelet[3188]: E0430 00:36:40.697684 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.698150 kubelet[3188]: W0430 00:36:40.697712 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.702734 kubelet[3188]: E0430 00:36:40.700924 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.703537 kubelet[3188]: W0430 00:36:40.701883 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.703827 kubelet[3188]: E0430 00:36:40.703811 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.703901 kubelet[3188]: W0430 00:36:40.703888 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.704647 kubelet[3188]: E0430 00:36:40.704629 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.704861 kubelet[3188]: W0430 00:36:40.704751 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.705329 containerd[1728]: time="2025-04-30T00:36:40.704481439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:36:40.705329 containerd[1728]: time="2025-04-30T00:36:40.704603679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:36:40.705329 containerd[1728]: time="2025-04-30T00:36:40.704621119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:36:40.705329 containerd[1728]: time="2025-04-30T00:36:40.704865400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:36:40.705690 kubelet[3188]: E0430 00:36:40.705675 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.705774 kubelet[3188]: W0430 00:36:40.705760 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.707116 kubelet[3188]: E0430 00:36:40.706962 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:36:40.707116 kubelet[3188]: W0430 00:36:40.706978 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:36:40.707116 kubelet[3188]: E0430 00:36:40.706995 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.707116 kubelet[3188]: E0430 00:36:40.707029 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.707551 kubelet[3188]: E0430 00:36:40.707502 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:36:40.707809 kubelet[3188]: E0430 00:36:40.707658 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 30 00:36:40.707809 kubelet[3188]: E0430 00:36:40.707686 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.707809 kubelet[3188]: E0430 00:36:40.707700 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.709947 kubelet[3188]: E0430 00:36:40.709898 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.709947 kubelet[3188]: W0430 00:36:40.709918 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.710274 kubelet[3188]: E0430 00:36:40.710153 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.711769 kubelet[3188]: E0430 00:36:40.711698 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.711769 kubelet[3188]: W0430 00:36:40.711723 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.711769 kubelet[3188]: E0430 00:36:40.711738 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.714575 kubelet[3188]: E0430 00:36:40.714482 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.714575 kubelet[3188]: W0430 00:36:40.714513 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.714575 kubelet[3188]: E0430 00:36:40.714533 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.739800 systemd[1]: Started cri-containerd-34eae84cdb22ea9a268f7cbf082e952b64f64de24424d4c47ec0268c5d6f478b.scope - libcontainer container 34eae84cdb22ea9a268f7cbf082e952b64f64de24424d4c47ec0268c5d6f478b. Apr 30 00:36:40.760202 kubelet[3188]: E0430 00:36:40.760171 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.760519 kubelet[3188]: W0430 00:36:40.760497 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.760671 kubelet[3188]: E0430 00:36:40.760640 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.761832 kubelet[3188]: E0430 00:36:40.761812 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.762099 kubelet[3188]: W0430 00:36:40.761930 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.762099 kubelet[3188]: E0430 00:36:40.761979 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.762826 kubelet[3188]: E0430 00:36:40.762677 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.762826 kubelet[3188]: W0430 00:36:40.762693 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.762826 kubelet[3188]: E0430 00:36:40.762718 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.763223 kubelet[3188]: E0430 00:36:40.763071 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.763223 kubelet[3188]: W0430 00:36:40.763084 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.763511 kubelet[3188]: E0430 00:36:40.763361 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.763511 kubelet[3188]: E0430 00:36:40.763451 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.763511 kubelet[3188]: W0430 00:36:40.763459 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.763511 kubelet[3188]: E0430 00:36:40.763471 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.763940 kubelet[3188]: E0430 00:36:40.763844 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.763940 kubelet[3188]: W0430 00:36:40.763856 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.764185 kubelet[3188]: E0430 00:36:40.764031 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.764461 kubelet[3188]: E0430 00:36:40.764381 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.764461 kubelet[3188]: W0430 00:36:40.764392 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.764659 kubelet[3188]: E0430 00:36:40.764571 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.764973 kubelet[3188]: E0430 00:36:40.764912 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.764973 kubelet[3188]: W0430 00:36:40.764925 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.765290 kubelet[3188]: E0430 00:36:40.765013 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.765716 kubelet[3188]: E0430 00:36:40.765612 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.765874 kubelet[3188]: W0430 00:36:40.765790 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.766385 kubelet[3188]: E0430 00:36:40.766208 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.766385 kubelet[3188]: E0430 00:36:40.766289 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.766385 kubelet[3188]: W0430 00:36:40.766297 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.766664 kubelet[3188]: E0430 00:36:40.766569 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.766959 kubelet[3188]: E0430 00:36:40.766937 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.767102 kubelet[3188]: W0430 00:36:40.767027 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.767281 kubelet[3188]: E0430 00:36:40.767220 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.767497 kubelet[3188]: E0430 00:36:40.767375 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.767497 kubelet[3188]: W0430 00:36:40.767392 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.767884 kubelet[3188]: E0430 00:36:40.767702 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.767884 kubelet[3188]: E0430 00:36:40.767837 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.767884 kubelet[3188]: W0430 00:36:40.767846 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.768070 kubelet[3188]: E0430 00:36:40.767990 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.768326 kubelet[3188]: E0430 00:36:40.768223 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.768326 kubelet[3188]: W0430 00:36:40.768234 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.768501 kubelet[3188]: E0430 00:36:40.768417 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.768671 kubelet[3188]: E0430 00:36:40.768595 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.768671 kubelet[3188]: W0430 00:36:40.768605 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.768837 kubelet[3188]: E0430 00:36:40.768754 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.769200 kubelet[3188]: E0430 00:36:40.769118 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.769200 kubelet[3188]: W0430 00:36:40.769131 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.769418 kubelet[3188]: E0430 00:36:40.769304 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.769634 kubelet[3188]: E0430 00:36:40.769550 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.769634 kubelet[3188]: W0430 00:36:40.769561 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.769766 kubelet[3188]: E0430 00:36:40.769716 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.770121 kubelet[3188]: E0430 00:36:40.769949 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.770121 kubelet[3188]: W0430 00:36:40.769961 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.770658 kubelet[3188]: E0430 00:36:40.770492 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.770658 kubelet[3188]: E0430 00:36:40.770583 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.770658 kubelet[3188]: W0430 00:36:40.770590 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.770973 kubelet[3188]: E0430 00:36:40.770796 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.771205 kubelet[3188]: E0430 00:36:40.771191 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.771381 kubelet[3188]: W0430 00:36:40.771272 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.771633 kubelet[3188]: E0430 00:36:40.771568 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.772077 kubelet[3188]: E0430 00:36:40.771989 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.772077 kubelet[3188]: W0430 00:36:40.772004 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.772474 kubelet[3188]: E0430 00:36:40.772299 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.773616 kubelet[3188]: E0430 00:36:40.773541 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.773616 kubelet[3188]: W0430 00:36:40.773558 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.773889 kubelet[3188]: E0430 00:36:40.773704 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.774007 kubelet[3188]: E0430 00:36:40.773995 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.774130 kubelet[3188]: W0430 00:36:40.774057 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.774196 kubelet[3188]: E0430 00:36:40.774184 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.774818 kubelet[3188]: E0430 00:36:40.774452 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.774818 kubelet[3188]: W0430 00:36:40.774464 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.774818 kubelet[3188]: E0430 00:36:40.774478 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.775083 kubelet[3188]: E0430 00:36:40.775070 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.775150 kubelet[3188]: W0430 00:36:40.775138 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.775207 kubelet[3188]: E0430 00:36:40.775196 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:40.801217 kubelet[3188]: E0430 00:36:40.801077 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:40.801736 kubelet[3188]: W0430 00:36:40.801643 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:40.801736 kubelet[3188]: E0430 00:36:40.801690 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:40.837805 containerd[1728]: time="2025-04-30T00:36:40.837759743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lvlsj,Uid:cc2e8170-49d5-4133-a6d4-51ce14ea39ab,Namespace:calico-system,Attempt:0,}" Apr 30 00:36:40.840881 containerd[1728]: time="2025-04-30T00:36:40.840715028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76b9bb856d-2bll2,Uid:b026fe82-d7b8-4624-9574-97f33f3ed5ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"34eae84cdb22ea9a268f7cbf082e952b64f64de24424d4c47ec0268c5d6f478b\"" Apr 30 00:36:40.845447 containerd[1728]: time="2025-04-30T00:36:40.845391475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 00:36:40.912146 containerd[1728]: time="2025-04-30T00:36:40.902308731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:36:40.912146 containerd[1728]: time="2025-04-30T00:36:40.902368931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:36:40.912146 containerd[1728]: time="2025-04-30T00:36:40.902389251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:36:40.912146 containerd[1728]: time="2025-04-30T00:36:40.902491171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:36:40.923480 systemd[1]: Started cri-containerd-7515a33fa3a75568c233aac7e17e954f1d19b773a47885b01146a5580251ea2d.scope - libcontainer container 7515a33fa3a75568c233aac7e17e954f1d19b773a47885b01146a5580251ea2d. Apr 30 00:36:40.952140 containerd[1728]: time="2025-04-30T00:36:40.952025814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lvlsj,Uid:cc2e8170-49d5-4133-a6d4-51ce14ea39ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"7515a33fa3a75568c233aac7e17e954f1d19b773a47885b01146a5580251ea2d\"" Apr 30 00:36:42.048353 kubelet[3188]: E0430 00:36:42.048147 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srbdg" podUID="7decfb7d-0a1b-482d-a161-616634f85838" Apr 30 00:36:43.122650 containerd[1728]: time="2025-04-30T00:36:43.122593011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:36:43.128032 containerd[1728]: time="2025-04-30T00:36:43.127983900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" Apr 30 00:36:43.138369 containerd[1728]: time="2025-04-30T00:36:43.137184675Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:36:43.142233 containerd[1728]: time="2025-04-30T00:36:43.142149763Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:36:43.143083 containerd[1728]: time="2025-04-30T00:36:43.142864045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.296533808s" Apr 30 00:36:43.143083 containerd[1728]: time="2025-04-30T00:36:43.142893405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" Apr 30 00:36:43.144864 containerd[1728]: time="2025-04-30T00:36:43.144838008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 00:36:43.155698 containerd[1728]: time="2025-04-30T00:36:43.155491426Z" level=info msg="CreateContainer within sandbox \"34eae84cdb22ea9a268f7cbf082e952b64f64de24424d4c47ec0268c5d6f478b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 00:36:43.215331 containerd[1728]: time="2025-04-30T00:36:43.215235486Z" level=info msg="CreateContainer within sandbox \"34eae84cdb22ea9a268f7cbf082e952b64f64de24424d4c47ec0268c5d6f478b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1220b9b0e0ea8120ed1070d9e303a1498813123e61c2c5ebda89ee3c0cdbca80\"" Apr 30 00:36:43.216562 containerd[1728]: time="2025-04-30T00:36:43.216503288Z" level=info msg="StartContainer for \"1220b9b0e0ea8120ed1070d9e303a1498813123e61c2c5ebda89ee3c0cdbca80\"" Apr 30 00:36:43.251479 systemd[1]: Started cri-containerd-1220b9b0e0ea8120ed1070d9e303a1498813123e61c2c5ebda89ee3c0cdbca80.scope - libcontainer container 
1220b9b0e0ea8120ed1070d9e303a1498813123e61c2c5ebda89ee3c0cdbca80. Apr 30 00:36:43.290821 containerd[1728]: time="2025-04-30T00:36:43.290771892Z" level=info msg="StartContainer for \"1220b9b0e0ea8120ed1070d9e303a1498813123e61c2c5ebda89ee3c0cdbca80\" returns successfully" Apr 30 00:36:44.048942 kubelet[3188]: E0430 00:36:44.048890 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srbdg" podUID="7decfb7d-0a1b-482d-a161-616634f85838" Apr 30 00:36:44.166534 kubelet[3188]: I0430 00:36:44.166457 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76b9bb856d-2bll2" podStartSLOduration=1.8674085150000002 podStartE2EDuration="4.166440927s" podCreationTimestamp="2025-04-30 00:36:40 +0000 UTC" firstStartedPulling="2025-04-30 00:36:40.844708634 +0000 UTC m=+17.916782221" lastFinishedPulling="2025-04-30 00:36:43.143741046 +0000 UTC m=+20.215814633" observedRunningTime="2025-04-30 00:36:44.165444206 +0000 UTC m=+21.237517793" watchObservedRunningTime="2025-04-30 00:36:44.166440927 +0000 UTC m=+21.238514474" Apr 30 00:36:44.175114 kubelet[3188]: E0430 00:36:44.175017 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.175114 kubelet[3188]: W0430 00:36:44.175043 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.175114 kubelet[3188]: E0430 00:36:44.175064 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.176590 kubelet[3188]: E0430 00:36:44.176363 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.176590 kubelet[3188]: W0430 00:36:44.176383 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.176590 kubelet[3188]: E0430 00:36:44.176429 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.176957 kubelet[3188]: E0430 00:36:44.176804 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.176957 kubelet[3188]: W0430 00:36:44.176815 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.176957 kubelet[3188]: E0430 00:36:44.176828 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.177169 kubelet[3188]: E0430 00:36:44.177094 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.177169 kubelet[3188]: W0430 00:36:44.177105 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.177169 kubelet[3188]: E0430 00:36:44.177117 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.177814 kubelet[3188]: E0430 00:36:44.177620 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.177814 kubelet[3188]: W0430 00:36:44.177634 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.177814 kubelet[3188]: E0430 00:36:44.177645 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.178421 kubelet[3188]: E0430 00:36:44.178244 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.178421 kubelet[3188]: W0430 00:36:44.178258 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.178421 kubelet[3188]: E0430 00:36:44.178292 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.179575 kubelet[3188]: E0430 00:36:44.178622 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.179575 kubelet[3188]: W0430 00:36:44.178635 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.179575 kubelet[3188]: E0430 00:36:44.178646 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.180084 kubelet[3188]: E0430 00:36:44.179915 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.180084 kubelet[3188]: W0430 00:36:44.179930 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.180084 kubelet[3188]: E0430 00:36:44.179942 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.180343 kubelet[3188]: E0430 00:36:44.180240 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.180343 kubelet[3188]: W0430 00:36:44.180250 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.180343 kubelet[3188]: E0430 00:36:44.180297 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.180696 kubelet[3188]: E0430 00:36:44.180565 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.180696 kubelet[3188]: W0430 00:36:44.180576 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.180696 kubelet[3188]: E0430 00:36:44.180589 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.180956 kubelet[3188]: E0430 00:36:44.180844 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.180956 kubelet[3188]: W0430 00:36:44.180855 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.180956 kubelet[3188]: E0430 00:36:44.180864 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.181741 kubelet[3188]: E0430 00:36:44.181664 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.181741 kubelet[3188]: W0430 00:36:44.181677 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.181741 kubelet[3188]: E0430 00:36:44.181689 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.182244 kubelet[3188]: E0430 00:36:44.182042 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.182244 kubelet[3188]: W0430 00:36:44.182055 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.182244 kubelet[3188]: E0430 00:36:44.182065 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.182590 kubelet[3188]: E0430 00:36:44.182476 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.182590 kubelet[3188]: W0430 00:36:44.182489 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.182590 kubelet[3188]: E0430 00:36:44.182499 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.182834 kubelet[3188]: E0430 00:36:44.182754 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.182834 kubelet[3188]: W0430 00:36:44.182765 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.182834 kubelet[3188]: E0430 00:36:44.182775 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.190829 kubelet[3188]: E0430 00:36:44.190797 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.190829 kubelet[3188]: W0430 00:36:44.190819 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.191000 kubelet[3188]: E0430 00:36:44.190839 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.191034 kubelet[3188]: E0430 00:36:44.191003 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.191034 kubelet[3188]: W0430 00:36:44.191012 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.191034 kubelet[3188]: E0430 00:36:44.191020 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.191341 kubelet[3188]: E0430 00:36:44.191318 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.191341 kubelet[3188]: W0430 00:36:44.191335 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.191442 kubelet[3188]: E0430 00:36:44.191351 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.191643 kubelet[3188]: E0430 00:36:44.191551 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.191643 kubelet[3188]: W0430 00:36:44.191564 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.191643 kubelet[3188]: E0430 00:36:44.191574 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.191847 kubelet[3188]: E0430 00:36:44.191717 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.191847 kubelet[3188]: W0430 00:36:44.191724 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.191847 kubelet[3188]: E0430 00:36:44.191741 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.192052 kubelet[3188]: E0430 00:36:44.191883 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.192052 kubelet[3188]: W0430 00:36:44.191891 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.192052 kubelet[3188]: E0430 00:36:44.191909 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.192353 kubelet[3188]: E0430 00:36:44.192090 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.192353 kubelet[3188]: W0430 00:36:44.192098 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.192353 kubelet[3188]: E0430 00:36:44.192133 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.192625 kubelet[3188]: E0430 00:36:44.192602 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.192625 kubelet[3188]: W0430 00:36:44.192623 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.192699 kubelet[3188]: E0430 00:36:44.192641 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.194495 kubelet[3188]: E0430 00:36:44.194469 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.194495 kubelet[3188]: W0430 00:36:44.194487 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.194599 kubelet[3188]: E0430 00:36:44.194591 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.194767 kubelet[3188]: E0430 00:36:44.194752 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.194767 kubelet[3188]: W0430 00:36:44.194764 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.194862 kubelet[3188]: E0430 00:36:44.194846 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.195062 kubelet[3188]: E0430 00:36:44.195043 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.195062 kubelet[3188]: W0430 00:36:44.195058 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.195216 kubelet[3188]: E0430 00:36:44.195127 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.195318 kubelet[3188]: E0430 00:36:44.195302 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.195318 kubelet[3188]: W0430 00:36:44.195315 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.195377 kubelet[3188]: E0430 00:36:44.195327 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.195542 kubelet[3188]: E0430 00:36:44.195525 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.195542 kubelet[3188]: W0430 00:36:44.195539 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.195606 kubelet[3188]: E0430 00:36:44.195553 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.196729 kubelet[3188]: E0430 00:36:44.196086 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.196729 kubelet[3188]: W0430 00:36:44.196113 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.196729 kubelet[3188]: E0430 00:36:44.196140 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.196729 kubelet[3188]: E0430 00:36:44.196414 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.196729 kubelet[3188]: W0430 00:36:44.196424 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.196729 kubelet[3188]: E0430 00:36:44.196434 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.196729 kubelet[3188]: E0430 00:36:44.196678 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.196729 kubelet[3188]: W0430 00:36:44.196688 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.196961 kubelet[3188]: E0430 00:36:44.196863 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.196961 kubelet[3188]: W0430 00:36:44.196889 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.196961 kubelet[3188]: E0430 00:36:44.196900 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:36:44.197028 kubelet[3188]: E0430 00:36:44.196967 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.197464 kubelet[3188]: E0430 00:36:44.197437 3188 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:36:44.197464 kubelet[3188]: W0430 00:36:44.197453 3188 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:36:44.197464 kubelet[3188]: E0430 00:36:44.197464 3188 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:36:44.628992 containerd[1728]: time="2025-04-30T00:36:44.628353336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:36:44.631590 containerd[1728]: time="2025-04-30T00:36:44.631534459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" Apr 30 00:36:44.636816 containerd[1728]: time="2025-04-30T00:36:44.636757302Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:36:44.644496 containerd[1728]: time="2025-04-30T00:36:44.644362108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:36:44.645228 containerd[1728]: 
time="2025-04-30T00:36:44.645018828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.498840898s" Apr 30 00:36:44.645228 containerd[1728]: time="2025-04-30T00:36:44.645059228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" Apr 30 00:36:44.649476 containerd[1728]: time="2025-04-30T00:36:44.648793271Z" level=info msg="CreateContainer within sandbox \"7515a33fa3a75568c233aac7e17e954f1d19b773a47885b01146a5580251ea2d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 00:36:44.747900 containerd[1728]: time="2025-04-30T00:36:44.747851622Z" level=info msg="CreateContainer within sandbox \"7515a33fa3a75568c233aac7e17e954f1d19b773a47885b01146a5580251ea2d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"03e5feb4cfbc6359567ac6f4b5142e08a02054789889742b4738d3ae3ec13f80\"" Apr 30 00:36:44.750179 containerd[1728]: time="2025-04-30T00:36:44.748595102Z" level=info msg="StartContainer for \"03e5feb4cfbc6359567ac6f4b5142e08a02054789889742b4738d3ae3ec13f80\"" Apr 30 00:36:44.776095 systemd[1]: run-containerd-runc-k8s.io-03e5feb4cfbc6359567ac6f4b5142e08a02054789889742b4738d3ae3ec13f80-runc.IoFpTR.mount: Deactivated successfully. Apr 30 00:36:44.783445 systemd[1]: Started cri-containerd-03e5feb4cfbc6359567ac6f4b5142e08a02054789889742b4738d3ae3ec13f80.scope - libcontainer container 03e5feb4cfbc6359567ac6f4b5142e08a02054789889742b4738d3ae3ec13f80. 
Apr 30 00:36:44.817079 containerd[1728]: time="2025-04-30T00:36:44.817033711Z" level=info msg="StartContainer for \"03e5feb4cfbc6359567ac6f4b5142e08a02054789889742b4738d3ae3ec13f80\" returns successfully" Apr 30 00:36:44.831245 systemd[1]: cri-containerd-03e5feb4cfbc6359567ac6f4b5142e08a02054789889742b4738d3ae3ec13f80.scope: Deactivated successfully. Apr 30 00:36:44.855076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03e5feb4cfbc6359567ac6f4b5142e08a02054789889742b4738d3ae3ec13f80-rootfs.mount: Deactivated successfully. Apr 30 00:36:45.153005 kubelet[3188]: I0430 00:36:45.152697 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:36:45.680050 containerd[1728]: time="2025-04-30T00:36:45.679936246Z" level=info msg="shim disconnected" id=03e5feb4cfbc6359567ac6f4b5142e08a02054789889742b4738d3ae3ec13f80 namespace=k8s.io Apr 30 00:36:45.680050 containerd[1728]: time="2025-04-30T00:36:45.679992646Z" level=warning msg="cleaning up after shim disconnected" id=03e5feb4cfbc6359567ac6f4b5142e08a02054789889742b4738d3ae3ec13f80 namespace=k8s.io Apr 30 00:36:45.680050 containerd[1728]: time="2025-04-30T00:36:45.680005446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:36:46.048702 kubelet[3188]: E0430 00:36:46.048554 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srbdg" podUID="7decfb7d-0a1b-482d-a161-616634f85838" Apr 30 00:36:46.158421 containerd[1728]: time="2025-04-30T00:36:46.158373227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 00:36:48.049415 kubelet[3188]: E0430 00:36:48.048545 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srbdg" podUID="7decfb7d-0a1b-482d-a161-616634f85838" Apr 30 00:36:48.236609 kubelet[3188]: I0430 00:36:48.236329 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:36:50.049145 kubelet[3188]: E0430 00:36:50.048594 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srbdg" podUID="7decfb7d-0a1b-482d-a161-616634f85838" Apr 30 00:36:50.716168 containerd[1728]: time="2025-04-30T00:36:50.716112113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:36:50.718849 containerd[1728]: time="2025-04-30T00:36:50.718630198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" Apr 30 00:36:50.721803 containerd[1728]: time="2025-04-30T00:36:50.721486243Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:36:50.728464 containerd[1728]: time="2025-04-30T00:36:50.728411695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:36:50.729393 containerd[1728]: time="2025-04-30T00:36:50.729360416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.570943669s" Apr 30 00:36:50.729393 containerd[1728]: time="2025-04-30T00:36:50.729392897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" Apr 30 00:36:50.732141 containerd[1728]: time="2025-04-30T00:36:50.732041901Z" level=info msg="CreateContainer within sandbox \"7515a33fa3a75568c233aac7e17e954f1d19b773a47885b01146a5580251ea2d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 00:36:50.785282 containerd[1728]: time="2025-04-30T00:36:50.785215714Z" level=info msg="CreateContainer within sandbox \"7515a33fa3a75568c233aac7e17e954f1d19b773a47885b01146a5580251ea2d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7312b5a46e31c6c92dc3ccd5e6038a228d430572a098cdc66369cce6cbf74331\"" Apr 30 00:36:50.785964 containerd[1728]: time="2025-04-30T00:36:50.785887635Z" level=info msg="StartContainer for \"7312b5a46e31c6c92dc3ccd5e6038a228d430572a098cdc66369cce6cbf74331\"" Apr 30 00:36:50.814448 systemd[1]: Started cri-containerd-7312b5a46e31c6c92dc3ccd5e6038a228d430572a098cdc66369cce6cbf74331.scope - libcontainer container 7312b5a46e31c6c92dc3ccd5e6038a228d430572a098cdc66369cce6cbf74331. 
Apr 30 00:36:50.845989 containerd[1728]: time="2025-04-30T00:36:50.845936819Z" level=info msg="StartContainer for \"7312b5a46e31c6c92dc3ccd5e6038a228d430572a098cdc66369cce6cbf74331\" returns successfully" Apr 30 00:36:51.803094 containerd[1728]: time="2025-04-30T00:36:51.803055200Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:36:51.805837 systemd[1]: cri-containerd-7312b5a46e31c6c92dc3ccd5e6038a228d430572a098cdc66369cce6cbf74331.scope: Deactivated successfully. Apr 30 00:36:51.827489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7312b5a46e31c6c92dc3ccd5e6038a228d430572a098cdc66369cce6cbf74331-rootfs.mount: Deactivated successfully. Apr 30 00:36:51.906123 kubelet[3188]: I0430 00:36:51.906089 3188 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 00:36:52.138589 kubelet[3188]: W0430 00:36:51.942355 3188 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-a-cee67ba5b3" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.3-a-cee67ba5b3' and this object Apr 30 00:36:52.138589 kubelet[3188]: E0430 00:36:51.942400 3188 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.3-a-cee67ba5b3\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.3-a-cee67ba5b3' and this object" logger="UnhandledError" Apr 30 00:36:52.138589 kubelet[3188]: I0430 
00:36:52.039430 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbvdm\" (UniqueName: \"kubernetes.io/projected/c64586bc-5bbf-4e31-87de-887039175c19-kube-api-access-tbvdm\") pod \"calico-kube-controllers-6ffdd4748-pq8mf\" (UID: \"c64586bc-5bbf-4e31-87de-887039175c19\") " pod="calico-system/calico-kube-controllers-6ffdd4748-pq8mf" Apr 30 00:36:52.138589 kubelet[3188]: I0430 00:36:52.039476 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpgl8\" (UniqueName: \"kubernetes.io/projected/0e5b089a-7bcf-4e20-a4b2-4262bc369176-kube-api-access-dpgl8\") pod \"coredns-668d6bf9bc-5pklv\" (UID: \"0e5b089a-7bcf-4e20-a4b2-4262bc369176\") " pod="kube-system/coredns-668d6bf9bc-5pklv" Apr 30 00:36:52.138589 kubelet[3188]: I0430 00:36:52.039497 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e5b089a-7bcf-4e20-a4b2-4262bc369176-config-volume\") pod \"coredns-668d6bf9bc-5pklv\" (UID: \"0e5b089a-7bcf-4e20-a4b2-4262bc369176\") " pod="kube-system/coredns-668d6bf9bc-5pklv" Apr 30 00:36:51.949320 systemd[1]: Created slice kubepods-besteffort-pod7031837f_330f_440e_9066_6afc05e792f8.slice - libcontainer container kubepods-besteffort-pod7031837f_330f_440e_9066_6afc05e792f8.slice. 
Apr 30 00:36:52.139006 kubelet[3188]: I0430 00:36:52.039515 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7031837f-330f-440e-9066-6afc05e792f8-calico-apiserver-certs\") pod \"calico-apiserver-7f564694cc-qjjr5\" (UID: \"7031837f-330f-440e-9066-6afc05e792f8\") " pod="calico-apiserver/calico-apiserver-7f564694cc-qjjr5"
Apr 30 00:36:52.139006 kubelet[3188]: I0430 00:36:52.039531 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/453b9012-9a51-453e-90a8-b4a35c52c8de-calico-apiserver-certs\") pod \"calico-apiserver-7f564694cc-g8pzv\" (UID: \"453b9012-9a51-453e-90a8-b4a35c52c8de\") " pod="calico-apiserver/calico-apiserver-7f564694cc-g8pzv"
Apr 30 00:36:52.139006 kubelet[3188]: I0430 00:36:52.039553 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjvhn\" (UniqueName: \"kubernetes.io/projected/dc30609a-e3b4-47b3-9c7c-bb370013278c-kube-api-access-gjvhn\") pod \"coredns-668d6bf9bc-9tvnm\" (UID: \"dc30609a-e3b4-47b3-9c7c-bb370013278c\") " pod="kube-system/coredns-668d6bf9bc-9tvnm"
Apr 30 00:36:52.139006 kubelet[3188]: I0430 00:36:52.039570 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crwmj\" (UniqueName: \"kubernetes.io/projected/453b9012-9a51-453e-90a8-b4a35c52c8de-kube-api-access-crwmj\") pod \"calico-apiserver-7f564694cc-g8pzv\" (UID: \"453b9012-9a51-453e-90a8-b4a35c52c8de\") " pod="calico-apiserver/calico-apiserver-7f564694cc-g8pzv"
Apr 30 00:36:52.139006 kubelet[3188]: I0430 00:36:52.039590 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc30609a-e3b4-47b3-9c7c-bb370013278c-config-volume\") pod \"coredns-668d6bf9bc-9tvnm\" (UID: \"dc30609a-e3b4-47b3-9c7c-bb370013278c\") " pod="kube-system/coredns-668d6bf9bc-9tvnm"
Apr 30 00:36:51.967286 systemd[1]: Created slice kubepods-burstable-poddc30609a_e3b4_47b3_9c7c_bb370013278c.slice - libcontainer container kubepods-burstable-poddc30609a_e3b4_47b3_9c7c_bb370013278c.slice.
Apr 30 00:36:52.139273 kubelet[3188]: I0430 00:36:52.039606 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c64586bc-5bbf-4e31-87de-887039175c19-tigera-ca-bundle\") pod \"calico-kube-controllers-6ffdd4748-pq8mf\" (UID: \"c64586bc-5bbf-4e31-87de-887039175c19\") " pod="calico-system/calico-kube-controllers-6ffdd4748-pq8mf"
Apr 30 00:36:52.139273 kubelet[3188]: I0430 00:36:52.039625 3188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgtxw\" (UniqueName: \"kubernetes.io/projected/7031837f-330f-440e-9066-6afc05e792f8-kube-api-access-cgtxw\") pod \"calico-apiserver-7f564694cc-qjjr5\" (UID: \"7031837f-330f-440e-9066-6afc05e792f8\") " pod="calico-apiserver/calico-apiserver-7f564694cc-qjjr5"
Apr 30 00:36:51.976585 systemd[1]: Created slice kubepods-besteffort-podc64586bc_5bbf_4e31_87de_887039175c19.slice - libcontainer container kubepods-besteffort-podc64586bc_5bbf_4e31_87de_887039175c19.slice.
Apr 30 00:36:51.985435 systemd[1]: Created slice kubepods-besteffort-pod453b9012_9a51_453e_90a8_b4a35c52c8de.slice - libcontainer container kubepods-besteffort-pod453b9012_9a51_453e_90a8_b4a35c52c8de.slice.
Apr 30 00:36:51.991863 systemd[1]: Created slice kubepods-burstable-pod0e5b089a_7bcf_4e20_a4b2_4262bc369176.slice - libcontainer container kubepods-burstable-pod0e5b089a_7bcf_4e20_a4b2_4262bc369176.slice.
Apr 30 00:36:52.054703 systemd[1]: Created slice kubepods-besteffort-pod7decfb7d_0a1b_482d_a161_616634f85838.slice - libcontainer container kubepods-besteffort-pod7decfb7d_0a1b_482d_a161_616634f85838.slice.
Apr 30 00:36:52.146503 containerd[1728]: time="2025-04-30T00:36:52.140374603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srbdg,Uid:7decfb7d-0a1b-482d-a161-616634f85838,Namespace:calico-system,Attempt:0,}"
Apr 30 00:36:52.441631 containerd[1728]: time="2025-04-30T00:36:52.441580111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ffdd4748-pq8mf,Uid:c64586bc-5bbf-4e31-87de-887039175c19,Namespace:calico-system,Attempt:0,}"
Apr 30 00:36:52.449018 containerd[1728]: time="2025-04-30T00:36:52.448943042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5pklv,Uid:0e5b089a-7bcf-4e20-a4b2-4262bc369176,Namespace:kube-system,Attempt:0,}"
Apr 30 00:36:52.449244 containerd[1728]: time="2025-04-30T00:36:52.448944322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9tvnm,Uid:dc30609a-e3b4-47b3-9c7c-bb370013278c,Namespace:kube-system,Attempt:0,}"
Apr 30 00:36:52.996071 containerd[1728]: time="2025-04-30T00:36:52.995952691Z" level=info msg="shim disconnected" id=7312b5a46e31c6c92dc3ccd5e6038a228d430572a098cdc66369cce6cbf74331 namespace=k8s.io
Apr 30 00:36:52.996071 containerd[1728]: time="2025-04-30T00:36:52.996004091Z" level=warning msg="cleaning up after shim disconnected" id=7312b5a46e31c6c92dc3ccd5e6038a228d430572a098cdc66369cce6cbf74331 namespace=k8s.io
Apr 30 00:36:52.996071 containerd[1728]: time="2025-04-30T00:36:52.996013251Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:36:53.168356 kubelet[3188]: E0430 00:36:53.168247 3188 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:36:53.168356 kubelet[3188]: E0430 00:36:53.168327 3188 projected.go:194] Error preparing data for projected volume kube-api-access-crwmj for pod calico-apiserver/calico-apiserver-7f564694cc-g8pzv: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:36:53.168744 kubelet[3188]: E0430 00:36:53.168413 3188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/453b9012-9a51-453e-90a8-b4a35c52c8de-kube-api-access-crwmj podName:453b9012-9a51-453e-90a8-b4a35c52c8de nodeName:}" failed. No retries permitted until 2025-04-30 00:36:53.668393079 +0000 UTC m=+30.740466666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-crwmj" (UniqueName: "kubernetes.io/projected/453b9012-9a51-453e-90a8-b4a35c52c8de-kube-api-access-crwmj") pod "calico-apiserver-7f564694cc-g8pzv" (UID: "453b9012-9a51-453e-90a8-b4a35c52c8de") : failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:36:53.187784 kubelet[3188]: E0430 00:36:53.186155 3188 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:36:53.187784 kubelet[3188]: E0430 00:36:53.186188 3188 projected.go:194] Error preparing data for projected volume kube-api-access-cgtxw for pod calico-apiserver/calico-apiserver-7f564694cc-qjjr5: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:36:53.187784 kubelet[3188]: E0430 00:36:53.186234 3188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7031837f-330f-440e-9066-6afc05e792f8-kube-api-access-cgtxw podName:7031837f-330f-440e-9066-6afc05e792f8 nodeName:}" failed. No retries permitted until 2025-04-30 00:36:53.686218266 +0000 UTC m=+30.758291853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cgtxw" (UniqueName: "kubernetes.io/projected/7031837f-330f-440e-9066-6afc05e792f8-kube-api-access-cgtxw") pod "calico-apiserver-7f564694cc-qjjr5" (UID: "7031837f-330f-440e-9066-6afc05e792f8") : failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:36:53.189190 containerd[1728]: time="2025-04-30T00:36:53.188773710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
Apr 30 00:36:53.213990 containerd[1728]: time="2025-04-30T00:36:53.213872669Z" level=error msg="Failed to destroy network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.214973 containerd[1728]: time="2025-04-30T00:36:53.214856951Z" level=error msg="encountered an error cleaning up failed sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.215166 containerd[1728]: time="2025-04-30T00:36:53.214961231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srbdg,Uid:7decfb7d-0a1b-482d-a161-616634f85838,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.215993 kubelet[3188]: E0430 00:36:53.215834 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.215993 kubelet[3188]: E0430 00:36:53.215963 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-srbdg"
Apr 30 00:36:53.216434 kubelet[3188]: E0430 00:36:53.216326 3188 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-srbdg"
Apr 30 00:36:53.218365 kubelet[3188]: E0430 00:36:53.216829 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-srbdg_calico-system(7decfb7d-0a1b-482d-a161-616634f85838)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-srbdg_calico-system(7decfb7d-0a1b-482d-a161-616634f85838)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-srbdg" podUID="7decfb7d-0a1b-482d-a161-616634f85838"
Apr 30 00:36:53.315739 containerd[1728]: time="2025-04-30T00:36:53.315013226Z" level=error msg="Failed to destroy network for sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.316310 containerd[1728]: time="2025-04-30T00:36:53.316248028Z" level=error msg="encountered an error cleaning up failed sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.316493 containerd[1728]: time="2025-04-30T00:36:53.316437908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ffdd4748-pq8mf,Uid:c64586bc-5bbf-4e31-87de-887039175c19,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.317222 kubelet[3188]: E0430 00:36:53.316827 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.317222 kubelet[3188]: E0430 00:36:53.316891 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ffdd4748-pq8mf"
Apr 30 00:36:53.317222 kubelet[3188]: E0430 00:36:53.316915 3188 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ffdd4748-pq8mf"
Apr 30 00:36:53.318324 kubelet[3188]: E0430 00:36:53.316959 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6ffdd4748-pq8mf_calico-system(c64586bc-5bbf-4e31-87de-887039175c19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6ffdd4748-pq8mf_calico-system(c64586bc-5bbf-4e31-87de-887039175c19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ffdd4748-pq8mf" podUID="c64586bc-5bbf-4e31-87de-887039175c19"
Apr 30 00:36:53.323011 containerd[1728]: time="2025-04-30T00:36:53.322875278Z" level=error msg="Failed to destroy network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.323478 containerd[1728]: time="2025-04-30T00:36:53.323372639Z" level=error msg="encountered an error cleaning up failed sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.323478 containerd[1728]: time="2025-04-30T00:36:53.323428559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5pklv,Uid:0e5b089a-7bcf-4e20-a4b2-4262bc369176,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.323815 kubelet[3188]: E0430 00:36:53.323778 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.324072 kubelet[3188]: E0430 00:36:53.323962 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5pklv"
Apr 30 00:36:53.324072 kubelet[3188]: E0430 00:36:53.324012 3188 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5pklv"
Apr 30 00:36:53.324212 kubelet[3188]: E0430 00:36:53.324171 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5pklv_kube-system(0e5b089a-7bcf-4e20-a4b2-4262bc369176)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5pklv_kube-system(0e5b089a-7bcf-4e20-a4b2-4262bc369176)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5pklv" podUID="0e5b089a-7bcf-4e20-a4b2-4262bc369176"
Apr 30 00:36:53.339202 containerd[1728]: time="2025-04-30T00:36:53.339085103Z" level=error msg="Failed to destroy network for sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.339454 containerd[1728]: time="2025-04-30T00:36:53.339426984Z" level=error msg="encountered an error cleaning up failed sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.339492 containerd[1728]: time="2025-04-30T00:36:53.339478504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9tvnm,Uid:dc30609a-e3b4-47b3-9c7c-bb370013278c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.339883 kubelet[3188]: E0430 00:36:53.339698 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:53.339883 kubelet[3188]: E0430 00:36:53.339754 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9tvnm"
Apr 30 00:36:53.339883 kubelet[3188]: E0430 00:36:53.339774 3188 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9tvnm"
Apr 30 00:36:53.340038 kubelet[3188]: E0430 00:36:53.339822 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9tvnm_kube-system(dc30609a-e3b4-47b3-9c7c-bb370013278c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9tvnm_kube-system(dc30609a-e3b4-47b3-9c7c-bb370013278c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9tvnm" podUID="dc30609a-e3b4-47b3-9c7c-bb370013278c"
Apr 30 00:36:53.832628 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d-shm.mount: Deactivated successfully.
Apr 30 00:36:53.832722 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8-shm.mount: Deactivated successfully.
Apr 30 00:36:53.833075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700-shm.mount: Deactivated successfully.
Apr 30 00:36:53.833151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f-shm.mount: Deactivated successfully.
Apr 30 00:36:53.939874 containerd[1728]: time="2025-04-30T00:36:53.939596915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f564694cc-qjjr5,Uid:7031837f-330f-440e-9066-6afc05e792f8,Namespace:calico-apiserver,Attempt:0,}"
Apr 30 00:36:53.941525 containerd[1728]: time="2025-04-30T00:36:53.941490958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f564694cc-g8pzv,Uid:453b9012-9a51-453e-90a8-b4a35c52c8de,Namespace:calico-apiserver,Attempt:0,}"
Apr 30 00:36:54.131418 containerd[1728]: time="2025-04-30T00:36:54.130654612Z" level=error msg="Failed to destroy network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:54.133185 containerd[1728]: time="2025-04-30T00:36:54.132851655Z" level=error msg="encountered an error cleaning up failed sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:54.133185 containerd[1728]: time="2025-04-30T00:36:54.132922695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f564694cc-qjjr5,Uid:7031837f-330f-440e-9066-6afc05e792f8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:54.134566 kubelet[3188]: E0430 00:36:54.133121 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:54.134566 kubelet[3188]: E0430 00:36:54.133176 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f564694cc-qjjr5"
Apr 30 00:36:54.134566 kubelet[3188]: E0430 00:36:54.133201 3188 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f564694cc-qjjr5"
Apr 30 00:36:54.134694 kubelet[3188]: E0430 00:36:54.133244 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f564694cc-qjjr5_calico-apiserver(7031837f-330f-440e-9066-6afc05e792f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f564694cc-qjjr5_calico-apiserver(7031837f-330f-440e-9066-6afc05e792f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f564694cc-qjjr5" podUID="7031837f-330f-440e-9066-6afc05e792f8"
Apr 30 00:36:54.147043 containerd[1728]: time="2025-04-30T00:36:54.146582557Z" level=error msg="Failed to destroy network for sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:54.147043 containerd[1728]: time="2025-04-30T00:36:54.146887877Z" level=error msg="encountered an error cleaning up failed sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:54.147043 containerd[1728]: time="2025-04-30T00:36:54.146942277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f564694cc-g8pzv,Uid:453b9012-9a51-453e-90a8-b4a35c52c8de,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:54.147470 kubelet[3188]: E0430 00:36:54.147418 3188 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:54.147534 kubelet[3188]: E0430 00:36:54.147489 3188 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f564694cc-g8pzv"
Apr 30 00:36:54.147534 kubelet[3188]: E0430 00:36:54.147509 3188 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f564694cc-g8pzv"
Apr 30 00:36:54.147582 kubelet[3188]: E0430 00:36:54.147550 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f564694cc-g8pzv_calico-apiserver(453b9012-9a51-453e-90a8-b4a35c52c8de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f564694cc-g8pzv_calico-apiserver(453b9012-9a51-453e-90a8-b4a35c52c8de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f564694cc-g8pzv" podUID="453b9012-9a51-453e-90a8-b4a35c52c8de"
Apr 30 00:36:54.189033 kubelet[3188]: I0430 00:36:54.189000 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13"
Apr 30 00:36:54.190105 containerd[1728]: time="2025-04-30T00:36:54.189955504Z" level=info msg="StopPodSandbox for \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\""
Apr 30 00:36:54.190207 containerd[1728]: time="2025-04-30T00:36:54.190174184Z" level=info msg="Ensure that sandbox 75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13 in task-service has been cleanup successfully"
Apr 30 00:36:54.192711 kubelet[3188]: I0430 00:36:54.192685 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700"
Apr 30 00:36:54.193801 containerd[1728]: time="2025-04-30T00:36:54.193521909Z" level=info msg="StopPodSandbox for \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\""
Apr 30 00:36:54.196418 containerd[1728]: time="2025-04-30T00:36:54.196251034Z" level=info msg="Ensure that sandbox 4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700 in task-service has been cleanup successfully"
Apr 30 00:36:54.197256 kubelet[3188]: I0430 00:36:54.196711 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b"
Apr 30 00:36:54.198225 containerd[1728]: time="2025-04-30T00:36:54.198046356Z" level=info msg="StopPodSandbox for \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\""
Apr 30 00:36:54.198957 containerd[1728]: time="2025-04-30T00:36:54.198919598Z" level=info msg="Ensure that sandbox 541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b in task-service has been cleanup successfully"
Apr 30 00:36:54.200953 kubelet[3188]: I0430 00:36:54.200858 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d"
Apr 30 00:36:54.204288 containerd[1728]: time="2025-04-30T00:36:54.203926966Z" level=info msg="StopPodSandbox for \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\""
Apr 30 00:36:54.204288 containerd[1728]: time="2025-04-30T00:36:54.204105246Z" level=info msg="Ensure that sandbox 6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d in task-service has been cleanup successfully"
Apr 30 00:36:54.209672 kubelet[3188]: I0430 00:36:54.209636 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8"
Apr 30 00:36:54.211673 containerd[1728]: time="2025-04-30T00:36:54.211438697Z" level=info msg="StopPodSandbox for \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\""
Apr 30 00:36:54.212477 containerd[1728]: time="2025-04-30T00:36:54.212364859Z" level=info msg="Ensure that sandbox 010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8 in task-service has been cleanup successfully"
Apr 30 00:36:54.214441 kubelet[3188]: I0430 00:36:54.214407 3188 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f"
Apr 30 00:36:54.217958 containerd[1728]: time="2025-04-30T00:36:54.217655667Z" level=info msg="StopPodSandbox for \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\""
Apr 30 00:36:54.219707 containerd[1728]: time="2025-04-30T00:36:54.219664950Z" level=info msg="Ensure that sandbox e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f in task-service has been cleanup successfully"
Apr 30 00:36:54.288872 containerd[1728]: time="2025-04-30T00:36:54.288432177Z" level=error msg="StopPodSandbox for \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\" failed" error="failed to destroy network for sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 00:36:54.289012 kubelet[3188]: E0430 00:36:54.288674 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700"
Apr 30 00:36:54.289012 kubelet[3188]: E0430 00:36:54.288738 3188 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700"}
Apr 30 00:36:54.289012 kubelet[3188]: E0430 00:36:54.288803 3188 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c64586bc-5bbf-4e31-87de-887039175c19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 00:36:54.289012 kubelet[3188]: E0430 00:36:54.288824 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c64586bc-5bbf-4e31-87de-887039175c19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ffdd4748-pq8mf" podUID="c64586bc-5bbf-4e31-87de-887039175c19" Apr 30 00:36:54.297420 containerd[1728]: time="2025-04-30T00:36:54.296981670Z" level=error msg="StopPodSandbox for \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\" failed" error="failed to destroy network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:36:54.297558 kubelet[3188]: E0430 00:36:54.297226 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:36:54.297558 kubelet[3188]: E0430 00:36:54.297287 3188 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8"} Apr 30 00:36:54.297558 kubelet[3188]: E0430 00:36:54.297334 3188 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e5b089a-7bcf-4e20-a4b2-4262bc369176\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:36:54.297558 kubelet[3188]: E0430 00:36:54.297355 3188 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e5b089a-7bcf-4e20-a4b2-4262bc369176\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5pklv" podUID="0e5b089a-7bcf-4e20-a4b2-4262bc369176" Apr 30 00:36:54.307028 containerd[1728]: time="2025-04-30T00:36:54.306772125Z" level=error msg="StopPodSandbox for \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\" failed" error="failed to destroy network for sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:36:54.307421 kubelet[3188]: E0430 00:36:54.307003 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:36:54.307421 kubelet[3188]: E0430 00:36:54.307294 3188 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13"} Apr 30 00:36:54.307421 kubelet[3188]: E0430 00:36:54.307328 3188 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"453b9012-9a51-453e-90a8-b4a35c52c8de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:36:54.307421 kubelet[3188]: E0430 00:36:54.307349 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"453b9012-9a51-453e-90a8-b4a35c52c8de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f564694cc-g8pzv" podUID="453b9012-9a51-453e-90a8-b4a35c52c8de" Apr 30 00:36:54.316092 containerd[1728]: time="2025-04-30T00:36:54.315981059Z" level=error msg="StopPodSandbox for \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\" failed" error="failed to destroy network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:36:54.316478 kubelet[3188]: E0430 00:36:54.316339 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:36:54.316478 kubelet[3188]: E0430 00:36:54.316389 3188 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f"} Apr 30 00:36:54.316478 kubelet[3188]: E0430 00:36:54.316427 3188 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7decfb7d-0a1b-482d-a161-616634f85838\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:36:54.316478 kubelet[3188]: E0430 00:36:54.316448 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7decfb7d-0a1b-482d-a161-616634f85838\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-srbdg" podUID="7decfb7d-0a1b-482d-a161-616634f85838" Apr 30 00:36:54.319986 containerd[1728]: time="2025-04-30T00:36:54.319628665Z" level=error msg="StopPodSandbox for \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\" failed" error="failed to destroy network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Apr 30 00:36:54.320087 kubelet[3188]: E0430 00:36:54.319840 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:36:54.320087 kubelet[3188]: E0430 00:36:54.319892 3188 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b"} Apr 30 00:36:54.320087 kubelet[3188]: E0430 00:36:54.319923 3188 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7031837f-330f-440e-9066-6afc05e792f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:36:54.320087 kubelet[3188]: E0430 00:36:54.319944 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7031837f-330f-440e-9066-6afc05e792f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f564694cc-qjjr5" podUID="7031837f-330f-440e-9066-6afc05e792f8" Apr 30 00:36:54.320482 
containerd[1728]: time="2025-04-30T00:36:54.320442986Z" level=error msg="StopPodSandbox for \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\" failed" error="failed to destroy network for sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:36:54.320714 kubelet[3188]: E0430 00:36:54.320679 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:36:54.320869 kubelet[3188]: E0430 00:36:54.320796 3188 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d"} Apr 30 00:36:54.320869 kubelet[3188]: E0430 00:36:54.320824 3188 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc30609a-e3b4-47b3-9c7c-bb370013278c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:36:54.320869 kubelet[3188]: E0430 00:36:54.320845 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc30609a-e3b4-47b3-9c7c-bb370013278c\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9tvnm" podUID="dc30609a-e3b4-47b3-9c7c-bb370013278c" Apr 30 00:36:54.828183 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13-shm.mount: Deactivated successfully. Apr 30 00:36:54.828378 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b-shm.mount: Deactivated successfully. Apr 30 00:37:03.622513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080589997.mount: Deactivated successfully. Apr 30 00:37:05.049923 containerd[1728]: time="2025-04-30T00:37:05.049529340Z" level=info msg="StopPodSandbox for \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\"" Apr 30 00:37:05.074646 containerd[1728]: time="2025-04-30T00:37:05.074563300Z" level=error msg="StopPodSandbox for \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\" failed" error="failed to destroy network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:37:05.074933 kubelet[3188]: E0430 00:37:05.074874 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:05.075276 kubelet[3188]: E0430 00:37:05.074950 3188 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b"} Apr 30 00:37:05.075276 kubelet[3188]: E0430 00:37:05.074986 3188 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7031837f-330f-440e-9066-6afc05e792f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:37:05.075276 kubelet[3188]: E0430 00:37:05.075007 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7031837f-330f-440e-9066-6afc05e792f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f564694cc-qjjr5" podUID="7031837f-330f-440e-9066-6afc05e792f8" Apr 30 00:37:06.048998 containerd[1728]: time="2025-04-30T00:37:06.048939494Z" level=info msg="StopPodSandbox for \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\"" Apr 30 00:37:06.075204 containerd[1728]: time="2025-04-30T00:37:06.075130256Z" level=error msg="StopPodSandbox for \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\" failed" error="failed to destroy 
network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:37:06.075620 kubelet[3188]: E0430 00:37:06.075463 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:06.075620 kubelet[3188]: E0430 00:37:06.075530 3188 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f"} Apr 30 00:37:06.075620 kubelet[3188]: E0430 00:37:06.075567 3188 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7decfb7d-0a1b-482d-a161-616634f85838\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:37:06.075620 kubelet[3188]: E0430 00:37:06.075600 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7decfb7d-0a1b-482d-a161-616634f85838\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-srbdg" podUID="7decfb7d-0a1b-482d-a161-616634f85838" Apr 30 00:37:07.050977 containerd[1728]: time="2025-04-30T00:37:07.050655851Z" level=info msg="StopPodSandbox for \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\"" Apr 30 00:37:07.077289 containerd[1728]: time="2025-04-30T00:37:07.077200293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:07.078969 containerd[1728]: time="2025-04-30T00:37:07.078926136Z" level=error msg="StopPodSandbox for \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\" failed" error="failed to destroy network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:37:07.079194 kubelet[3188]: E0430 00:37:07.079136 3188 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:07.079446 kubelet[3188]: E0430 00:37:07.079207 3188 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8"} Apr 30 00:37:07.079446 kubelet[3188]: E0430 00:37:07.079243 3188 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"0e5b089a-7bcf-4e20-a4b2-4262bc369176\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:37:07.079446 kubelet[3188]: E0430 00:37:07.079286 3188 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e5b089a-7bcf-4e20-a4b2-4262bc369176\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5pklv" podUID="0e5b089a-7bcf-4e20-a4b2-4262bc369176" Apr 30 00:37:07.084413 containerd[1728]: time="2025-04-30T00:37:07.084375105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" Apr 30 00:37:07.090219 containerd[1728]: time="2025-04-30T00:37:07.090135434Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:07.101196 containerd[1728]: time="2025-04-30T00:37:07.101128251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:07.101976 containerd[1728]: time="2025-04-30T00:37:07.101821493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id 
\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 13.913007583s" Apr 30 00:37:07.101976 containerd[1728]: time="2025-04-30T00:37:07.101861453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" Apr 30 00:37:07.113464 containerd[1728]: time="2025-04-30T00:37:07.113250751Z" level=info msg="CreateContainer within sandbox \"7515a33fa3a75568c233aac7e17e954f1d19b773a47885b01146a5580251ea2d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 00:37:07.170802 containerd[1728]: time="2025-04-30T00:37:07.170717722Z" level=info msg="CreateContainer within sandbox \"7515a33fa3a75568c233aac7e17e954f1d19b773a47885b01146a5580251ea2d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e32736b2b62c42ba8fd4813e15defd5f7fea9a458290de6cf189b605d1cecd8f\"" Apr 30 00:37:07.172695 containerd[1728]: time="2025-04-30T00:37:07.171245643Z" level=info msg="StartContainer for \"e32736b2b62c42ba8fd4813e15defd5f7fea9a458290de6cf189b605d1cecd8f\"" Apr 30 00:37:07.197444 systemd[1]: Started cri-containerd-e32736b2b62c42ba8fd4813e15defd5f7fea9a458290de6cf189b605d1cecd8f.scope - libcontainer container e32736b2b62c42ba8fd4813e15defd5f7fea9a458290de6cf189b605d1cecd8f. Apr 30 00:37:07.226993 containerd[1728]: time="2025-04-30T00:37:07.226929852Z" level=info msg="StartContainer for \"e32736b2b62c42ba8fd4813e15defd5f7fea9a458290de6cf189b605d1cecd8f\" returns successfully" Apr 30 00:37:07.482668 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 00:37:07.482795 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Apr 30 00:37:08.049088 containerd[1728]: time="2025-04-30T00:37:08.049038565Z" level=info msg="StopPodSandbox for \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\"" Apr 30 00:37:08.049235 containerd[1728]: time="2025-04-30T00:37:08.049102806Z" level=info msg="StopPodSandbox for \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\"" Apr 30 00:37:08.110133 kubelet[3188]: I0430 00:37:08.110044 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lvlsj" podStartSLOduration=1.9630543889999998 podStartE2EDuration="28.110025303s" podCreationTimestamp="2025-04-30 00:36:40 +0000 UTC" firstStartedPulling="2025-04-30 00:36:40.95574926 +0000 UTC m=+18.027822847" lastFinishedPulling="2025-04-30 00:37:07.102720214 +0000 UTC m=+44.174793761" observedRunningTime="2025-04-30 00:37:07.270855602 +0000 UTC m=+44.342929189" watchObservedRunningTime="2025-04-30 00:37:08.110025303 +0000 UTC m=+45.182098850" Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.111 [INFO][4422] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.111 [INFO][4422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" iface="eth0" netns="/var/run/netns/cni-fc0337c2-d70e-82cf-85bd-401726402a55" Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.112 [INFO][4422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" iface="eth0" netns="/var/run/netns/cni-fc0337c2-d70e-82cf-85bd-401726402a55" Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.112 [INFO][4422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" iface="eth0" netns="/var/run/netns/cni-fc0337c2-d70e-82cf-85bd-401726402a55" Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.112 [INFO][4422] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.112 [INFO][4422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.139 [INFO][4435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" HandleID="k8s-pod-network.4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.139 [INFO][4435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.139 [INFO][4435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.149 [WARNING][4435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" HandleID="k8s-pod-network.4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.149 [INFO][4435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" HandleID="k8s-pod-network.4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.150 [INFO][4435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:08.158496 containerd[1728]: 2025-04-30 00:37:08.154 [INFO][4422] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:08.162127 containerd[1728]: time="2025-04-30T00:37:08.158663461Z" level=info msg="TearDown network for sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\" successfully" Apr 30 00:37:08.162127 containerd[1728]: time="2025-04-30T00:37:08.158708781Z" level=info msg="StopPodSandbox for \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\" returns successfully" Apr 30 00:37:08.160867 systemd[1]: run-netns-cni\x2dfc0337c2\x2dd70e\x2d82cf\x2d85bd\x2d401726402a55.mount: Deactivated successfully. 
Apr 30 00:37:08.163207 containerd[1728]: time="2025-04-30T00:37:08.162777348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ffdd4748-pq8mf,Uid:c64586bc-5bbf-4e31-87de-887039175c19,Namespace:calico-system,Attempt:1,}" Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.114 [INFO][4423] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.114 [INFO][4423] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" iface="eth0" netns="/var/run/netns/cni-f064164d-95cd-1049-299b-9aa471cd0bca" Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.115 [INFO][4423] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" iface="eth0" netns="/var/run/netns/cni-f064164d-95cd-1049-299b-9aa471cd0bca" Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.115 [INFO][4423] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" iface="eth0" netns="/var/run/netns/cni-f064164d-95cd-1049-299b-9aa471cd0bca" Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.115 [INFO][4423] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.115 [INFO][4423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.146 [INFO][4437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" HandleID="k8s-pod-network.6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.146 [INFO][4437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.150 [INFO][4437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.170 [WARNING][4437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" HandleID="k8s-pod-network.6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.170 [INFO][4437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" HandleID="k8s-pod-network.6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.172 [INFO][4437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:08.175525 containerd[1728]: 2025-04-30 00:37:08.174 [INFO][4423] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:08.177433 containerd[1728]: time="2025-04-30T00:37:08.177358851Z" level=info msg="TearDown network for sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\" successfully" Apr 30 00:37:08.177433 containerd[1728]: time="2025-04-30T00:37:08.177395891Z" level=info msg="StopPodSandbox for \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\" returns successfully" Apr 30 00:37:08.178601 containerd[1728]: time="2025-04-30T00:37:08.178463533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9tvnm,Uid:dc30609a-e3b4-47b3-9c7c-bb370013278c,Namespace:kube-system,Attempt:1,}" Apr 30 00:37:08.179554 systemd[1]: run-netns-cni\x2df064164d\x2d95cd\x2d1049\x2d299b\x2d9aa471cd0bca.mount: Deactivated successfully. 
Apr 30 00:37:08.442347 systemd-networkd[1346]: cali323ffda34ca: Link UP Apr 30 00:37:08.442702 systemd-networkd[1346]: cali323ffda34ca: Gained carrier Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.275 [INFO][4450] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.298 [INFO][4450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0 calico-kube-controllers-6ffdd4748- calico-system c64586bc-5bbf-4e31-87de-887039175c19 750 0 2025-04-30 00:36:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6ffdd4748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-a-cee67ba5b3 calico-kube-controllers-6ffdd4748-pq8mf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali323ffda34ca [] []}} ContainerID="c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" Namespace="calico-system" Pod="calico-kube-controllers-6ffdd4748-pq8mf" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.298 [INFO][4450] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" Namespace="calico-system" Pod="calico-kube-controllers-6ffdd4748-pq8mf" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.343 [INFO][4492] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" 
HandleID="k8s-pod-network.c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.358 [INFO][4492] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" HandleID="k8s-pod-network.c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000384ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-cee67ba5b3", "pod":"calico-kube-controllers-6ffdd4748-pq8mf", "timestamp":"2025-04-30 00:37:08.343097317 +0000 UTC"}, Hostname:"ci-4081.3.3-a-cee67ba5b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.358 [INFO][4492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.358 [INFO][4492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.358 [INFO][4492] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-cee67ba5b3' Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.360 [INFO][4492] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.365 [INFO][4492] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.381 [INFO][4492] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.384 [INFO][4492] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.387 [INFO][4492] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.387 [INFO][4492] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.389 [INFO][4492] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.397 [INFO][4492] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.406 [INFO][4492] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.69.1/26] block=192.168.69.0/26 handle="k8s-pod-network.c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.406 [INFO][4492] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.1/26] handle="k8s-pod-network.c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.406 [INFO][4492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:08.458667 containerd[1728]: 2025-04-30 00:37:08.406 [INFO][4492] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.1/26] IPv6=[] ContainerID="c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" HandleID="k8s-pod-network.c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:08.459676 containerd[1728]: 2025-04-30 00:37:08.409 [INFO][4450] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" Namespace="calico-system" Pod="calico-kube-controllers-6ffdd4748-pq8mf" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0", GenerateName:"calico-kube-controllers-6ffdd4748-", Namespace:"calico-system", SelfLink:"", UID:"c64586bc-5bbf-4e31-87de-887039175c19", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"6ffdd4748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"", Pod:"calico-kube-controllers-6ffdd4748-pq8mf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali323ffda34ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:08.459676 containerd[1728]: 2025-04-30 00:37:08.409 [INFO][4450] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.1/32] ContainerID="c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" Namespace="calico-system" Pod="calico-kube-controllers-6ffdd4748-pq8mf" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:08.459676 containerd[1728]: 2025-04-30 00:37:08.409 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali323ffda34ca ContainerID="c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" Namespace="calico-system" Pod="calico-kube-controllers-6ffdd4748-pq8mf" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:08.459676 containerd[1728]: 2025-04-30 00:37:08.442 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" Namespace="calico-system" Pod="calico-kube-controllers-6ffdd4748-pq8mf" 
WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:08.459676 containerd[1728]: 2025-04-30 00:37:08.442 [INFO][4450] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" Namespace="calico-system" Pod="calico-kube-controllers-6ffdd4748-pq8mf" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0", GenerateName:"calico-kube-controllers-6ffdd4748-", Namespace:"calico-system", SelfLink:"", UID:"c64586bc-5bbf-4e31-87de-887039175c19", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ffdd4748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe", Pod:"calico-kube-controllers-6ffdd4748-pq8mf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali323ffda34ca", 
MAC:"a6:b0:32:f3:4a:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:08.459676 containerd[1728]: 2025-04-30 00:37:08.455 [INFO][4450] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe" Namespace="calico-system" Pod="calico-kube-controllers-6ffdd4748-pq8mf" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:08.487802 containerd[1728]: time="2025-04-30T00:37:08.486806748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:37:08.487802 containerd[1728]: time="2025-04-30T00:37:08.486885988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:37:08.487802 containerd[1728]: time="2025-04-30T00:37:08.486960668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:08.487802 containerd[1728]: time="2025-04-30T00:37:08.487372269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:08.506444 systemd[1]: Started cri-containerd-c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe.scope - libcontainer container c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe. 
Apr 30 00:37:08.509841 systemd-networkd[1346]: cali0cd009deea4: Link UP Apr 30 00:37:08.510955 systemd-networkd[1346]: cali0cd009deea4: Gained carrier Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.283 [INFO][4458] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.309 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0 coredns-668d6bf9bc- kube-system dc30609a-e3b4-47b3-9c7c-bb370013278c 751 0 2025-04-30 00:36:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-cee67ba5b3 coredns-668d6bf9bc-9tvnm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0cd009deea4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tvnm" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-" Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.309 [INFO][4458] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tvnm" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.362 [INFO][4497] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" HandleID="k8s-pod-network.dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 
00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.384 [INFO][4497] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" HandleID="k8s-pod-network.dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000221020), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-cee67ba5b3", "pod":"coredns-668d6bf9bc-9tvnm", "timestamp":"2025-04-30 00:37:08.362150668 +0000 UTC"}, Hostname:"ci-4081.3.3-a-cee67ba5b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.384 [INFO][4497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.406 [INFO][4497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.406 [INFO][4497] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-cee67ba5b3' Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.461 [INFO][4497] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.468 [INFO][4497] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.473 [INFO][4497] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.475 [INFO][4497] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.478 [INFO][4497] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.478 [INFO][4497] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.480 [INFO][4497] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2 Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.488 [INFO][4497] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.502 [INFO][4497] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.69.2/26] block=192.168.69.0/26 handle="k8s-pod-network.dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.502 [INFO][4497] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.2/26] handle="k8s-pod-network.dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.502 [INFO][4497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:08.535985 containerd[1728]: 2025-04-30 00:37:08.502 [INFO][4497] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.2/26] IPv6=[] ContainerID="dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" HandleID="k8s-pod-network.dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:08.536557 containerd[1728]: 2025-04-30 00:37:08.505 [INFO][4458] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tvnm" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc30609a-e3b4-47b3-9c7c-bb370013278c", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"", Pod:"coredns-668d6bf9bc-9tvnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cd009deea4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:08.536557 containerd[1728]: 2025-04-30 00:37:08.507 [INFO][4458] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.2/32] ContainerID="dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tvnm" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:08.536557 containerd[1728]: 2025-04-30 00:37:08.507 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cd009deea4 ContainerID="dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tvnm" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:08.536557 containerd[1728]: 2025-04-30 00:37:08.511 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tvnm" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:08.536557 containerd[1728]: 2025-04-30 00:37:08.514 [INFO][4458] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tvnm" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc30609a-e3b4-47b3-9c7c-bb370013278c", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2", Pod:"coredns-668d6bf9bc-9tvnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cd009deea4", MAC:"7e:4e:85:ff:1d:b0", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:08.536557 containerd[1728]: 2025-04-30 00:37:08.533 [INFO][4458] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-9tvnm" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:08.551031 containerd[1728]: time="2025-04-30T00:37:08.550917411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ffdd4748-pq8mf,Uid:c64586bc-5bbf-4e31-87de-887039175c19,Namespace:calico-system,Attempt:1,} returns sandbox id \"c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe\"" Apr 30 00:37:08.554471 containerd[1728]: time="2025-04-30T00:37:08.554162096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 00:37:08.566659 containerd[1728]: time="2025-04-30T00:37:08.566565796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:37:08.566659 containerd[1728]: time="2025-04-30T00:37:08.566625916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:37:08.566946 containerd[1728]: time="2025-04-30T00:37:08.566641636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:08.566946 containerd[1728]: time="2025-04-30T00:37:08.566747716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:08.585435 systemd[1]: Started cri-containerd-dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2.scope - libcontainer container dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2. Apr 30 00:37:08.614438 containerd[1728]: time="2025-04-30T00:37:08.614380393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9tvnm,Uid:dc30609a-e3b4-47b3-9c7c-bb370013278c,Namespace:kube-system,Attempt:1,} returns sandbox id \"dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2\"" Apr 30 00:37:08.619121 containerd[1728]: time="2025-04-30T00:37:08.618604759Z" level=info msg="CreateContainer within sandbox \"dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:37:08.686110 containerd[1728]: time="2025-04-30T00:37:08.686058428Z" level=info msg="CreateContainer within sandbox \"dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b76856bf243156a1848240cc27e42e58879866def0ee5d5933a7996cd36cce9c\"" Apr 30 00:37:08.687813 containerd[1728]: time="2025-04-30T00:37:08.686663429Z" level=info msg="StartContainer for \"b76856bf243156a1848240cc27e42e58879866def0ee5d5933a7996cd36cce9c\"" Apr 30 00:37:08.713460 systemd[1]: Started cri-containerd-b76856bf243156a1848240cc27e42e58879866def0ee5d5933a7996cd36cce9c.scope - libcontainer container b76856bf243156a1848240cc27e42e58879866def0ee5d5933a7996cd36cce9c. 
Apr 30 00:37:08.753063 containerd[1728]: time="2025-04-30T00:37:08.752907015Z" level=info msg="StartContainer for \"b76856bf243156a1848240cc27e42e58879866def0ee5d5933a7996cd36cce9c\" returns successfully" Apr 30 00:37:09.287657 kubelet[3188]: I0430 00:37:09.287591 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9tvnm" podStartSLOduration=40.287571153 podStartE2EDuration="40.287571153s" podCreationTimestamp="2025-04-30 00:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:37:09.272476089 +0000 UTC m=+46.344549676" watchObservedRunningTime="2025-04-30 00:37:09.287571153 +0000 UTC m=+46.359644740" Apr 30 00:37:09.562303 kernel: bpftool[4773]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 00:37:10.019402 systemd-networkd[1346]: cali323ffda34ca: Gained IPv6LL Apr 30 00:37:10.048787 containerd[1728]: time="2025-04-30T00:37:10.048714335Z" level=info msg="StopPodSandbox for \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\"" Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.102 [INFO][4803] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.103 [INFO][4803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" iface="eth0" netns="/var/run/netns/cni-84ca1da4-860c-d83f-caaf-1c04d91f93cf" Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.103 [INFO][4803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" iface="eth0" netns="/var/run/netns/cni-84ca1da4-860c-d83f-caaf-1c04d91f93cf" Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.103 [INFO][4803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" iface="eth0" netns="/var/run/netns/cni-84ca1da4-860c-d83f-caaf-1c04d91f93cf" Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.103 [INFO][4803] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.103 [INFO][4803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.126 [INFO][4810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" HandleID="k8s-pod-network.75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.127 [INFO][4810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.127 [INFO][4810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.137 [WARNING][4810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" HandleID="k8s-pod-network.75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.137 [INFO][4810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" HandleID="k8s-pod-network.75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.139 [INFO][4810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:10.141999 containerd[1728]: 2025-04-30 00:37:10.140 [INFO][4803] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:10.144377 containerd[1728]: time="2025-04-30T00:37:10.144328848Z" level=info msg="TearDown network for sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\" successfully" Apr 30 00:37:10.144377 containerd[1728]: time="2025-04-30T00:37:10.144368968Z" level=info msg="StopPodSandbox for \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\" returns successfully" Apr 30 00:37:10.145032 containerd[1728]: time="2025-04-30T00:37:10.145000609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f564694cc-g8pzv,Uid:453b9012-9a51-453e-90a8-b4a35c52c8de,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:37:10.145441 systemd[1]: run-netns-cni\x2d84ca1da4\x2d860c\x2dd83f\x2dcaaf\x2d1c04d91f93cf.mount: Deactivated successfully. 
Apr 30 00:37:10.531512 systemd-networkd[1346]: cali0cd009deea4: Gained IPv6LL Apr 30 00:37:10.583048 systemd-networkd[1346]: vxlan.calico: Link UP Apr 30 00:37:10.583057 systemd-networkd[1346]: vxlan.calico: Gained carrier Apr 30 00:37:11.022844 systemd-networkd[1346]: cali660221a5887: Link UP Apr 30 00:37:11.023034 systemd-networkd[1346]: cali660221a5887: Gained carrier Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:10.881 [INFO][4848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0 calico-apiserver-7f564694cc- calico-apiserver 453b9012-9a51-453e-90a8-b4a35c52c8de 776 0 2025-04-30 00:36:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f564694cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-cee67ba5b3 calico-apiserver-7f564694cc-g8pzv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali660221a5887 [] []}} ContainerID="69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-g8pzv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:10.881 [INFO][4848] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-g8pzv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:10.921 [INFO][4860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" HandleID="k8s-pod-network.69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:10.933 [INFO][4860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" HandleID="k8s-pod-network.69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332b50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-cee67ba5b3", "pod":"calico-apiserver-7f564694cc-g8pzv", "timestamp":"2025-04-30 00:37:10.921172735 +0000 UTC"}, Hostname:"ci-4081.3.3-a-cee67ba5b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:10.977 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:10.978 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:10.978 [INFO][4860] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-cee67ba5b3' Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:10.984 [INFO][4860] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:10.988 [INFO][4860] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:10.995 [INFO][4860] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:10.997 [INFO][4860] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:11.000 [INFO][4860] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:11.000 [INFO][4860] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:11.003 [INFO][4860] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471 Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:11.009 [INFO][4860] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:11.018 [INFO][4860] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.69.3/26] block=192.168.69.0/26 handle="k8s-pod-network.69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:11.018 [INFO][4860] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.3/26] handle="k8s-pod-network.69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:11.018 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:11.049453 containerd[1728]: 2025-04-30 00:37:11.018 [INFO][4860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.3/26] IPv6=[] ContainerID="69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" HandleID="k8s-pod-network.69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:11.050185 containerd[1728]: 2025-04-30 00:37:11.020 [INFO][4848] cni-plugin/k8s.go 386: Populated endpoint ContainerID="69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-g8pzv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0", GenerateName:"calico-apiserver-7f564694cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"453b9012-9a51-453e-90a8-b4a35c52c8de", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7f564694cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"", Pod:"calico-apiserver-7f564694cc-g8pzv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali660221a5887", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:11.050185 containerd[1728]: 2025-04-30 00:37:11.020 [INFO][4848] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.3/32] ContainerID="69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-g8pzv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:11.050185 containerd[1728]: 2025-04-30 00:37:11.020 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali660221a5887 ContainerID="69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-g8pzv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:11.050185 containerd[1728]: 2025-04-30 00:37:11.023 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-g8pzv" 
WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:11.050185 containerd[1728]: 2025-04-30 00:37:11.024 [INFO][4848] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-g8pzv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0", GenerateName:"calico-apiserver-7f564694cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"453b9012-9a51-453e-90a8-b4a35c52c8de", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f564694cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471", Pod:"calico-apiserver-7f564694cc-g8pzv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali660221a5887", MAC:"d6:d6:32:5d:c5:8c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:11.050185 containerd[1728]: 2025-04-30 00:37:11.046 [INFO][4848] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-g8pzv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:11.158562 containerd[1728]: time="2025-04-30T00:37:11.158441436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:37:11.158562 containerd[1728]: time="2025-04-30T00:37:11.158508796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:37:11.158562 containerd[1728]: time="2025-04-30T00:37:11.158519076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:11.158866 containerd[1728]: time="2025-04-30T00:37:11.158712516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:11.192532 systemd[1]: Started cri-containerd-69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471.scope - libcontainer container 69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471. 
Apr 30 00:37:11.223632 containerd[1728]: time="2025-04-30T00:37:11.223566980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f564694cc-g8pzv,Uid:453b9012-9a51-453e-90a8-b4a35c52c8de,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471\"" Apr 30 00:37:11.747421 systemd-networkd[1346]: vxlan.calico: Gained IPv6LL Apr 30 00:37:12.725801 containerd[1728]: time="2025-04-30T00:37:12.725754071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:12.731692 containerd[1728]: time="2025-04-30T00:37:12.731647880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" Apr 30 00:37:12.738294 containerd[1728]: time="2025-04-30T00:37:12.738220531Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:12.749630 containerd[1728]: time="2025-04-30T00:37:12.749555669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:12.750533 containerd[1728]: time="2025-04-30T00:37:12.750389510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 4.196186334s" Apr 30 00:37:12.750533 containerd[1728]: time="2025-04-30T00:37:12.750426670Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" Apr 30 00:37:12.766333 containerd[1728]: time="2025-04-30T00:37:12.766290016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:37:12.772683 systemd-networkd[1346]: cali660221a5887: Gained IPv6LL Apr 30 00:37:12.786091 containerd[1728]: time="2025-04-30T00:37:12.785636967Z" level=info msg="CreateContainer within sandbox \"c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 00:37:12.830560 containerd[1728]: time="2025-04-30T00:37:12.830506759Z" level=info msg="CreateContainer within sandbox \"c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b616e458069097d19585678b1973dd90424df3da97ca5c87d69a0ec3fe2e877c\"" Apr 30 00:37:12.833494 containerd[1728]: time="2025-04-30T00:37:12.833424444Z" level=info msg="StartContainer for \"b616e458069097d19585678b1973dd90424df3da97ca5c87d69a0ec3fe2e877c\"" Apr 30 00:37:12.867476 systemd[1]: Started cri-containerd-b616e458069097d19585678b1973dd90424df3da97ca5c87d69a0ec3fe2e877c.scope - libcontainer container b616e458069097d19585678b1973dd90424df3da97ca5c87d69a0ec3fe2e877c. 
Apr 30 00:37:12.934647 containerd[1728]: time="2025-04-30T00:37:12.934589246Z" level=info msg="StartContainer for \"b616e458069097d19585678b1973dd90424df3da97ca5c87d69a0ec3fe2e877c\" returns successfully" Apr 30 00:37:14.344125 kubelet[3188]: I0430 00:37:14.344025 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6ffdd4748-pq8mf" podStartSLOduration=30.138863319 podStartE2EDuration="34.344004868s" podCreationTimestamp="2025-04-30 00:36:40 +0000 UTC" firstStartedPulling="2025-04-30 00:37:08.552426253 +0000 UTC m=+45.624499840" lastFinishedPulling="2025-04-30 00:37:12.757567802 +0000 UTC m=+49.829641389" observedRunningTime="2025-04-30 00:37:13.304195239 +0000 UTC m=+50.376268826" watchObservedRunningTime="2025-04-30 00:37:14.344004868 +0000 UTC m=+51.416078455" Apr 30 00:37:15.919131 containerd[1728]: time="2025-04-30T00:37:15.918346382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:15.923179 containerd[1728]: time="2025-04-30T00:37:15.923134150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" Apr 30 00:37:15.927755 containerd[1728]: time="2025-04-30T00:37:15.927695836Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:15.933801 containerd[1728]: time="2025-04-30T00:37:15.933749406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:15.934827 containerd[1728]: time="2025-04-30T00:37:15.934694687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id 
\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 3.168360311s" Apr 30 00:37:15.934827 containerd[1728]: time="2025-04-30T00:37:15.934735047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:37:15.937240 containerd[1728]: time="2025-04-30T00:37:15.937200611Z" level=info msg="CreateContainer within sandbox \"69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:37:15.984090 containerd[1728]: time="2025-04-30T00:37:15.984040201Z" level=info msg="CreateContainer within sandbox \"69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b0bafb088fe28cc30b90ea0e1ccb5ec6c2d1a77f1e06970914937955c7449c04\"" Apr 30 00:37:15.984784 containerd[1728]: time="2025-04-30T00:37:15.984755002Z" level=info msg="StartContainer for \"b0bafb088fe28cc30b90ea0e1ccb5ec6c2d1a77f1e06970914937955c7449c04\"" Apr 30 00:37:16.016451 systemd[1]: Started cri-containerd-b0bafb088fe28cc30b90ea0e1ccb5ec6c2d1a77f1e06970914937955c7449c04.scope - libcontainer container b0bafb088fe28cc30b90ea0e1ccb5ec6c2d1a77f1e06970914937955c7449c04. 
Apr 30 00:37:16.052651 containerd[1728]: time="2025-04-30T00:37:16.052598344Z" level=info msg="StartContainer for \"b0bafb088fe28cc30b90ea0e1ccb5ec6c2d1a77f1e06970914937955c7449c04\" returns successfully" Apr 30 00:37:16.295664 kubelet[3188]: I0430 00:37:16.295045 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f564694cc-g8pzv" podStartSLOduration=31.584238761 podStartE2EDuration="36.295024188s" podCreationTimestamp="2025-04-30 00:36:40 +0000 UTC" firstStartedPulling="2025-04-30 00:37:11.224926022 +0000 UTC m=+48.296999609" lastFinishedPulling="2025-04-30 00:37:15.935711449 +0000 UTC m=+53.007785036" observedRunningTime="2025-04-30 00:37:16.293948387 +0000 UTC m=+53.366021974" watchObservedRunningTime="2025-04-30 00:37:16.295024188 +0000 UTC m=+53.367097775" Apr 30 00:37:17.280411 kubelet[3188]: I0430 00:37:17.280375 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:37:18.050229 containerd[1728]: time="2025-04-30T00:37:18.049256384Z" level=info msg="StopPodSandbox for \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\"" Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.104 [INFO][5074] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.105 [INFO][5074] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" iface="eth0" netns="/var/run/netns/cni-524bcce6-e578-c62d-2588-b68e8b80ea87" Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.105 [INFO][5074] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" iface="eth0" netns="/var/run/netns/cni-524bcce6-e578-c62d-2588-b68e8b80ea87" Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.105 [INFO][5074] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" iface="eth0" netns="/var/run/netns/cni-524bcce6-e578-c62d-2588-b68e8b80ea87" Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.105 [INFO][5074] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.105 [INFO][5074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.124 [INFO][5081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" HandleID="k8s-pod-network.541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.124 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.125 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.136 [WARNING][5081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" HandleID="k8s-pod-network.541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.136 [INFO][5081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" HandleID="k8s-pod-network.541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.138 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:18.141151 containerd[1728]: 2025-04-30 00:37:18.139 [INFO][5074] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:18.143412 containerd[1728]: time="2025-04-30T00:37:18.143366165Z" level=info msg="TearDown network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\" successfully" Apr 30 00:37:18.143412 containerd[1728]: time="2025-04-30T00:37:18.143405805Z" level=info msg="StopPodSandbox for \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\" returns successfully" Apr 30 00:37:18.144528 containerd[1728]: time="2025-04-30T00:37:18.144440607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f564694cc-qjjr5,Uid:7031837f-330f-440e-9066-6afc05e792f8,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:37:18.145202 systemd[1]: run-netns-cni\x2d524bcce6\x2de578\x2dc62d\x2d2588\x2db68e8b80ea87.mount: Deactivated successfully. 
Apr 30 00:37:18.400238 systemd-networkd[1346]: cali5d367a3b939: Link UP Apr 30 00:37:18.401610 systemd-networkd[1346]: cali5d367a3b939: Gained carrier Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.325 [INFO][5088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0 calico-apiserver-7f564694cc- calico-apiserver 7031837f-330f-440e-9066-6afc05e792f8 812 0 2025-04-30 00:36:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f564694cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-cee67ba5b3 calico-apiserver-7f564694cc-qjjr5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5d367a3b939 [] []}} ContainerID="6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-qjjr5" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-" Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.326 [INFO][5088] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-qjjr5" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.353 [INFO][5099] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" HandleID="k8s-pod-network.6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 
00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.366 [INFO][5099] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" HandleID="k8s-pod-network.6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332b50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-cee67ba5b3", "pod":"calico-apiserver-7f564694cc-qjjr5", "timestamp":"2025-04-30 00:37:18.353964482 +0000 UTC"}, Hostname:"ci-4081.3.3-a-cee67ba5b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.366 [INFO][5099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.366 [INFO][5099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.366 [INFO][5099] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-cee67ba5b3' Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.368 [INFO][5099] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.371 [INFO][5099] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.375 [INFO][5099] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.377 [INFO][5099] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.380 [INFO][5099] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.380 [INFO][5099] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.381 [INFO][5099] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.385 [INFO][5099] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.394 [INFO][5099] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.69.4/26] block=192.168.69.0/26 handle="k8s-pod-network.6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.394 [INFO][5099] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.4/26] handle="k8s-pod-network.6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.394 [INFO][5099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:18.420026 containerd[1728]: 2025-04-30 00:37:18.394 [INFO][5099] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.4/26] IPv6=[] ContainerID="6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" HandleID="k8s-pod-network.6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:18.424246 containerd[1728]: 2025-04-30 00:37:18.396 [INFO][5088] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-qjjr5" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0", GenerateName:"calico-apiserver-7f564694cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7031837f-330f-440e-9066-6afc05e792f8", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7f564694cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"", Pod:"calico-apiserver-7f564694cc-qjjr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d367a3b939", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:18.424246 containerd[1728]: 2025-04-30 00:37:18.396 [INFO][5088] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.4/32] ContainerID="6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-qjjr5" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:18.424246 containerd[1728]: 2025-04-30 00:37:18.396 [INFO][5088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d367a3b939 ContainerID="6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-qjjr5" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:18.424246 containerd[1728]: 2025-04-30 00:37:18.398 [INFO][5088] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-qjjr5" 
WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:18.424246 containerd[1728]: 2025-04-30 00:37:18.398 [INFO][5088] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-qjjr5" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0", GenerateName:"calico-apiserver-7f564694cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7031837f-330f-440e-9066-6afc05e792f8", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f564694cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f", Pod:"calico-apiserver-7f564694cc-qjjr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d367a3b939", MAC:"a2:6d:f0:53:8e:fd", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:18.424246 containerd[1728]: 2025-04-30 00:37:18.416 [INFO][5088] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f" Namespace="calico-apiserver" Pod="calico-apiserver-7f564694cc-qjjr5" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:18.455998 containerd[1728]: time="2025-04-30T00:37:18.455680394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:37:18.455998 containerd[1728]: time="2025-04-30T00:37:18.455746874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:37:18.455998 containerd[1728]: time="2025-04-30T00:37:18.455761914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:18.455998 containerd[1728]: time="2025-04-30T00:37:18.455845355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:18.484676 systemd[1]: Started cri-containerd-6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f.scope - libcontainer container 6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f. 
Apr 30 00:37:18.515459 containerd[1728]: time="2025-04-30T00:37:18.515346764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f564694cc-qjjr5,Uid:7031837f-330f-440e-9066-6afc05e792f8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f\"" Apr 30 00:37:18.519876 containerd[1728]: time="2025-04-30T00:37:18.519750211Z" level=info msg="CreateContainer within sandbox \"6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:37:18.574282 containerd[1728]: time="2025-04-30T00:37:18.574217452Z" level=info msg="CreateContainer within sandbox \"6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1bc00e7495718e259f9dc2d1b2d7babfb857908c6b95282715e0b075db9bd8bd\"" Apr 30 00:37:18.575400 containerd[1728]: time="2025-04-30T00:37:18.575364814Z" level=info msg="StartContainer for \"1bc00e7495718e259f9dc2d1b2d7babfb857908c6b95282715e0b075db9bd8bd\"" Apr 30 00:37:18.603492 systemd[1]: Started cri-containerd-1bc00e7495718e259f9dc2d1b2d7babfb857908c6b95282715e0b075db9bd8bd.scope - libcontainer container 1bc00e7495718e259f9dc2d1b2d7babfb857908c6b95282715e0b075db9bd8bd. 
Apr 30 00:37:18.640619 containerd[1728]: time="2025-04-30T00:37:18.640571512Z" level=info msg="StartContainer for \"1bc00e7495718e259f9dc2d1b2d7babfb857908c6b95282715e0b075db9bd8bd\" returns successfully" Apr 30 00:37:19.052469 containerd[1728]: time="2025-04-30T00:37:19.051939090Z" level=info msg="StopPodSandbox for \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\"" Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.112 [INFO][5211] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.112 [INFO][5211] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" iface="eth0" netns="/var/run/netns/cni-61941d83-a567-4f4e-f4ea-4870e4c2bdd4" Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.112 [INFO][5211] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" iface="eth0" netns="/var/run/netns/cni-61941d83-a567-4f4e-f4ea-4870e4c2bdd4" Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.112 [INFO][5211] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" iface="eth0" netns="/var/run/netns/cni-61941d83-a567-4f4e-f4ea-4870e4c2bdd4" Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.112 [INFO][5211] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.112 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.153 [INFO][5218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" HandleID="k8s-pod-network.e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.153 [INFO][5218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.153 [INFO][5218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.162 [WARNING][5218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" HandleID="k8s-pod-network.e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.162 [INFO][5218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" HandleID="k8s-pod-network.e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.164 [INFO][5218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:19.166911 containerd[1728]: 2025-04-30 00:37:19.165 [INFO][5211] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:19.170815 systemd[1]: run-netns-cni\x2d61941d83\x2da567\x2d4f4e\x2df4ea\x2d4870e4c2bdd4.mount: Deactivated successfully. 
Apr 30 00:37:19.171306 containerd[1728]: time="2025-04-30T00:37:19.171150909Z" level=info msg="TearDown network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\" successfully" Apr 30 00:37:19.171306 containerd[1728]: time="2025-04-30T00:37:19.171183669Z" level=info msg="StopPodSandbox for \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\" returns successfully" Apr 30 00:37:19.172152 containerd[1728]: time="2025-04-30T00:37:19.171823510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srbdg,Uid:7decfb7d-0a1b-482d-a161-616634f85838,Namespace:calico-system,Attempt:1,}" Apr 30 00:37:19.407472 systemd-networkd[1346]: cali5bba1c85f9b: Link UP Apr 30 00:37:19.408192 systemd-networkd[1346]: cali5bba1c85f9b: Gained carrier Apr 30 00:37:19.429341 kubelet[3188]: I0430 00:37:19.429253 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f564694cc-qjjr5" podStartSLOduration=39.429233177 podStartE2EDuration="39.429233177s" podCreationTimestamp="2025-04-30 00:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:37:19.313496443 +0000 UTC m=+56.385569990" watchObservedRunningTime="2025-04-30 00:37:19.429233177 +0000 UTC m=+56.501306764" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.272 [INFO][5225] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0 csi-node-driver- calico-system 7decfb7d-0a1b-482d-a161-616634f85838 822 0 2025-04-30 00:36:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-a-cee67ba5b3 csi-node-driver-srbdg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5bba1c85f9b [] []}} ContainerID="2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" Namespace="calico-system" Pod="csi-node-driver-srbdg" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.273 [INFO][5225] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" Namespace="calico-system" Pod="csi-node-driver-srbdg" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.311 [INFO][5236] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" HandleID="k8s-pod-network.2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.334 [INFO][5236] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" HandleID="k8s-pod-network.2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dee0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-cee67ba5b3", "pod":"csi-node-driver-srbdg", "timestamp":"2025-04-30 00:37:19.31176 +0000 UTC"}, Hostname:"ci-4081.3.3-a-cee67ba5b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.335 [INFO][5236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.335 [INFO][5236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.335 [INFO][5236] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-cee67ba5b3' Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.337 [INFO][5236] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.354 [INFO][5236] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.361 [INFO][5236] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.368 [INFO][5236] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.372 [INFO][5236] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.372 [INFO][5236] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.380 [INFO][5236] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f Apr 30 00:37:19.430839 
containerd[1728]: 2025-04-30 00:37:19.390 [INFO][5236] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.399 [INFO][5236] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.69.5/26] block=192.168.69.0/26 handle="k8s-pod-network.2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.399 [INFO][5236] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.5/26] handle="k8s-pod-network.2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.399 [INFO][5236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:19.430839 containerd[1728]: 2025-04-30 00:37:19.399 [INFO][5236] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.5/26] IPv6=[] ContainerID="2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" HandleID="k8s-pod-network.2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:19.431870 containerd[1728]: 2025-04-30 00:37:19.402 [INFO][5225] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" Namespace="calico-system" Pod="csi-node-driver-srbdg" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"7decfb7d-0a1b-482d-a161-616634f85838", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"", Pod:"csi-node-driver-srbdg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5bba1c85f9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:19.431870 containerd[1728]: 2025-04-30 00:37:19.403 [INFO][5225] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.5/32] ContainerID="2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" Namespace="calico-system" Pod="csi-node-driver-srbdg" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:19.431870 containerd[1728]: 2025-04-30 00:37:19.403 [INFO][5225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bba1c85f9b ContainerID="2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" Namespace="calico-system" Pod="csi-node-driver-srbdg" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:19.431870 containerd[1728]: 2025-04-30 00:37:19.408 
[INFO][5225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" Namespace="calico-system" Pod="csi-node-driver-srbdg" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:19.431870 containerd[1728]: 2025-04-30 00:37:19.409 [INFO][5225] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" Namespace="calico-system" Pod="csi-node-driver-srbdg" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7decfb7d-0a1b-482d-a161-616634f85838", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f", Pod:"csi-node-driver-srbdg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5bba1c85f9b", MAC:"4e:8d:0b:76:43:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:19.431870 containerd[1728]: 2025-04-30 00:37:19.426 [INFO][5225] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f" Namespace="calico-system" Pod="csi-node-driver-srbdg" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:19.460116 containerd[1728]: time="2025-04-30T00:37:19.459675183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:37:19.460116 containerd[1728]: time="2025-04-30T00:37:19.460061583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:37:19.460116 containerd[1728]: time="2025-04-30T00:37:19.460074503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:19.461470 containerd[1728]: time="2025-04-30T00:37:19.460167743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:19.491557 systemd[1]: Started cri-containerd-2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f.scope - libcontainer container 2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f. 
Apr 30 00:37:19.527482 containerd[1728]: time="2025-04-30T00:37:19.527376444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srbdg,Uid:7decfb7d-0a1b-482d-a161-616634f85838,Namespace:calico-system,Attempt:1,} returns sandbox id \"2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f\"" Apr 30 00:37:19.529540 containerd[1728]: time="2025-04-30T00:37:19.529508208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 00:37:19.747561 systemd-networkd[1346]: cali5d367a3b939: Gained IPv6LL Apr 30 00:37:20.049929 containerd[1728]: time="2025-04-30T00:37:20.049651709Z" level=info msg="StopPodSandbox for \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\"" Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.103 [INFO][5312] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.104 [INFO][5312] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" iface="eth0" netns="/var/run/netns/cni-71fab8a2-bbdd-f1a6-4b48-a5f7ff404beb" Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.104 [INFO][5312] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" iface="eth0" netns="/var/run/netns/cni-71fab8a2-bbdd-f1a6-4b48-a5f7ff404beb" Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.104 [INFO][5312] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" iface="eth0" netns="/var/run/netns/cni-71fab8a2-bbdd-f1a6-4b48-a5f7ff404beb" Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.105 [INFO][5312] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.105 [INFO][5312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.131 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" HandleID="k8s-pod-network.010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.131 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.132 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.142 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" HandleID="k8s-pod-network.010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.142 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" HandleID="k8s-pod-network.010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.145 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:20.149380 containerd[1728]: 2025-04-30 00:37:20.148 [INFO][5312] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:20.150036 containerd[1728]: time="2025-04-30T00:37:20.149390659Z" level=info msg="TearDown network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\" successfully" Apr 30 00:37:20.150036 containerd[1728]: time="2025-04-30T00:37:20.149421139Z" level=info msg="StopPodSandbox for \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\" returns successfully" Apr 30 00:37:20.153400 containerd[1728]: time="2025-04-30T00:37:20.152477464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5pklv,Uid:0e5b089a-7bcf-4e20-a4b2-4262bc369176,Namespace:kube-system,Attempt:1,}" Apr 30 00:37:20.155609 systemd[1]: run-netns-cni\x2d71fab8a2\x2dbbdd\x2df1a6\x2d4b48\x2da5f7ff404beb.mount: Deactivated successfully. 
Apr 30 00:37:20.298729 kubelet[3188]: I0430 00:37:20.298679 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:37:20.312919 systemd-networkd[1346]: cali22d0bcaa90a: Link UP Apr 30 00:37:20.314043 systemd-networkd[1346]: cali22d0bcaa90a: Gained carrier Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.236 [INFO][5326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0 coredns-668d6bf9bc- kube-system 0e5b089a-7bcf-4e20-a4b2-4262bc369176 832 0 2025-04-30 00:36:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-cee67ba5b3 coredns-668d6bf9bc-5pklv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali22d0bcaa90a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" Namespace="kube-system" Pod="coredns-668d6bf9bc-5pklv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-" Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.236 [INFO][5326] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" Namespace="kube-system" Pod="coredns-668d6bf9bc-5pklv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.263 [INFO][5337] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" HandleID="k8s-pod-network.d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 
00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.274 [INFO][5337] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" HandleID="k8s-pod-network.d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319530), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-cee67ba5b3", "pod":"coredns-668d6bf9bc-5pklv", "timestamp":"2025-04-30 00:37:20.26304683 +0000 UTC"}, Hostname:"ci-4081.3.3-a-cee67ba5b3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.274 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.274 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.274 [INFO][5337] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-cee67ba5b3' Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.276 [INFO][5337] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.280 [INFO][5337] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.284 [INFO][5337] ipam/ipam.go 489: Trying affinity for 192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.286 [INFO][5337] ipam/ipam.go 155: Attempting to load block cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.288 [INFO][5337] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.288 [INFO][5337] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.289 [INFO][5337] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996 Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.295 [INFO][5337] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.308 [INFO][5337] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.69.6/26] block=192.168.69.0/26 handle="k8s-pod-network.d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.308 [INFO][5337] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.69.6/26] handle="k8s-pod-network.d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" host="ci-4081.3.3-a-cee67ba5b3" Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.308 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:20.331427 containerd[1728]: 2025-04-30 00:37:20.308 [INFO][5337] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.6/26] IPv6=[] ContainerID="d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" HandleID="k8s-pod-network.d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:20.331960 containerd[1728]: 2025-04-30 00:37:20.310 [INFO][5326] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" Namespace="kube-system" Pod="coredns-668d6bf9bc-5pklv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0e5b089a-7bcf-4e20-a4b2-4262bc369176", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"", Pod:"coredns-668d6bf9bc-5pklv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22d0bcaa90a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:20.331960 containerd[1728]: 2025-04-30 00:37:20.310 [INFO][5326] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.69.6/32] ContainerID="d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" Namespace="kube-system" Pod="coredns-668d6bf9bc-5pklv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:20.331960 containerd[1728]: 2025-04-30 00:37:20.310 [INFO][5326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22d0bcaa90a ContainerID="d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" Namespace="kube-system" Pod="coredns-668d6bf9bc-5pklv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:20.331960 containerd[1728]: 2025-04-30 00:37:20.312 [INFO][5326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" Namespace="kube-system" Pod="coredns-668d6bf9bc-5pklv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:20.331960 containerd[1728]: 2025-04-30 00:37:20.313 [INFO][5326] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" Namespace="kube-system" Pod="coredns-668d6bf9bc-5pklv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0e5b089a-7bcf-4e20-a4b2-4262bc369176", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996", Pod:"coredns-668d6bf9bc-5pklv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22d0bcaa90a", MAC:"32:cb:64:1a:cb:f5", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:20.331960 containerd[1728]: 2025-04-30 00:37:20.329 [INFO][5326] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996" Namespace="kube-system" Pod="coredns-668d6bf9bc-5pklv" WorkloadEndpoint="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:20.355475 containerd[1728]: time="2025-04-30T00:37:20.355364608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:37:20.355475 containerd[1728]: time="2025-04-30T00:37:20.355429008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:37:20.355475 containerd[1728]: time="2025-04-30T00:37:20.355443568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:20.355767 containerd[1728]: time="2025-04-30T00:37:20.355523369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:37:20.377453 systemd[1]: Started cri-containerd-d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996.scope - libcontainer container d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996. 
Apr 30 00:37:20.408078 containerd[1728]: time="2025-04-30T00:37:20.407739167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5pklv,Uid:0e5b089a-7bcf-4e20-a4b2-4262bc369176,Namespace:kube-system,Attempt:1,} returns sandbox id \"d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996\"" Apr 30 00:37:20.414670 containerd[1728]: time="2025-04-30T00:37:20.414340377Z" level=info msg="CreateContainer within sandbox \"d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:37:20.474985 containerd[1728]: time="2025-04-30T00:37:20.474940828Z" level=info msg="CreateContainer within sandbox \"d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13fa9294e1a9457bc3caed435869bd8bdde392950a09d025eb5a0b27893f71f7\"" Apr 30 00:37:20.476705 containerd[1728]: time="2025-04-30T00:37:20.475887829Z" level=info msg="StartContainer for \"13fa9294e1a9457bc3caed435869bd8bdde392950a09d025eb5a0b27893f71f7\"" Apr 30 00:37:20.502475 systemd[1]: Started cri-containerd-13fa9294e1a9457bc3caed435869bd8bdde392950a09d025eb5a0b27893f71f7.scope - libcontainer container 13fa9294e1a9457bc3caed435869bd8bdde392950a09d025eb5a0b27893f71f7. 
Apr 30 00:37:20.537691 containerd[1728]: time="2025-04-30T00:37:20.537562962Z" level=info msg="StartContainer for \"13fa9294e1a9457bc3caed435869bd8bdde392950a09d025eb5a0b27893f71f7\" returns successfully" Apr 30 00:37:20.643446 systemd-networkd[1346]: cali5bba1c85f9b: Gained IPv6LL Apr 30 00:37:21.304384 containerd[1728]: time="2025-04-30T00:37:21.304228474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:21.306922 containerd[1728]: time="2025-04-30T00:37:21.306880798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" Apr 30 00:37:21.311236 containerd[1728]: time="2025-04-30T00:37:21.311183484Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:21.325670 kubelet[3188]: I0430 00:37:21.325560 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5pklv" podStartSLOduration=52.325539986 podStartE2EDuration="52.325539986s" podCreationTimestamp="2025-04-30 00:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:37:21.323496183 +0000 UTC m=+58.395569770" watchObservedRunningTime="2025-04-30 00:37:21.325539986 +0000 UTC m=+58.397613533" Apr 30 00:37:21.327000 containerd[1728]: time="2025-04-30T00:37:21.326121387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.796576899s" Apr 30 00:37:21.327000 containerd[1728]: 
time="2025-04-30T00:37:21.326162427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" Apr 30 00:37:21.332060 containerd[1728]: time="2025-04-30T00:37:21.331799835Z" level=info msg="CreateContainer within sandbox \"2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 00:37:21.337119 containerd[1728]: time="2025-04-30T00:37:21.337025363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:21.398848 containerd[1728]: time="2025-04-30T00:37:21.398802056Z" level=info msg="CreateContainer within sandbox \"2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c601b3d0225f8f90b5308a803f572da8451cd3f7fde11f89cae9daab4af9be9a\"" Apr 30 00:37:21.403695 containerd[1728]: time="2025-04-30T00:37:21.402790902Z" level=info msg="StartContainer for \"c601b3d0225f8f90b5308a803f572da8451cd3f7fde11f89cae9daab4af9be9a\"" Apr 30 00:37:21.449513 systemd[1]: Started cri-containerd-c601b3d0225f8f90b5308a803f572da8451cd3f7fde11f89cae9daab4af9be9a.scope - libcontainer container c601b3d0225f8f90b5308a803f572da8451cd3f7fde11f89cae9daab4af9be9a. 
Apr 30 00:37:21.481359 containerd[1728]: time="2025-04-30T00:37:21.481304900Z" level=info msg="StartContainer for \"c601b3d0225f8f90b5308a803f572da8451cd3f7fde11f89cae9daab4af9be9a\" returns successfully" Apr 30 00:37:21.482617 containerd[1728]: time="2025-04-30T00:37:21.482580582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 00:37:21.667460 systemd-networkd[1346]: cali22d0bcaa90a: Gained IPv6LL Apr 30 00:37:23.096805 containerd[1728]: time="2025-04-30T00:37:23.096759087Z" level=info msg="StopPodSandbox for \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\"" Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.161 [WARNING][5504] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0", GenerateName:"calico-kube-controllers-6ffdd4748-", Namespace:"calico-system", SelfLink:"", UID:"c64586bc-5bbf-4e31-87de-887039175c19", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ffdd4748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", 
ContainerID:"c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe", Pod:"calico-kube-controllers-6ffdd4748-pq8mf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali323ffda34ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.162 [INFO][5504] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.162 [INFO][5504] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" iface="eth0" netns="" Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.162 [INFO][5504] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.162 [INFO][5504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.190 [INFO][5512] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" HandleID="k8s-pod-network.4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.190 [INFO][5512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.190 [INFO][5512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.200 [WARNING][5512] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" HandleID="k8s-pod-network.4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.201 [INFO][5512] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" HandleID="k8s-pod-network.4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.203 [INFO][5512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:23.206805 containerd[1728]: 2025-04-30 00:37:23.205 [INFO][5504] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:23.207860 containerd[1728]: time="2025-04-30T00:37:23.207356373Z" level=info msg="TearDown network for sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\" successfully" Apr 30 00:37:23.207860 containerd[1728]: time="2025-04-30T00:37:23.207394653Z" level=info msg="StopPodSandbox for \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\" returns successfully" Apr 30 00:37:23.208012 containerd[1728]: time="2025-04-30T00:37:23.207979094Z" level=info msg="RemovePodSandbox for \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\"" Apr 30 00:37:23.208056 containerd[1728]: time="2025-04-30T00:37:23.208014574Z" level=info msg="Forcibly stopping sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\"" Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.253 [WARNING][5531] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0", GenerateName:"calico-kube-controllers-6ffdd4748-", Namespace:"calico-system", SelfLink:"", UID:"c64586bc-5bbf-4e31-87de-887039175c19", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ffdd4748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"c284f240a00f5ec26c1d3037493bf577d1f88d104e4863b9fea2553a230512fe", Pod:"calico-kube-controllers-6ffdd4748-pq8mf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali323ffda34ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.254 [INFO][5531] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.254 [INFO][5531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" iface="eth0" netns="" Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.254 [INFO][5531] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.254 [INFO][5531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.277 [INFO][5539] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" HandleID="k8s-pod-network.4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.277 [INFO][5539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.277 [INFO][5539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.287 [WARNING][5539] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" HandleID="k8s-pod-network.4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.287 [INFO][5539] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" HandleID="k8s-pod-network.4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--kube--controllers--6ffdd4748--pq8mf-eth0" Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.289 [INFO][5539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:23.292477 containerd[1728]: 2025-04-30 00:37:23.291 [INFO][5531] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700" Apr 30 00:37:23.292890 containerd[1728]: time="2025-04-30T00:37:23.292544581Z" level=info msg="TearDown network for sandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\" successfully" Apr 30 00:37:23.298105 containerd[1728]: time="2025-04-30T00:37:23.298057069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:23.315234 containerd[1728]: time="2025-04-30T00:37:23.314431654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:37:23.315955 containerd[1728]: time="2025-04-30T00:37:23.315915576Z" level=info msg="RemovePodSandbox \"4f8d96de24d64d3a5268d9cff0171a8c7fde4483c3353bb36211c259faf4d700\" returns successfully" Apr 30 00:37:23.316794 containerd[1728]: time="2025-04-30T00:37:23.316763257Z" level=info msg="StopPodSandbox for \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\"" Apr 30 00:37:23.319817 containerd[1728]: time="2025-04-30T00:37:23.319741102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" Apr 30 00:37:23.325285 containerd[1728]: time="2025-04-30T00:37:23.323809708Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:23.334750 containerd[1728]: time="2025-04-30T00:37:23.334700164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:37:23.335789 containerd[1728]: time="2025-04-30T00:37:23.335663566Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.853043784s" Apr 30 00:37:23.335789 containerd[1728]: time="2025-04-30T00:37:23.335700446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" Apr 30 00:37:23.340027 containerd[1728]: time="2025-04-30T00:37:23.339990852Z" level=info 
msg="CreateContainer within sandbox \"2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 00:37:23.385781 containerd[1728]: time="2025-04-30T00:37:23.385606201Z" level=info msg="CreateContainer within sandbox \"2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"32543486a161c7602f510626651e46b09217da1fd4be20433b206d0e9773e1e3\"" Apr 30 00:37:23.391634 containerd[1728]: time="2025-04-30T00:37:23.390426848Z" level=info msg="StartContainer for \"32543486a161c7602f510626651e46b09217da1fd4be20433b206d0e9773e1e3\"" Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.372 [WARNING][5562] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7decfb7d-0a1b-482d-a161-616634f85838", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f", Pod:"csi-node-driver-srbdg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5bba1c85f9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.373 [INFO][5562] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.374 [INFO][5562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" iface="eth0" netns="" Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.374 [INFO][5562] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.374 [INFO][5562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.397 [INFO][5570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" HandleID="k8s-pod-network.e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.398 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.398 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.425 [WARNING][5570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" HandleID="k8s-pod-network.e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.425 [INFO][5570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" HandleID="k8s-pod-network.e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.428 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:23.432434 containerd[1728]: 2025-04-30 00:37:23.430 [INFO][5562] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:23.432434 containerd[1728]: time="2025-04-30T00:37:23.432098671Z" level=info msg="TearDown network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\" successfully" Apr 30 00:37:23.432434 containerd[1728]: time="2025-04-30T00:37:23.432124071Z" level=info msg="StopPodSandbox for \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\" returns successfully" Apr 30 00:37:23.433115 containerd[1728]: time="2025-04-30T00:37:23.432990792Z" level=info msg="RemovePodSandbox for \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\"" Apr 30 00:37:23.433115 containerd[1728]: time="2025-04-30T00:37:23.433019832Z" level=info msg="Forcibly stopping sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\"" Apr 30 00:37:23.436424 systemd[1]: Started cri-containerd-32543486a161c7602f510626651e46b09217da1fd4be20433b206d0e9773e1e3.scope - libcontainer container 32543486a161c7602f510626651e46b09217da1fd4be20433b206d0e9773e1e3. Apr 30 00:37:23.488253 containerd[1728]: time="2025-04-30T00:37:23.488094275Z" level=info msg="StartContainer for \"32543486a161c7602f510626651e46b09217da1fd4be20433b206d0e9773e1e3\" returns successfully" Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.507 [WARNING][5613] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7decfb7d-0a1b-482d-a161-616634f85838", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"2a287b652f67d49f35aede4f2c09ae1664bf532d126d1cb46b88c8a353595d3f", Pod:"csi-node-driver-srbdg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5bba1c85f9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.508 [INFO][5613] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.508 [INFO][5613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" iface="eth0" netns="" Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.508 [INFO][5613] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.508 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.527 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" HandleID="k8s-pod-network.e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.528 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.528 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.537 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" HandleID="k8s-pod-network.e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.537 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" HandleID="k8s-pod-network.e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-csi--node--driver--srbdg-eth0" Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.538 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:23.541632 containerd[1728]: 2025-04-30 00:37:23.540 [INFO][5613] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f" Apr 30 00:37:23.543249 containerd[1728]: time="2025-04-30T00:37:23.542099676Z" level=info msg="TearDown network for sandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\" successfully" Apr 30 00:37:23.550825 containerd[1728]: time="2025-04-30T00:37:23.550747929Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:37:23.550825 containerd[1728]: time="2025-04-30T00:37:23.550821969Z" level=info msg="RemovePodSandbox \"e64cdcb753a0e7a0ddf6405f4f3d381d464997e3810960bc0e5b09112a7aa74f\" returns successfully" Apr 30 00:37:23.551838 containerd[1728]: time="2025-04-30T00:37:23.551548010Z" level=info msg="StopPodSandbox for \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\"" Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.589 [WARNING][5650] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0e5b089a-7bcf-4e20-a4b2-4262bc369176", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996", Pod:"coredns-668d6bf9bc-5pklv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22d0bcaa90a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.589 [INFO][5650] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.589 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" iface="eth0" netns="" Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.589 [INFO][5650] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.589 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.610 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" HandleID="k8s-pod-network.010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.610 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.610 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.619 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" HandleID="k8s-pod-network.010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.619 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" HandleID="k8s-pod-network.010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.622 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:23.625939 containerd[1728]: 2025-04-30 00:37:23.624 [INFO][5650] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:23.626603 containerd[1728]: time="2025-04-30T00:37:23.626079482Z" level=info msg="TearDown network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\" successfully" Apr 30 00:37:23.626603 containerd[1728]: time="2025-04-30T00:37:23.626112442Z" level=info msg="StopPodSandbox for \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\" returns successfully" Apr 30 00:37:23.627021 containerd[1728]: time="2025-04-30T00:37:23.626947163Z" level=info msg="RemovePodSandbox for \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\"" Apr 30 00:37:23.627021 containerd[1728]: time="2025-04-30T00:37:23.627014683Z" level=info msg="Forcibly stopping sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\"" Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.668 [WARNING][5675] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0e5b089a-7bcf-4e20-a4b2-4262bc369176", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"d5322666f79e4863b909d9f2e083ffccee3d0282731ec3e65c6fcd34aaee5996", Pod:"coredns-668d6bf9bc-5pklv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22d0bcaa90a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.668 [INFO][5675] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.668 [INFO][5675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" iface="eth0" netns="" Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.668 [INFO][5675] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.668 [INFO][5675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.690 [INFO][5683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" HandleID="k8s-pod-network.010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.690 [INFO][5683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.690 [INFO][5683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.699 [WARNING][5683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" HandleID="k8s-pod-network.010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.699 [INFO][5683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" HandleID="k8s-pod-network.010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--5pklv-eth0" Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.700 [INFO][5683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:23.703564 containerd[1728]: 2025-04-30 00:37:23.702 [INFO][5675] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8" Apr 30 00:37:23.704119 containerd[1728]: time="2025-04-30T00:37:23.703585118Z" level=info msg="TearDown network for sandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\" successfully" Apr 30 00:37:23.731419 containerd[1728]: time="2025-04-30T00:37:23.731369480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:37:23.731528 containerd[1728]: time="2025-04-30T00:37:23.731482680Z" level=info msg="RemovePodSandbox \"010ffd651c0a55d099cba8d868ef727eb0ba195561207020a393b916565f4fe8\" returns successfully" Apr 30 00:37:23.732180 containerd[1728]: time="2025-04-30T00:37:23.731931721Z" level=info msg="StopPodSandbox for \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\"" Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.770 [WARNING][5701] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0", GenerateName:"calico-apiserver-7f564694cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"453b9012-9a51-453e-90a8-b4a35c52c8de", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f564694cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471", Pod:"calico-apiserver-7f564694cc-g8pzv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali660221a5887", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.770 [INFO][5701] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.770 [INFO][5701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" iface="eth0" netns="" Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.770 [INFO][5701] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.770 [INFO][5701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.791 [INFO][5709] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" HandleID="k8s-pod-network.75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.793 [INFO][5709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.793 [INFO][5709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.802 [WARNING][5709] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" HandleID="k8s-pod-network.75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.802 [INFO][5709] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" HandleID="k8s-pod-network.75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.804 [INFO][5709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:23.807336 containerd[1728]: 2025-04-30 00:37:23.805 [INFO][5701] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:23.807831 containerd[1728]: time="2025-04-30T00:37:23.807375234Z" level=info msg="TearDown network for sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\" successfully" Apr 30 00:37:23.807831 containerd[1728]: time="2025-04-30T00:37:23.807401274Z" level=info msg="StopPodSandbox for \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\" returns successfully" Apr 30 00:37:23.808014 containerd[1728]: time="2025-04-30T00:37:23.807988155Z" level=info msg="RemovePodSandbox for \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\"" Apr 30 00:37:23.808056 containerd[1728]: time="2025-04-30T00:37:23.808021795Z" level=info msg="Forcibly stopping sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\"" Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.846 [WARNING][5727] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0", GenerateName:"calico-apiserver-7f564694cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"453b9012-9a51-453e-90a8-b4a35c52c8de", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f564694cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"69fdfebd3e4e1e51e1208a2b84c66e776341107cfea337af13bfada883308471", Pod:"calico-apiserver-7f564694cc-g8pzv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali660221a5887", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.846 [INFO][5727] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.846 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" iface="eth0" netns="" Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.846 [INFO][5727] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.846 [INFO][5727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.865 [INFO][5734] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" HandleID="k8s-pod-network.75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.865 [INFO][5734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.865 [INFO][5734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.875 [WARNING][5734] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" HandleID="k8s-pod-network.75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.875 [INFO][5734] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" HandleID="k8s-pod-network.75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--g8pzv-eth0" Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.877 [INFO][5734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:23.880484 containerd[1728]: 2025-04-30 00:37:23.879 [INFO][5727] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13" Apr 30 00:37:23.886539 containerd[1728]: time="2025-04-30T00:37:23.886481075Z" level=info msg="TearDown network for sandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\" successfully" Apr 30 00:37:23.898839 containerd[1728]: time="2025-04-30T00:37:23.898796014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:37:23.898922 containerd[1728]: time="2025-04-30T00:37:23.898879294Z" level=info msg="RemovePodSandbox \"75a29c74af9bacfab4ed824cdddc9f4ea085903b47f1376d378f86901c96cb13\" returns successfully" Apr 30 00:37:23.899415 containerd[1728]: time="2025-04-30T00:37:23.899387575Z" level=info msg="StopPodSandbox for \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\"" Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.935 [WARNING][5752] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc30609a-e3b4-47b3-9c7c-bb370013278c", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2", Pod:"coredns-668d6bf9bc-9tvnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cd009deea4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.935 [INFO][5752] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.935 [INFO][5752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" iface="eth0" netns="" Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.935 [INFO][5752] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.935 [INFO][5752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.954 [INFO][5759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" HandleID="k8s-pod-network.6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.954 [INFO][5759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.954 [INFO][5759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.963 [WARNING][5759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" HandleID="k8s-pod-network.6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.963 [INFO][5759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" HandleID="k8s-pod-network.6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.965 [INFO][5759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:23.968640 containerd[1728]: 2025-04-30 00:37:23.967 [INFO][5752] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:23.968640 containerd[1728]: time="2025-04-30T00:37:23.968624201Z" level=info msg="TearDown network for sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\" successfully" Apr 30 00:37:23.969388 containerd[1728]: time="2025-04-30T00:37:23.968650041Z" level=info msg="StopPodSandbox for \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\" returns successfully" Apr 30 00:37:23.970156 containerd[1728]: time="2025-04-30T00:37:23.969778322Z" level=info msg="RemovePodSandbox for \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\"" Apr 30 00:37:23.970156 containerd[1728]: time="2025-04-30T00:37:23.969814042Z" level=info msg="Forcibly stopping sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\"" Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.010 [WARNING][5777] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc30609a-e3b4-47b3-9c7c-bb370013278c", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"dbcef5264b71c4ec76a9f424aa921b97de7ed60c9ec1238aba153b4fc61a0bf2", Pod:"coredns-668d6bf9bc-9tvnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cd009deea4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.010 [INFO][5777] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.010 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" iface="eth0" netns="" Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.010 [INFO][5777] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.010 [INFO][5777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.030 [INFO][5785] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" HandleID="k8s-pod-network.6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.030 [INFO][5785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.030 [INFO][5785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.041 [WARNING][5785] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" HandleID="k8s-pod-network.6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.041 [INFO][5785] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" HandleID="k8s-pod-network.6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-coredns--668d6bf9bc--9tvnm-eth0" Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.043 [INFO][5785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:24.046277 containerd[1728]: 2025-04-30 00:37:24.044 [INFO][5777] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d" Apr 30 00:37:24.046680 containerd[1728]: time="2025-04-30T00:37:24.046301759Z" level=info msg="TearDown network for sandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\" successfully" Apr 30 00:37:24.055203 containerd[1728]: time="2025-04-30T00:37:24.055141893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:37:24.055422 containerd[1728]: time="2025-04-30T00:37:24.055215973Z" level=info msg="RemovePodSandbox \"6d33e549b21b2a48b0d2b3cde903023093fe73198ff28dfd3f64ba114b26a74d\" returns successfully" Apr 30 00:37:24.055729 containerd[1728]: time="2025-04-30T00:37:24.055702214Z" level=info msg="StopPodSandbox for \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\"" Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.092 [WARNING][5804] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0", GenerateName:"calico-apiserver-7f564694cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7031837f-330f-440e-9066-6afc05e792f8", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f564694cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f", Pod:"calico-apiserver-7f564694cc-qjjr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d367a3b939", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.093 [INFO][5804] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.093 [INFO][5804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" iface="eth0" netns="" Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.093 [INFO][5804] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.093 [INFO][5804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.112 [INFO][5811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" HandleID="k8s-pod-network.541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.112 [INFO][5811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.112 [INFO][5811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.123 [WARNING][5811] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" HandleID="k8s-pod-network.541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.123 [INFO][5811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" HandleID="k8s-pod-network.541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.125 [INFO][5811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:24.130130 containerd[1728]: 2025-04-30 00:37:24.127 [INFO][5804] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:24.131381 containerd[1728]: time="2025-04-30T00:37:24.130251367Z" level=info msg="TearDown network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\" successfully" Apr 30 00:37:24.131381 containerd[1728]: time="2025-04-30T00:37:24.130485608Z" level=info msg="StopPodSandbox for \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\" returns successfully" Apr 30 00:37:24.131381 containerd[1728]: time="2025-04-30T00:37:24.131019289Z" level=info msg="RemovePodSandbox for \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\"" Apr 30 00:37:24.131381 containerd[1728]: time="2025-04-30T00:37:24.131048049Z" level=info msg="Forcibly stopping sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\"" Apr 30 00:37:24.185511 kubelet[3188]: I0430 00:37:24.185315 3188 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 00:37:24.185511 kubelet[3188]: I0430 00:37:24.185359 3188 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.175 [WARNING][5829] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0", GenerateName:"calico-apiserver-7f564694cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7031837f-330f-440e-9066-6afc05e792f8", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f564694cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-cee67ba5b3", ContainerID:"6232633b4405db046853d0c3f4773ff1ff675606de0c44af4301de30c6e5618f", Pod:"calico-apiserver-7f564694cc-qjjr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali5d367a3b939", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.175 [INFO][5829] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.175 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" iface="eth0" netns="" Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.175 [INFO][5829] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.175 [INFO][5829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.202 [INFO][5836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" HandleID="k8s-pod-network.541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.202 [INFO][5836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.202 [INFO][5836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.213 [WARNING][5836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" HandleID="k8s-pod-network.541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.213 [INFO][5836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" HandleID="k8s-pod-network.541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Workload="ci--4081.3.3--a--cee67ba5b3-k8s-calico--apiserver--7f564694cc--qjjr5-eth0" Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.220 [INFO][5836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:37:24.225618 containerd[1728]: 2025-04-30 00:37:24.223 [INFO][5829] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b" Apr 30 00:37:24.225618 containerd[1728]: time="2025-04-30T00:37:24.225424753Z" level=info msg="TearDown network for sandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\" successfully" Apr 30 00:37:24.241604 containerd[1728]: time="2025-04-30T00:37:24.241048897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:37:24.241604 containerd[1728]: time="2025-04-30T00:37:24.241187697Z" level=info msg="RemovePodSandbox \"541a61bbe3cf140d148d133e6f3902fc397a9b9c357042c4dde6f947293d002b\" returns successfully" Apr 30 00:37:38.269681 systemd[1]: run-containerd-runc-k8s.io-e32736b2b62c42ba8fd4813e15defd5f7fea9a458290de6cf189b605d1cecd8f-runc.Zz62Kx.mount: Deactivated successfully. 
Apr 30 00:37:38.330978 kubelet[3188]: I0430 00:37:38.329497 3188 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-srbdg" podStartSLOduration=54.521580555 podStartE2EDuration="58.329480956s" podCreationTimestamp="2025-04-30 00:36:40 +0000 UTC" firstStartedPulling="2025-04-30 00:37:19.529075607 +0000 UTC m=+56.601149194" lastFinishedPulling="2025-04-30 00:37:23.336976008 +0000 UTC m=+60.409049595" observedRunningTime="2025-04-30 00:37:24.346819418 +0000 UTC m=+61.418893005" watchObservedRunningTime="2025-04-30 00:37:38.329480956 +0000 UTC m=+75.401554543" Apr 30 00:37:46.594167 kubelet[3188]: I0430 00:37:46.594061 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:37:48.592870 kubelet[3188]: I0430 00:37:48.590834 3188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:38:03.432903 systemd[1]: run-containerd-runc-k8s.io-b616e458069097d19585678b1973dd90424df3da97ca5c87d69a0ec3fe2e877c-runc.4oi8y8.mount: Deactivated successfully. Apr 30 00:38:14.291806 systemd[1]: run-containerd-runc-k8s.io-b616e458069097d19585678b1973dd90424df3da97ca5c87d69a0ec3fe2e877c-runc.bJrLtY.mount: Deactivated successfully. Apr 30 00:38:14.565025 update_engine[1699]: I20250430 00:38:14.564615 1699 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 00:38:14.565025 update_engine[1699]: I20250430 00:38:14.564687 1699 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 00:38:14.565025 update_engine[1699]: I20250430 00:38:14.564913 1699 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 00:38:14.565684 update_engine[1699]: I20250430 00:38:14.565652 1699 omaha_request_params.cc:62] Current group set to lts Apr 30 00:38:14.565883 update_engine[1699]: I20250430 00:38:14.565758 1699 update_attempter.cc:499] Already updated boot flags. Skipping. 
Apr 30 00:38:14.565883 update_engine[1699]: I20250430 00:38:14.565773 1699 update_attempter.cc:643] Scheduling an action processor start. Apr 30 00:38:14.565883 update_engine[1699]: I20250430 00:38:14.565789 1699 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 00:38:14.566125 locksmithd[1756]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 00:38:14.566739 update_engine[1699]: I20250430 00:38:14.566690 1699 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 00:38:14.566827 update_engine[1699]: I20250430 00:38:14.566792 1699 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 00:38:14.566827 update_engine[1699]: I20250430 00:38:14.566815 1699 omaha_request_action.cc:272] Request: Apr 30 00:38:14.566827 update_engine[1699]: Apr 30 00:38:14.566827 update_engine[1699]: Apr 30 00:38:14.566827 update_engine[1699]: Apr 30 00:38:14.566827 update_engine[1699]: Apr 30 00:38:14.566827 update_engine[1699]: Apr 30 00:38:14.566827 update_engine[1699]: Apr 30 00:38:14.566827 update_engine[1699]: Apr 30 00:38:14.566827 update_engine[1699]: Apr 30 00:38:14.567033 update_engine[1699]: I20250430 00:38:14.566830 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 00:38:14.569179 update_engine[1699]: I20250430 00:38:14.569151 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 00:38:14.569506 update_engine[1699]: I20250430 00:38:14.569479 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 00:38:14.592496 update_engine[1699]: E20250430 00:38:14.592437 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 00:38:14.592620 update_engine[1699]: I20250430 00:38:14.592538 1699 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 00:38:24.526360 update_engine[1699]: I20250430 00:38:24.526294 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 00:38:24.526749 update_engine[1699]: I20250430 00:38:24.526515 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 00:38:24.526777 update_engine[1699]: I20250430 00:38:24.526744 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 00:38:24.575249 update_engine[1699]: E20250430 00:38:24.575186 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 00:38:24.575399 update_engine[1699]: I20250430 00:38:24.575296 1699 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 00:38:34.521120 update_engine[1699]: I20250430 00:38:34.521045 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 00:38:34.521516 update_engine[1699]: I20250430 00:38:34.521301 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 00:38:34.521545 update_engine[1699]: I20250430 00:38:34.521524 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 00:38:34.590192 update_engine[1699]: E20250430 00:38:34.590130 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 00:38:34.590341 update_engine[1699]: I20250430 00:38:34.590220 1699 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 00:38:44.522243 update_engine[1699]: I20250430 00:38:44.522162 1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 00:38:44.522760 update_engine[1699]: I20250430 00:38:44.522432 1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 00:38:44.522760 update_engine[1699]: I20250430 00:38:44.522663 1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 00:38:44.807778 update_engine[1699]: E20250430 00:38:44.807626 1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 00:38:44.807778 update_engine[1699]: I20250430 00:38:44.807720 1699 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 00:38:44.807778 update_engine[1699]: I20250430 00:38:44.807730 1699 omaha_request_action.cc:617] Omaha request response: Apr 30 00:38:44.807940 update_engine[1699]: E20250430 00:38:44.807821 1699 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 00:38:44.807940 update_engine[1699]: I20250430 00:38:44.807839 1699 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 30 00:38:44.807940 update_engine[1699]: I20250430 00:38:44.807845 1699 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 00:38:44.807940 update_engine[1699]: I20250430 00:38:44.807850 1699 update_attempter.cc:306] Processing Done. Apr 30 00:38:44.807940 update_engine[1699]: E20250430 00:38:44.807866 1699 update_attempter.cc:619] Update failed. 
Apr 30 00:38:44.807940 update_engine[1699]: I20250430 00:38:44.807871  1699 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 30 00:38:44.807940 update_engine[1699]: I20250430 00:38:44.807883  1699 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 30 00:38:44.807940 update_engine[1699]: I20250430 00:38:44.807888  1699 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 30 00:38:44.808109 update_engine[1699]: I20250430 00:38:44.807976  1699 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 30 00:38:44.808109 update_engine[1699]: I20250430 00:38:44.808001  1699 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 30 00:38:44.808109 update_engine[1699]: I20250430 00:38:44.808007  1699 omaha_request_action.cc:272] Request:
Apr 30 00:38:44.808109 update_engine[1699]:
Apr 30 00:38:44.808109 update_engine[1699]:
Apr 30 00:38:44.808109 update_engine[1699]:
Apr 30 00:38:44.808109 update_engine[1699]:
Apr 30 00:38:44.808109 update_engine[1699]:
Apr 30 00:38:44.808109 update_engine[1699]:
Apr 30 00:38:44.808109 update_engine[1699]: I20250430 00:38:44.808013  1699 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 00:38:44.808336 update_engine[1699]: I20250430 00:38:44.808168  1699 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 00:38:44.808862 update_engine[1699]: I20250430 00:38:44.808449  1699 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 00:38:44.808934 locksmithd[1756]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 30 00:38:44.818227 update_engine[1699]: E20250430 00:38:44.818172  1699 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 00:38:44.818351 update_engine[1699]: I20250430 00:38:44.818256  1699 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 30 00:38:44.818351 update_engine[1699]: I20250430 00:38:44.818288  1699 omaha_request_action.cc:617] Omaha request response:
Apr 30 00:38:44.818351 update_engine[1699]: I20250430 00:38:44.818295  1699 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 00:38:44.818351 update_engine[1699]: I20250430 00:38:44.818300  1699 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 00:38:44.818351 update_engine[1699]: I20250430 00:38:44.818305  1699 update_attempter.cc:306] Processing Done.
Apr 30 00:38:44.818351 update_engine[1699]: I20250430 00:38:44.818311  1699 update_attempter.cc:310] Error event sent.
Apr 30 00:38:44.818351 update_engine[1699]: I20250430 00:38:44.818321  1699 update_check_scheduler.cc:74] Next update check in 48m27s
Apr 30 00:38:44.818656 locksmithd[1756]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 30 00:39:05.586574 systemd[1]: Started sshd@7-10.200.20.34:22-10.200.16.10:44276.service - OpenSSH per-connection server daemon (10.200.16.10:44276).
Apr 30 00:39:06.073240 sshd[6069]: Accepted publickey for core from 10.200.16.10 port 44276 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:06.077162 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:06.081623 systemd-logind[1694]: New session 10 of user core.
Apr 30 00:39:06.087450 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 00:39:06.505158 sshd[6069]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:06.508376 systemd-logind[1694]: Session 10 logged out. Waiting for processes to exit.
Apr 30 00:39:06.509335 systemd[1]: sshd@7-10.200.20.34:22-10.200.16.10:44276.service: Deactivated successfully.
Apr 30 00:39:06.511481 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 00:39:06.512552 systemd-logind[1694]: Removed session 10.
Apr 30 00:39:11.595577 systemd[1]: Started sshd@8-10.200.20.34:22-10.200.16.10:50168.service - OpenSSH per-connection server daemon (10.200.16.10:50168).
Apr 30 00:39:12.069026 sshd[6102]: Accepted publickey for core from 10.200.16.10 port 50168 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:12.070520 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:12.075165 systemd-logind[1694]: New session 11 of user core.
Apr 30 00:39:12.079466 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 00:39:12.478171 sshd[6102]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:12.482636 systemd-logind[1694]: Session 11 logged out. Waiting for processes to exit.
Apr 30 00:39:12.483761 systemd[1]: sshd@8-10.200.20.34:22-10.200.16.10:50168.service: Deactivated successfully.
Apr 30 00:39:12.485983 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 00:39:12.487216 systemd-logind[1694]: Removed session 11.
Apr 30 00:39:17.566215 systemd[1]: Started sshd@9-10.200.20.34:22-10.200.16.10:50170.service - OpenSSH per-connection server daemon (10.200.16.10:50170).
Apr 30 00:39:18.047811 sshd[6136]: Accepted publickey for core from 10.200.16.10 port 50170 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:18.049505 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:18.053519 systemd-logind[1694]: New session 12 of user core.
Apr 30 00:39:18.059443 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 00:39:18.455012 sshd[6136]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:18.460682 systemd[1]: sshd@9-10.200.20.34:22-10.200.16.10:50170.service: Deactivated successfully.
Apr 30 00:39:18.462529 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 00:39:18.463940 systemd-logind[1694]: Session 12 logged out. Waiting for processes to exit.
Apr 30 00:39:18.465359 systemd-logind[1694]: Removed session 12.
Apr 30 00:39:18.555573 systemd[1]: Started sshd@10-10.200.20.34:22-10.200.16.10:50178.service - OpenSSH per-connection server daemon (10.200.16.10:50178).
Apr 30 00:39:19.032764 sshd[6150]: Accepted publickey for core from 10.200.16.10 port 50178 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:19.034299 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:19.039908 systemd-logind[1694]: New session 13 of user core.
Apr 30 00:39:19.046460 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 00:39:19.481843 sshd[6150]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:19.485729 systemd[1]: sshd@10-10.200.20.34:22-10.200.16.10:50178.service: Deactivated successfully.
Apr 30 00:39:19.488189 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 00:39:19.489583 systemd-logind[1694]: Session 13 logged out. Waiting for processes to exit.
Apr 30 00:39:19.490832 systemd-logind[1694]: Removed session 13.
Apr 30 00:39:19.582549 systemd[1]: Started sshd@11-10.200.20.34:22-10.200.16.10:54712.service - OpenSSH per-connection server daemon (10.200.16.10:54712).
Apr 30 00:39:20.053954 sshd[6161]: Accepted publickey for core from 10.200.16.10 port 54712 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:20.055456 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:20.060853 systemd-logind[1694]: New session 14 of user core.
Apr 30 00:39:20.067457 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 00:39:20.462864 sshd[6161]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:20.466564 systemd[1]: sshd@11-10.200.20.34:22-10.200.16.10:54712.service: Deactivated successfully.
Apr 30 00:39:20.470012 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 00:39:20.470952 systemd-logind[1694]: Session 14 logged out. Waiting for processes to exit.
Apr 30 00:39:20.472160 systemd-logind[1694]: Removed session 14.
Apr 30 00:39:25.545309 systemd[1]: Started sshd@12-10.200.20.34:22-10.200.16.10:54726.service - OpenSSH per-connection server daemon (10.200.16.10:54726).
Apr 30 00:39:25.990958 sshd[6180]: Accepted publickey for core from 10.200.16.10 port 54726 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:25.992510 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:25.996443 systemd-logind[1694]: New session 15 of user core.
Apr 30 00:39:26.005463 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 00:39:26.378577 sshd[6180]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:26.390083 systemd[1]: sshd@12-10.200.20.34:22-10.200.16.10:54726.service: Deactivated successfully.
Apr 30 00:39:26.395522 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 00:39:26.396883 systemd-logind[1694]: Session 15 logged out. Waiting for processes to exit.
Apr 30 00:39:26.398492 systemd-logind[1694]: Removed session 15.
Apr 30 00:39:31.460001 systemd[1]: Started sshd@13-10.200.20.34:22-10.200.16.10:47058.service - OpenSSH per-connection server daemon (10.200.16.10:47058).
Apr 30 00:39:31.909097 sshd[6194]: Accepted publickey for core from 10.200.16.10 port 47058 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:31.910502 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:31.914564 systemd-logind[1694]: New session 16 of user core.
Apr 30 00:39:31.922705 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 00:39:32.297521 sshd[6194]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:32.301252 systemd[1]: sshd@13-10.200.20.34:22-10.200.16.10:47058.service: Deactivated successfully.
Apr 30 00:39:32.303188 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 00:39:32.303935 systemd-logind[1694]: Session 16 logged out. Waiting for processes to exit.
Apr 30 00:39:32.304967 systemd-logind[1694]: Removed session 16.
Apr 30 00:39:37.379476 systemd[1]: Started sshd@14-10.200.20.34:22-10.200.16.10:47062.service - OpenSSH per-connection server daemon (10.200.16.10:47062).
Apr 30 00:39:37.825424 sshd[6207]: Accepted publickey for core from 10.200.16.10 port 47062 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:37.827436 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:37.832041 systemd-logind[1694]: New session 17 of user core.
Apr 30 00:39:37.838444 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 00:39:38.218866 sshd[6207]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:38.222219 systemd[1]: sshd@14-10.200.20.34:22-10.200.16.10:47062.service: Deactivated successfully.
Apr 30 00:39:38.224830 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 00:39:38.230425 systemd-logind[1694]: Session 17 logged out. Waiting for processes to exit.
Apr 30 00:39:38.232980 systemd-logind[1694]: Removed session 17.
Apr 30 00:39:43.304829 systemd[1]: Started sshd@15-10.200.20.34:22-10.200.16.10:46348.service - OpenSSH per-connection server daemon (10.200.16.10:46348).
Apr 30 00:39:43.783878 sshd[6241]: Accepted publickey for core from 10.200.16.10 port 46348 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:43.785606 sshd[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:43.789660 systemd-logind[1694]: New session 18 of user core.
Apr 30 00:39:43.794450 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 00:39:44.194013 sshd[6241]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:44.197217 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 00:39:44.197217 systemd-logind[1694]: Session 18 logged out. Waiting for processes to exit.
Apr 30 00:39:44.199193 systemd[1]: sshd@15-10.200.20.34:22-10.200.16.10:46348.service: Deactivated successfully.
Apr 30 00:39:44.291804 systemd[1]: Started sshd@16-10.200.20.34:22-10.200.16.10:46352.service - OpenSSH per-connection server daemon (10.200.16.10:46352).
Apr 30 00:39:44.767890 sshd[6259]: Accepted publickey for core from 10.200.16.10 port 46352 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:44.769409 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:44.773308 systemd-logind[1694]: New session 19 of user core.
Apr 30 00:39:44.781465 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 00:39:45.271503 sshd[6259]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:45.275748 systemd-logind[1694]: Session 19 logged out. Waiting for processes to exit.
Apr 30 00:39:45.276387 systemd[1]: sshd@16-10.200.20.34:22-10.200.16.10:46352.service: Deactivated successfully.
Apr 30 00:39:45.279074 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 00:39:45.280240 systemd-logind[1694]: Removed session 19.
Apr 30 00:39:45.360568 systemd[1]: Started sshd@17-10.200.20.34:22-10.200.16.10:46360.service - OpenSSH per-connection server daemon (10.200.16.10:46360).
Apr 30 00:39:45.802821 sshd[6285]: Accepted publickey for core from 10.200.16.10 port 46360 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:45.804427 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:45.808478 systemd-logind[1694]: New session 20 of user core.
Apr 30 00:39:45.818452 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 00:39:47.028853 sshd[6285]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:47.032907 systemd[1]: sshd@17-10.200.20.34:22-10.200.16.10:46360.service: Deactivated successfully.
Apr 30 00:39:47.032947 systemd-logind[1694]: Session 20 logged out. Waiting for processes to exit.
Apr 30 00:39:47.035210 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 00:39:47.038118 systemd-logind[1694]: Removed session 20.
Apr 30 00:39:47.125565 systemd[1]: Started sshd@18-10.200.20.34:22-10.200.16.10:46362.service - OpenSSH per-connection server daemon (10.200.16.10:46362).
Apr 30 00:39:47.577232 sshd[6306]: Accepted publickey for core from 10.200.16.10 port 46362 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:47.579079 sshd[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:47.585339 systemd-logind[1694]: New session 21 of user core.
Apr 30 00:39:47.590560 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 00:39:48.081614 sshd[6306]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:48.085475 systemd[1]: sshd@18-10.200.20.34:22-10.200.16.10:46362.service: Deactivated successfully.
Apr 30 00:39:48.087867 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 00:39:48.089911 systemd-logind[1694]: Session 21 logged out. Waiting for processes to exit.
Apr 30 00:39:48.091050 systemd-logind[1694]: Removed session 21.
Apr 30 00:39:48.167696 systemd[1]: Started sshd@19-10.200.20.34:22-10.200.16.10:46378.service - OpenSSH per-connection server daemon (10.200.16.10:46378).
Apr 30 00:39:48.647463 sshd[6320]: Accepted publickey for core from 10.200.16.10 port 46378 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:48.648865 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:48.653955 systemd-logind[1694]: New session 22 of user core.
Apr 30 00:39:48.657430 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 00:39:49.052127 sshd[6320]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:49.055764 systemd[1]: sshd@19-10.200.20.34:22-10.200.16.10:46378.service: Deactivated successfully.
Apr 30 00:39:49.057656 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 00:39:49.058836 systemd-logind[1694]: Session 22 logged out. Waiting for processes to exit.
Apr 30 00:39:49.059689 systemd-logind[1694]: Removed session 22.
Apr 30 00:39:54.145575 systemd[1]: Started sshd@20-10.200.20.34:22-10.200.16.10:34720.service - OpenSSH per-connection server daemon (10.200.16.10:34720).
Apr 30 00:39:54.626116 sshd[6341]: Accepted publickey for core from 10.200.16.10 port 34720 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:39:54.627709 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:39:54.632422 systemd-logind[1694]: New session 23 of user core.
Apr 30 00:39:54.640480 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 00:39:55.038539 sshd[6341]: pam_unix(sshd:session): session closed for user core
Apr 30 00:39:55.042253 systemd[1]: sshd@20-10.200.20.34:22-10.200.16.10:34720.service: Deactivated successfully.
Apr 30 00:39:55.044041 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 00:39:55.044896 systemd-logind[1694]: Session 23 logged out. Waiting for processes to exit.
Apr 30 00:39:55.045783 systemd-logind[1694]: Removed session 23.
Apr 30 00:40:00.125435 systemd[1]: Started sshd@21-10.200.20.34:22-10.200.16.10:34290.service - OpenSSH per-connection server daemon (10.200.16.10:34290).
Apr 30 00:40:00.573431 sshd[6355]: Accepted publickey for core from 10.200.16.10 port 34290 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:40:00.574961 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:40:00.578982 systemd-logind[1694]: New session 24 of user core.
Apr 30 00:40:00.584441 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 00:40:00.963552 sshd[6355]: pam_unix(sshd:session): session closed for user core
Apr 30 00:40:00.967071 systemd-logind[1694]: Session 24 logged out. Waiting for processes to exit.
Apr 30 00:40:00.967805 systemd[1]: sshd@21-10.200.20.34:22-10.200.16.10:34290.service: Deactivated successfully.
Apr 30 00:40:00.969933 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 00:40:00.971170 systemd-logind[1694]: Removed session 24.
Apr 30 00:40:06.055569 systemd[1]: Started sshd@22-10.200.20.34:22-10.200.16.10:34294.service - OpenSSH per-connection server daemon (10.200.16.10:34294).
Apr 30 00:40:06.533643 sshd[6392]: Accepted publickey for core from 10.200.16.10 port 34294 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:40:06.535684 sshd[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:40:06.539856 systemd-logind[1694]: New session 25 of user core.
Apr 30 00:40:06.544499 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 00:40:06.939518 sshd[6392]: pam_unix(sshd:session): session closed for user core
Apr 30 00:40:06.942925 systemd[1]: sshd@22-10.200.20.34:22-10.200.16.10:34294.service: Deactivated successfully.
Apr 30 00:40:06.944846 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 00:40:06.945559 systemd-logind[1694]: Session 25 logged out. Waiting for processes to exit.
Apr 30 00:40:06.946674 systemd-logind[1694]: Removed session 25.
Apr 30 00:40:12.019932 systemd[1]: Started sshd@23-10.200.20.34:22-10.200.16.10:46950.service - OpenSSH per-connection server daemon (10.200.16.10:46950).
Apr 30 00:40:12.468185 sshd[6430]: Accepted publickey for core from 10.200.16.10 port 46950 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:40:12.469611 sshd[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:40:12.474031 systemd-logind[1694]: New session 26 of user core.
Apr 30 00:40:12.479468 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 00:40:12.865501 sshd[6430]: pam_unix(sshd:session): session closed for user core
Apr 30 00:40:12.868256 systemd-logind[1694]: Session 26 logged out. Waiting for processes to exit.
Apr 30 00:40:12.868894 systemd[1]: sshd@23-10.200.20.34:22-10.200.16.10:46950.service: Deactivated successfully.
Apr 30 00:40:12.871781 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 00:40:12.874762 systemd-logind[1694]: Removed session 26.
Apr 30 00:40:17.953556 systemd[1]: Started sshd@24-10.200.20.34:22-10.200.16.10:46954.service - OpenSSH per-connection server daemon (10.200.16.10:46954).
Apr 30 00:40:18.400757 sshd[6466]: Accepted publickey for core from 10.200.16.10 port 46954 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:40:18.402718 sshd[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:40:18.410247 systemd-logind[1694]: New session 27 of user core.
Apr 30 00:40:18.416483 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 00:40:18.788322 sshd[6466]: pam_unix(sshd:session): session closed for user core
Apr 30 00:40:18.794005 systemd[1]: sshd@24-10.200.20.34:22-10.200.16.10:46954.service: Deactivated successfully.
Apr 30 00:40:18.796429 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 00:40:18.798051 systemd-logind[1694]: Session 27 logged out. Waiting for processes to exit.
Apr 30 00:40:18.799605 systemd-logind[1694]: Removed session 27.
Apr 30 00:40:23.886558 systemd[1]: Started sshd@25-10.200.20.34:22-10.200.16.10:44150.service - OpenSSH per-connection server daemon (10.200.16.10:44150).
Apr 30 00:40:24.368118 sshd[6493]: Accepted publickey for core from 10.200.16.10 port 44150 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:40:24.369578 sshd[6493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:40:24.374928 systemd-logind[1694]: New session 28 of user core.
Apr 30 00:40:24.379445 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 30 00:40:24.777964 sshd[6493]: pam_unix(sshd:session): session closed for user core
Apr 30 00:40:24.781448 systemd[1]: sshd@25-10.200.20.34:22-10.200.16.10:44150.service: Deactivated successfully.
Apr 30 00:40:24.783080 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 00:40:24.784591 systemd-logind[1694]: Session 28 logged out. Waiting for processes to exit.
Apr 30 00:40:24.785896 systemd-logind[1694]: Removed session 28.