Apr 30 00:36:13.333656 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 00:36:13.333678 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:36:13.333686 kernel: KASLR enabled
Apr 30 00:36:13.333692 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 30 00:36:13.333699 kernel: printk: bootconsole [pl11] enabled
Apr 30 00:36:13.333704 kernel: efi: EFI v2.7 by EDK II
Apr 30 00:36:13.333711 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Apr 30 00:36:13.333717 kernel: random: crng init done
Apr 30 00:36:13.333723 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:36:13.333729 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Apr 30 00:36:13.333735 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 00:36:13.333741 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 00:36:13.333748 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 30 00:36:13.333755 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 00:36:13.333762 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 00:36:13.333769 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 00:36:13.333775 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 00:36:13.333783 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 00:36:13.333789 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 00:36:13.333795 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 30 00:36:13.333802 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 30 00:36:13.333808 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 30 00:36:13.333814 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Apr 30 00:36:13.333820 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Apr 30 00:36:13.333827 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Apr 30 00:36:13.333833 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Apr 30 00:36:13.333839 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Apr 30 00:36:13.333846 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Apr 30 00:36:13.333853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Apr 30 00:36:13.333859 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Apr 30 00:36:13.333866 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Apr 30 00:36:13.333872 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Apr 30 00:36:13.333878 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Apr 30 00:36:13.333885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Apr 30 00:36:13.333891 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Apr 30 00:36:13.333897 kernel: Zone ranges:
Apr 30 00:36:13.333903 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Apr 30 00:36:13.333909 kernel: DMA32 empty
Apr 30 00:36:13.333916 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Apr 30 00:36:13.333923 kernel: Movable zone start for each node
Apr 30 00:36:13.333933 kernel: Early memory node ranges
Apr 30 00:36:13.333940 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 30 00:36:13.333947 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Apr 30 00:36:13.333954 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Apr 30 00:36:13.333960 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Apr 30 00:36:13.333981 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Apr 30 00:36:13.333988 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Apr 30 00:36:13.333995 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 30 00:36:13.334002 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 30 00:36:13.334009 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 30 00:36:13.334016 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:36:13.334022 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 00:36:13.334029 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:36:13.334036 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 30 00:36:13.334043 kernel: psci: SMC Calling Convention v1.4
Apr 30 00:36:13.334049 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 30 00:36:13.334056 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 30 00:36:13.334065 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:36:13.334071 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:36:13.334078 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 00:36:13.334085 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:36:13.334092 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:36:13.334099 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 00:36:13.334105 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:36:13.334112 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 00:36:13.334119 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 00:36:13.334126 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 00:36:13.334132 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Apr 30 00:36:13.334140 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 00:36:13.334147 kernel: alternatives: applying boot alternatives
Apr 30 00:36:13.334155 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:36:13.334162 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:36:13.336200 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:36:13.336214 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:36:13.336221 kernel: Fallback order for Node 0: 0
Apr 30 00:36:13.336228 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Apr 30 00:36:13.336235 kernel: Policy zone: Normal
Apr 30 00:36:13.336242 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:36:13.336248 kernel: software IO TLB: area num 2.
Apr 30 00:36:13.336261 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Apr 30 00:36:13.336268 kernel: Memory: 3982692K/4194160K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 211468K reserved, 0K cma-reserved)
Apr 30 00:36:13.336275 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 00:36:13.336282 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:36:13.336289 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:36:13.336296 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 00:36:13.336303 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:36:13.336310 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:36:13.336317 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:36:13.336324 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 00:36:13.336330 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:36:13.336338 kernel: GICv3: 960 SPIs implemented
Apr 30 00:36:13.336345 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:36:13.336351 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:36:13.336358 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 00:36:13.336365 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 30 00:36:13.336371 kernel: ITS: No ITS available, not enabling LPIs
Apr 30 00:36:13.336378 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:36:13.336385 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:36:13.336392 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 00:36:13.336398 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 00:36:13.336405 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 00:36:13.336414 kernel: Console: colour dummy device 80x25
Apr 30 00:36:13.336421 kernel: printk: console [tty1] enabled
Apr 30 00:36:13.336427 kernel: ACPI: Core revision 20230628
Apr 30 00:36:13.336434 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 00:36:13.336441 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:36:13.336448 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:36:13.336455 kernel: landlock: Up and running.
Apr 30 00:36:13.336462 kernel: SELinux: Initializing.
Apr 30 00:36:13.336469 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:36:13.336476 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:36:13.336485 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:36:13.336492 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:36:13.336499 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Apr 30 00:36:13.336506 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Apr 30 00:36:13.336513 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 30 00:36:13.336520 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:36:13.336527 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:36:13.336540 kernel: Remapping and enabling EFI services.
Apr 30 00:36:13.336547 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:36:13.336555 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:36:13.336562 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 30 00:36:13.336571 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:36:13.336578 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 00:36:13.336585 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 00:36:13.336592 kernel: SMP: Total of 2 processors activated.
Apr 30 00:36:13.336599 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:36:13.336609 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 30 00:36:13.336616 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 00:36:13.336623 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:36:13.336631 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 00:36:13.336638 kernel: CPU features: detected: LSE atomic instructions
Apr 30 00:36:13.336645 kernel: CPU features: detected: Privileged Access Never
Apr 30 00:36:13.336652 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:36:13.336659 kernel: alternatives: applying system-wide alternatives
Apr 30 00:36:13.336667 kernel: devtmpfs: initialized
Apr 30 00:36:13.336675 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:36:13.336683 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 00:36:13.336690 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:36:13.336697 kernel: SMBIOS 3.1.0 present.
Apr 30 00:36:13.336705 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Apr 30 00:36:13.336712 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:36:13.336719 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:36:13.336727 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:36:13.336734 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:36:13.336743 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:36:13.336750 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Apr 30 00:36:13.336758 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:36:13.336765 kernel: cpuidle: using governor menu
Apr 30 00:36:13.336772 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:36:13.336779 kernel: ASID allocator initialised with 32768 entries
Apr 30 00:36:13.336787 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:36:13.336794 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:36:13.336801 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 00:36:13.336810 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 00:36:13.336817 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:36:13.336824 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:36:13.336831 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:36:13.336839 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:36:13.336846 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:36:13.336853 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:36:13.336860 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:36:13.336867 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:36:13.336876 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:36:13.336883 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:36:13.336890 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:36:13.336897 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:36:13.336905 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:36:13.336912 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:36:13.336919 kernel: ACPI: Interpreter enabled
Apr 30 00:36:13.336926 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:36:13.336933 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 00:36:13.336942 kernel: printk: console [ttyAMA0] enabled
Apr 30 00:36:13.336949 kernel: printk: bootconsole [pl11] disabled
Apr 30 00:36:13.336957 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Apr 30 00:36:13.336964 kernel: iommu: Default domain type: Translated
Apr 30 00:36:13.336971 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:36:13.336978 kernel: efivars: Registered efivars operations
Apr 30 00:36:13.336985 kernel: vgaarb: loaded
Apr 30 00:36:13.336992 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:36:13.336999 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:36:13.337007 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:36:13.337015 kernel: pnp: PnP ACPI init
Apr 30 00:36:13.337022 kernel: pnp: PnP ACPI: found 0 devices
Apr 30 00:36:13.337029 kernel: NET: Registered PF_INET protocol family
Apr 30 00:36:13.337036 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:36:13.337043 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:36:13.337051 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:36:13.337058 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:36:13.337065 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:36:13.337074 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:36:13.337081 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:36:13.337088 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:36:13.337095 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:36:13.337102 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:36:13.337110 kernel: kvm [1]: HYP mode not available
Apr 30 00:36:13.337117 kernel: Initialise system trusted keyrings
Apr 30 00:36:13.337124 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:36:13.337131 kernel: Key type asymmetric registered
Apr 30 00:36:13.337140 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:36:13.337147 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:36:13.337154 kernel: io scheduler mq-deadline registered
Apr 30 00:36:13.337161 kernel: io scheduler kyber registered
Apr 30 00:36:13.337178 kernel: io scheduler bfq registered
Apr 30 00:36:13.337187 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:36:13.337195 kernel: thunder_xcv, ver 1.0
Apr 30 00:36:13.337202 kernel: thunder_bgx, ver 1.0
Apr 30 00:36:13.337209 kernel: nicpf, ver 1.0
Apr 30 00:36:13.337216 kernel: nicvf, ver 1.0
Apr 30 00:36:13.337354 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:36:13.337426 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:36:12 UTC (1745973372)
Apr 30 00:36:13.337436 kernel: efifb: probing for efifb
Apr 30 00:36:13.337444 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 30 00:36:13.337451 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 30 00:36:13.337458 kernel: efifb: scrolling: redraw
Apr 30 00:36:13.337465 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 00:36:13.337474 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 00:36:13.337482 kernel: fb0: EFI VGA frame buffer device
Apr 30 00:36:13.337489 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 30 00:36:13.337496 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:36:13.337503 kernel: No ACPI PMU IRQ for CPU0
Apr 30 00:36:13.337510 kernel: No ACPI PMU IRQ for CPU1
Apr 30 00:36:13.337518 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Apr 30 00:36:13.337525 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:36:13.337532 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:36:13.337541 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:36:13.337548 kernel: Segment Routing with IPv6
Apr 30 00:36:13.337555 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:36:13.337563 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:36:13.337570 kernel: Key type dns_resolver registered
Apr 30 00:36:13.337577 kernel: registered taskstats version 1
Apr 30 00:36:13.337584 kernel: Loading compiled-in X.509 certificates
Apr 30 00:36:13.337592 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 00:36:13.337599 kernel: Key type .fscrypt registered
Apr 30 00:36:13.337607 kernel: Key type fscrypt-provisioning registered
Apr 30 00:36:13.337615 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:36:13.337622 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:36:13.337629 kernel: ima: No architecture policies found
Apr 30 00:36:13.337636 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:36:13.337644 kernel: clk: Disabling unused clocks
Apr 30 00:36:13.337651 kernel: Freeing unused kernel memory: 39424K
Apr 30 00:36:13.337658 kernel: Run /init as init process
Apr 30 00:36:13.337665 kernel: with arguments:
Apr 30 00:36:13.337674 kernel: /init
Apr 30 00:36:13.337681 kernel: with environment:
Apr 30 00:36:13.337688 kernel: HOME=/
Apr 30 00:36:13.337695 kernel: TERM=linux
Apr 30 00:36:13.337702 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:36:13.337711 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:36:13.337721 systemd[1]: Detected virtualization microsoft.
Apr 30 00:36:13.337728 systemd[1]: Detected architecture arm64.
Apr 30 00:36:13.337737 systemd[1]: Running in initrd.
Apr 30 00:36:13.337745 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:36:13.337752 systemd[1]: Hostname set to .
Apr 30 00:36:13.337760 systemd[1]: Initializing machine ID from random generator.
Apr 30 00:36:13.337768 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:36:13.337776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:36:13.337783 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:36:13.337792 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:36:13.337801 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:36:13.337809 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:36:13.337817 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:36:13.337826 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:36:13.337834 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:36:13.337842 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:36:13.337850 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:36:13.337859 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:36:13.337867 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:36:13.337875 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:36:13.337882 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:36:13.337890 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:36:13.337898 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:36:13.337906 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:36:13.337914 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:36:13.337923 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:36:13.337931 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:36:13.337939 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:36:13.337946 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:36:13.337954 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:36:13.337962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:36:13.337970 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:36:13.337977 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:36:13.337985 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:36:13.337994 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:36:13.338018 systemd-journald[217]: Collecting audit messages is disabled.
Apr 30 00:36:13.338037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:36:13.338045 systemd-journald[217]: Journal started
Apr 30 00:36:13.338065 systemd-journald[217]: Runtime Journal (/run/log/journal/a3fbdc892af64d7cb292c414e5661a42) is 8.0M, max 78.5M, 70.5M free.
Apr 30 00:36:13.349890 systemd-modules-load[218]: Inserted module 'overlay'
Apr 30 00:36:13.365993 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:36:13.370086 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:36:13.399275 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:36:13.399299 kernel: Bridge firewalling registered
Apr 30 00:36:13.384457 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:36:13.397945 systemd-modules-load[218]: Inserted module 'br_netfilter'
Apr 30 00:36:13.406440 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:36:13.417166 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:36:13.428464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:13.453556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:36:13.462313 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:36:13.482555 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:36:13.500322 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:36:13.514258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:36:13.528599 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:36:13.534836 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:36:13.549081 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:36:13.573518 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:36:13.587058 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:36:13.595823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:36:13.618257 dracut-cmdline[252]: dracut-dracut-053
Apr 30 00:36:13.623476 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:36:13.664918 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:36:13.670136 systemd-resolved[256]: Positive Trust Anchors:
Apr 30 00:36:13.670145 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:36:13.670196 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:36:13.672316 systemd-resolved[256]: Defaulting to hostname 'linux'.
Apr 30 00:36:13.674744 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:36:13.687263 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:36:13.810194 kernel: SCSI subsystem initialized
Apr 30 00:36:13.817184 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:36:13.829196 kernel: iscsi: registered transport (tcp)
Apr 30 00:36:13.846735 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:36:13.846770 kernel: QLogic iSCSI HBA Driver
Apr 30 00:36:13.884997 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:36:13.899441 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:36:13.928152 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:36:13.928198 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:36:13.934606 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:36:13.982195 kernel: raid6: neonx8 gen() 15763 MB/s
Apr 30 00:36:14.002181 kernel: raid6: neonx4 gen() 15665 MB/s
Apr 30 00:36:14.022182 kernel: raid6: neonx2 gen() 13243 MB/s
Apr 30 00:36:14.043184 kernel: raid6: neonx1 gen() 10480 MB/s
Apr 30 00:36:14.063177 kernel: raid6: int64x8 gen() 6960 MB/s
Apr 30 00:36:14.083177 kernel: raid6: int64x4 gen() 7349 MB/s
Apr 30 00:36:14.104178 kernel: raid6: int64x2 gen() 6133 MB/s
Apr 30 00:36:14.127900 kernel: raid6: int64x1 gen() 5061 MB/s
Apr 30 00:36:14.127922 kernel: raid6: using algorithm neonx8 gen() 15763 MB/s
Apr 30 00:36:14.151330 kernel: raid6: .... xor() 11937 MB/s, rmw enabled
Apr 30 00:36:14.151358 kernel: raid6: using neon recovery algorithm
Apr 30 00:36:14.163621 kernel: xor: measuring software checksum speed
Apr 30 00:36:14.163640 kernel: 8regs : 19797 MB/sec
Apr 30 00:36:14.167164 kernel: 32regs : 19622 MB/sec
Apr 30 00:36:14.170610 kernel: arm64_neon : 27070 MB/sec
Apr 30 00:36:14.174865 kernel: xor: using function: arm64_neon (27070 MB/sec)
Apr 30 00:36:14.225188 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:36:14.234425 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:36:14.250288 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:36:14.273268 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Apr 30 00:36:14.278493 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:36:14.298298 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:36:14.313032 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Apr 30 00:36:14.338803 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:36:14.361457 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:36:14.401271 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:36:14.427394 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:36:14.448790 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:36:14.462329 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:36:14.476917 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:36:14.490608 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:36:14.508189 kernel: hv_vmbus: Vmbus version:5.3
Apr 30 00:36:14.511315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:36:14.547499 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 30 00:36:14.547526 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 30 00:36:14.547536 kernel: hv_vmbus: registering driver hv_netvsc
Apr 30 00:36:14.547545 kernel: hv_vmbus: registering driver hv_storvsc
Apr 30 00:36:14.548006 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:36:14.616265 kernel: scsi host0: storvsc_host_t
Apr 30 00:36:14.616427 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 30 00:36:14.616535 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 30 00:36:14.616546 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Apr 30 00:36:14.616643 kernel: scsi host1: storvsc_host_t
Apr 30 00:36:14.616732 kernel: hv_vmbus: registering driver hid_hyperv
Apr 30 00:36:14.616742 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 30 00:36:14.616751 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 30 00:36:14.610999 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:36:14.630587 kernel: hv_netvsc 002248be-d99d-0022-48be-d99d002248be eth0: VF slot 1 added
Apr 30 00:36:14.645229 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 30 00:36:14.611188 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:36:14.630651 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:36:14.680731 kernel: PTP clock support registered
Apr 30 00:36:14.680757 kernel: hv_vmbus: registering driver hv_pci
Apr 30 00:36:14.644626 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:36:14.705646 kernel: hv_pci 091b6e65-f14b-4356-8e2c-c9045a5665c1: PCI VMBus probing: Using version 0x10004
Apr 30 00:36:14.607590 kernel: hv_pci 091b6e65-f14b-4356-8e2c-c9045a5665c1: PCI host bridge to bus f14b:00
Apr 30 00:36:14.613664 kernel: pci_bus f14b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Apr 30 00:36:14.613792 kernel: pci_bus f14b:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 30 00:36:14.613870 kernel: pci f14b:00:02.0: [15b3:1018] type 00 class 0x020000
Apr 30 00:36:14.613968 kernel: hv_utils: Registering HyperV Utility Driver
Apr 30 00:36:14.613976 kernel: pci f14b:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 30 00:36:14.614061 kernel: hv_vmbus: registering driver hv_utils
Apr 30 00:36:14.614072 kernel: pci f14b:00:02.0: enabling Extended Tags
Apr 30 00:36:14.615334 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 30 00:36:14.615456 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 00:36:14.615465 kernel: hv_utils: Heartbeat IC version 3.0
Apr 30 00:36:14.615473 kernel: hv_utils: Shutdown IC version 3.2
Apr 30 00:36:14.615481 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 30 00:36:14.615567 kernel: hv_utils: TimeSync IC version 4.0
Apr 30 00:36:14.615579 kernel: pci f14b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f14b:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Apr 30 00:36:14.615666 kernel: pci_bus f14b:00: busn_res: [bus 00-ff] end is updated to 00
Apr 30 00:36:14.615744 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 30 00:36:14.639552 kernel: pci f14b:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 30 00:36:14.642284 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 30 00:36:14.642390 systemd-journald[217]: Time jumped backwards, rotating.
Apr 30 00:36:14.642433 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 00:36:14.642521 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 30 00:36:14.642606 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 30 00:36:14.642688 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:36:14.642697 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 00:36:14.645033 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:36:14.657914 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:36:14.715781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:36:14.740785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:36:14.773904 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:36:14.568009 systemd-resolved[256]: Clock change detected. Flushing caches. Apr 30 00:36:14.623623 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:36:14.700843 kernel: mlx5_core f14b:00:02.0: enabling device (0000 -> 0002) Apr 30 00:36:14.919387 kernel: mlx5_core f14b:00:02.0: firmware version: 16.30.1284 Apr 30 00:36:14.919555 kernel: hv_netvsc 002248be-d99d-0022-48be-d99d002248be eth0: VF registering: eth1 Apr 30 00:36:14.919664 kernel: mlx5_core f14b:00:02.0 eth1: joined to eth0 Apr 30 00:36:14.919770 kernel: mlx5_core f14b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Apr 30 00:36:14.927173 kernel: mlx5_core f14b:00:02.0 enP61771s1: renamed from eth1 Apr 30 00:36:15.147069 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Apr 30 00:36:15.270179 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (497) Apr 30 00:36:15.283406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 00:36:15.303124 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 30 00:36:15.335181 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (491) Apr 30 00:36:15.347928 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 30 00:36:15.354865 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 30 00:36:15.392336 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 00:36:16.423094 disk-uuid[597]: The operation has completed successfully. Apr 30 00:36:16.428256 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:36:16.481671 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 00:36:16.483175 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 00:36:16.511276 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 00:36:16.524148 sh[713]: Success Apr 30 00:36:16.560238 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 00:36:16.806248 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 00:36:16.822271 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 00:36:16.829479 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 00:36:16.860099 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4 Apr 30 00:36:16.860180 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:36:16.867077 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 00:36:16.872176 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 00:36:16.876549 kernel: BTRFS info (device dm-0): using free space tree Apr 30 00:36:17.179735 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 00:36:17.185337 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 00:36:17.206429 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 00:36:17.218480 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 00:36:17.242204 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:36:17.242239 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:36:17.253182 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:36:17.283104 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:36:17.290949 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 00:36:17.304719 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:36:17.315198 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 00:36:17.332339 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 00:36:17.346811 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:36:17.369373 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 30 00:36:17.395585 systemd-networkd[897]: lo: Link UP Apr 30 00:36:17.395596 systemd-networkd[897]: lo: Gained carrier Apr 30 00:36:17.397171 systemd-networkd[897]: Enumeration completed Apr 30 00:36:17.397765 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:36:17.397768 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:36:17.399377 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:36:17.409105 systemd[1]: Reached target network.target - Network. Apr 30 00:36:17.468171 kernel: mlx5_core f14b:00:02.0 enP61771s1: Link up Apr 30 00:36:17.509171 kernel: hv_netvsc 002248be-d99d-0022-48be-d99d002248be eth0: Data path switched to VF: enP61771s1 Apr 30 00:36:17.509598 systemd-networkd[897]: enP61771s1: Link UP Apr 30 00:36:17.509684 systemd-networkd[897]: eth0: Link UP Apr 30 00:36:17.509801 systemd-networkd[897]: eth0: Gained carrier Apr 30 00:36:17.509809 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:36:17.534447 systemd-networkd[897]: enP61771s1: Gained carrier Apr 30 00:36:17.547201 systemd-networkd[897]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 30 00:36:18.343602 ignition[874]: Ignition 2.19.0 Apr 30 00:36:18.343615 ignition[874]: Stage: fetch-offline Apr 30 00:36:18.345633 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:36:18.343648 ignition[874]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:18.366288 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 00:36:18.343656 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:18.343757 ignition[874]: parsed url from cmdline: "" Apr 30 00:36:18.343760 ignition[874]: no config URL provided Apr 30 00:36:18.343764 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:36:18.343771 ignition[874]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:36:18.343776 ignition[874]: failed to fetch config: resource requires networking Apr 30 00:36:18.343944 ignition[874]: Ignition finished successfully Apr 30 00:36:18.390440 ignition[907]: Ignition 2.19.0 Apr 30 00:36:18.390447 ignition[907]: Stage: fetch Apr 30 00:36:18.390594 ignition[907]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:18.390603 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:18.390702 ignition[907]: parsed url from cmdline: "" Apr 30 00:36:18.390708 ignition[907]: no config URL provided Apr 30 00:36:18.390712 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:36:18.390719 ignition[907]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:36:18.390738 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 30 00:36:18.472038 ignition[907]: GET result: OK Apr 30 00:36:18.472126 ignition[907]: config has been read from IMDS userdata Apr 30 00:36:18.472204 ignition[907]: parsing config with SHA512: 4ee0bb0e991b78c2ca35711a882532924fdaacbd96bcf72a1a8c962a82259e5a818a903af47822f3c8b54447c0c55c4346311a8302c9f2b15a6857f10e4ac02a Apr 30 00:36:18.475707 unknown[907]: fetched base config from "system" Apr 30 00:36:18.476076 ignition[907]: fetch: fetch complete Apr 30 00:36:18.475714 unknown[907]: fetched base config from "system" Apr 30 00:36:18.476081 ignition[907]: fetch: fetch passed Apr 30 00:36:18.475719 unknown[907]: fetched user config from "azure" Apr 30 00:36:18.476118 ignition[907]: Ignition finished 
successfully Apr 30 00:36:18.485058 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 00:36:18.514315 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 00:36:18.530022 ignition[913]: Ignition 2.19.0 Apr 30 00:36:18.530032 ignition[913]: Stage: kargs Apr 30 00:36:18.536424 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 00:36:18.530255 ignition[913]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:18.530265 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:18.531132 ignition[913]: kargs: kargs passed Apr 30 00:36:18.553426 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 00:36:18.531191 ignition[913]: Ignition finished successfully Apr 30 00:36:18.575689 ignition[920]: Ignition 2.19.0 Apr 30 00:36:18.579433 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 00:36:18.575697 ignition[920]: Stage: disks Apr 30 00:36:18.586770 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 00:36:18.575899 ignition[920]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:18.595878 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:36:18.575908 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:18.608021 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:36:18.577489 ignition[920]: disks: disks passed Apr 30 00:36:18.616381 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:36:18.577543 ignition[920]: Ignition finished successfully Apr 30 00:36:18.627842 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:36:18.655388 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 30 00:36:18.678276 systemd-networkd[897]: eth0: Gained IPv6LL Apr 30 00:36:18.783095 systemd-fsck[928]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 30 00:36:18.794226 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 00:36:18.815343 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 00:36:18.867394 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none. Apr 30 00:36:18.867825 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 00:36:18.872883 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 00:36:18.922259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:36:18.932690 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 00:36:18.942324 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 00:36:18.959023 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 00:36:19.003100 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (939) Apr 30 00:36:19.003124 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:36:19.003133 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:36:19.003143 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:36:18.959061 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:36:18.980063 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 00:36:19.020410 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 00:36:19.037523 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:36:19.032890 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:36:19.380283 systemd-networkd[897]: enP61771s1: Gained IPv6LL Apr 30 00:36:19.577297 coreos-metadata[941]: Apr 30 00:36:19.577 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 00:36:19.585842 coreos-metadata[941]: Apr 30 00:36:19.581 INFO Fetch successful Apr 30 00:36:19.585842 coreos-metadata[941]: Apr 30 00:36:19.581 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 30 00:36:19.601986 coreos-metadata[941]: Apr 30 00:36:19.592 INFO Fetch successful Apr 30 00:36:19.610555 coreos-metadata[941]: Apr 30 00:36:19.610 INFO wrote hostname ci-4081.3.3-a-8ba35441fd to /sysroot/etc/hostname Apr 30 00:36:19.619714 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:36:19.818141 initrd-setup-root[968]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 00:36:19.884332 initrd-setup-root[975]: cut: /sysroot/etc/group: No such file or directory Apr 30 00:36:19.907138 initrd-setup-root[982]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 00:36:19.916763 initrd-setup-root[989]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 00:36:20.828032 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 00:36:20.845405 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 00:36:20.861140 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 00:36:20.877388 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:36:20.872115 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 00:36:20.890568 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 30 00:36:20.913311 ignition[1058]: INFO : Ignition 2.19.0 Apr 30 00:36:20.913311 ignition[1058]: INFO : Stage: mount Apr 30 00:36:20.922319 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:20.922319 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:20.922319 ignition[1058]: INFO : mount: mount passed Apr 30 00:36:20.922319 ignition[1058]: INFO : Ignition finished successfully Apr 30 00:36:20.918922 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 00:36:20.944045 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 00:36:20.953403 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:36:20.997713 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1068) Apr 30 00:36:20.997772 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:36:21.003787 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:36:21.008146 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:36:21.015185 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:36:21.016137 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:36:21.043828 ignition[1085]: INFO : Ignition 2.19.0 Apr 30 00:36:21.043828 ignition[1085]: INFO : Stage: files Apr 30 00:36:21.043828 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:21.043828 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:21.043828 ignition[1085]: DEBUG : files: compiled without relabeling support, skipping Apr 30 00:36:21.086150 ignition[1085]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 00:36:21.086150 ignition[1085]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 00:36:21.226673 ignition[1085]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 00:36:21.234358 ignition[1085]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 00:36:21.234358 ignition[1085]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 00:36:21.227118 unknown[1085]: wrote ssh authorized keys file for user: core Apr 30 00:36:21.274473 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:36:21.285496 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 30 00:36:21.354726 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 00:36:21.482969 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Apr 30 00:36:21.962894 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 00:36:22.180527 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:36:22.180527 ignition[1085]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 00:36:22.235226 ignition[1085]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: files passed Apr 30 00:36:22.247123 ignition[1085]: INFO : Ignition finished successfully Apr 30 00:36:22.247623 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 00:36:22.285585 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 00:36:22.300346 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 30 00:36:22.369095 initrd-setup-root-after-ignition[1113]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:36:22.369095 initrd-setup-root-after-ignition[1113]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:36:22.325915 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 00:36:22.397406 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:36:22.326001 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 00:36:22.354538 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:36:22.362430 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 00:36:22.398369 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 00:36:22.434486 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 00:36:22.434622 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 00:36:22.447633 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 00:36:22.458001 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:36:22.471042 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:36:22.490396 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:36:22.511035 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:36:22.526424 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:36:22.544428 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:36:22.551410 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 30 00:36:22.564247 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 00:36:22.575729 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 00:36:22.575894 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:36:22.592174 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 00:36:22.598581 systemd[1]: Stopped target basic.target - Basic System. Apr 30 00:36:22.610360 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 00:36:22.621728 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:36:22.632772 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 00:36:22.644638 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 00:36:22.656425 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:36:22.669735 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 00:36:22.680739 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 00:36:22.693113 systemd[1]: Stopped target swap.target - Swaps. Apr 30 00:36:22.703021 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 00:36:22.703207 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:36:22.718432 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:36:22.729563 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:36:22.741785 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 00:36:22.747259 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:36:22.754498 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 00:36:22.754668 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 30 00:36:22.771834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 00:36:22.771998 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:36:22.786623 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 00:36:22.786780 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 00:36:22.797402 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 00:36:22.797555 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:36:22.835269 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 00:36:22.864096 ignition[1137]: INFO : Ignition 2.19.0 Apr 30 00:36:22.864096 ignition[1137]: INFO : Stage: umount Apr 30 00:36:22.864096 ignition[1137]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:22.864096 ignition[1137]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:22.864096 ignition[1137]: INFO : umount: umount passed Apr 30 00:36:22.864096 ignition[1137]: INFO : Ignition finished successfully Apr 30 00:36:22.859120 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 00:36:22.871907 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 00:36:22.872061 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:36:22.884190 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 00:36:22.884309 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:36:22.898627 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:36:22.898730 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 00:36:22.918504 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:36:22.919031 systemd[1]: ignition-disks.service: Deactivated successfully. 
Apr 30 00:36:22.919128 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:36:22.926849 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:36:22.926901 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:36:22.937619 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 00:36:22.937670 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 00:36:22.948788 systemd[1]: Stopped target network.target - Network. Apr 30 00:36:22.960111 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 00:36:22.960184 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:36:22.975261 systemd[1]: Stopped target paths.target - Path Units. Apr 30 00:36:22.987514 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:36:23.002940 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:36:23.009821 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:36:23.020470 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 00:36:23.025725 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:36:23.025776 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:36:23.037087 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:36:23.037132 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:36:23.047879 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:36:23.047927 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:36:23.058851 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:36:23.058889 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:36:23.071776 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Apr 30 00:36:23.082856 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:36:23.087200 systemd-networkd[897]: eth0: DHCPv6 lease lost Apr 30 00:36:23.100733 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:36:23.100825 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:36:23.107327 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 00:36:23.107395 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:36:23.329361 kernel: hv_netvsc 002248be-d99d-0022-48be-d99d002248be eth0: Data path switched from VF: enP61771s1 Apr 30 00:36:23.123886 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:36:23.123937 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:36:23.150341 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:36:23.160802 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:36:23.160890 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:36:23.173844 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:36:23.193867 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:36:23.193969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:36:23.212816 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:36:23.215067 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:36:23.238345 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:36:23.238426 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:36:23.249970 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Apr 30 00:36:23.250018 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:36:23.263052 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 00:36:23.263110 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:36:23.284782 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:36:23.284848 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:36:23.301115 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:36:23.301208 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:36:23.342379 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:36:23.356207 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:36:23.356272 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:36:23.368292 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:36:23.368343 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:36:23.382080 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:36:23.382124 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:36:23.396954 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 00:36:23.396995 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:36:23.410066 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:36:23.410110 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:36:23.422772 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:36:23.422814 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 30 00:36:23.430737 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:36:23.430784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:36:23.443539 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:36:23.443646 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:36:23.457852 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:36:23.457938 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:36:23.528721 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:36:23.528864 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:36:23.537746 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:36:23.548045 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:36:23.548109 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:36:23.577372 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:36:23.597648 systemd[1]: Switching root. 
Apr 30 00:36:23.766780 systemd-journald[217]: Journal stopped
Apr 30 00:36:14.361457 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:36:14.401271 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:36:14.427394 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 00:36:14.448790 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 00:36:14.462329 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:36:14.476917 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:36:14.490608 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:36:14.508189 kernel: hv_vmbus: Vmbus version:5.3 Apr 30 00:36:14.511315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 00:36:14.547499 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 00:36:14.547526 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 30 00:36:14.547536 kernel: hv_vmbus: registering driver hv_netvsc Apr 30 00:36:14.547545 kernel: hv_vmbus: registering driver hv_storvsc Apr 30 00:36:14.548006 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:36:14.616265 kernel: scsi host0: storvsc_host_t Apr 30 00:36:14.616427 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 30 00:36:14.616535 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 00:36:14.616546 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Apr 30 00:36:14.616643 kernel: scsi host1: storvsc_host_t Apr 30 00:36:14.616732 kernel: hv_vmbus: registering driver hid_hyperv Apr 30 00:36:14.616742 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 30 00:36:14.616751 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 30 00:36:14.610999 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:36:14.630587 kernel: hv_netvsc 002248be-d99d-0022-48be-d99d002248be eth0: VF slot 1 added Apr 30 00:36:14.645229 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 30 00:36:14.611188 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:36:14.630651 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:36:14.680731 kernel: PTP clock support registered Apr 30 00:36:14.680757 kernel: hv_vmbus: registering driver hv_pci Apr 30 00:36:14.644626 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 30 00:36:14.705646 kernel: hv_pci 091b6e65-f14b-4356-8e2c-c9045a5665c1: PCI VMBus probing: Using version 0x10004 Apr 30 00:36:14.607590 kernel: hv_pci 091b6e65-f14b-4356-8e2c-c9045a5665c1: PCI host bridge to bus f14b:00 Apr 30 00:36:14.613664 kernel: pci_bus f14b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Apr 30 00:36:14.613792 kernel: pci_bus f14b:00: No busn resource found for root bus, will use [bus 00-ff] Apr 30 00:36:14.613870 kernel: pci f14b:00:02.0: [15b3:1018] type 00 class 0x020000 Apr 30 00:36:14.613968 kernel: hv_utils: Registering HyperV Utility Driver Apr 30 00:36:14.613976 kernel: pci f14b:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 30 00:36:14.614061 kernel: hv_vmbus: registering driver hv_utils Apr 30 00:36:14.614072 kernel: pci f14b:00:02.0: enabling Extended Tags Apr 30 00:36:14.615334 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 30 00:36:14.615456 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 00:36:14.615465 kernel: hv_utils: Heartbeat IC version 3.0 Apr 30 00:36:14.615473 kernel: hv_utils: Shutdown IC version 3.2 Apr 30 00:36:14.615481 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 30 00:36:14.615567 kernel: hv_utils: TimeSync IC version 4.0 Apr 30 00:36:14.615579 kernel: pci f14b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f14b:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 30 00:36:14.615666 kernel: pci_bus f14b:00: busn_res: [bus 00-ff] end is updated to 00 Apr 30 00:36:14.615744 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 30 00:36:14.639552 kernel: pci f14b:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 30 00:36:14.642284 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 00:36:14.642390 systemd-journald[217]: Time jumped backwards, rotating. 
Apr 30 00:36:14.642433 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 00:36:14.642521 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 30 00:36:14.642606 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 30 00:36:14.642688 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:36:14.642697 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 00:36:14.645033 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:36:14.657914 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:36:14.715781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:36:14.740785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:36:14.773904 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:36:14.568009 systemd-resolved[256]: Clock change detected. Flushing caches. Apr 30 00:36:14.623623 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:36:14.700843 kernel: mlx5_core f14b:00:02.0: enabling device (0000 -> 0002) Apr 30 00:36:14.919387 kernel: mlx5_core f14b:00:02.0: firmware version: 16.30.1284 Apr 30 00:36:14.919555 kernel: hv_netvsc 002248be-d99d-0022-48be-d99d002248be eth0: VF registering: eth1 Apr 30 00:36:14.919664 kernel: mlx5_core f14b:00:02.0 eth1: joined to eth0 Apr 30 00:36:14.919770 kernel: mlx5_core f14b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Apr 30 00:36:14.927173 kernel: mlx5_core f14b:00:02.0 enP61771s1: renamed from eth1 Apr 30 00:36:15.147069 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Apr 30 00:36:15.270179 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (497) Apr 30 00:36:15.283406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 00:36:15.303124 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 30 00:36:15.335181 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (491) Apr 30 00:36:15.347928 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 30 00:36:15.354865 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 30 00:36:15.392336 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 00:36:16.423094 disk-uuid[597]: The operation has completed successfully. Apr 30 00:36:16.428256 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 00:36:16.481671 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 00:36:16.483175 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 00:36:16.511276 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 00:36:16.524148 sh[713]: Success Apr 30 00:36:16.560238 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 00:36:16.806248 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 00:36:16.822271 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 00:36:16.829479 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 00:36:16.860099 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4 Apr 30 00:36:16.860180 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:36:16.867077 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 00:36:16.872176 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 00:36:16.876549 kernel: BTRFS info (device dm-0): using free space tree Apr 30 00:36:17.179735 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 00:36:17.185337 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 00:36:17.206429 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 00:36:17.218480 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 00:36:17.242204 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:36:17.242239 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:36:17.253182 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:36:17.283104 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:36:17.290949 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 00:36:17.304719 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:36:17.315198 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 00:36:17.332339 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 00:36:17.346811 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:36:17.369373 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 30 00:36:17.395585 systemd-networkd[897]: lo: Link UP Apr 30 00:36:17.395596 systemd-networkd[897]: lo: Gained carrier Apr 30 00:36:17.397171 systemd-networkd[897]: Enumeration completed Apr 30 00:36:17.397765 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:36:17.397768 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:36:17.399377 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:36:17.409105 systemd[1]: Reached target network.target - Network. Apr 30 00:36:17.468171 kernel: mlx5_core f14b:00:02.0 enP61771s1: Link up Apr 30 00:36:17.509171 kernel: hv_netvsc 002248be-d99d-0022-48be-d99d002248be eth0: Data path switched to VF: enP61771s1 Apr 30 00:36:17.509598 systemd-networkd[897]: enP61771s1: Link UP Apr 30 00:36:17.509684 systemd-networkd[897]: eth0: Link UP Apr 30 00:36:17.509801 systemd-networkd[897]: eth0: Gained carrier Apr 30 00:36:17.509809 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:36:17.534447 systemd-networkd[897]: enP61771s1: Gained carrier Apr 30 00:36:17.547201 systemd-networkd[897]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 30 00:36:18.343602 ignition[874]: Ignition 2.19.0 Apr 30 00:36:18.343615 ignition[874]: Stage: fetch-offline Apr 30 00:36:18.345633 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:36:18.343648 ignition[874]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:18.366288 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 00:36:18.343656 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:18.343757 ignition[874]: parsed url from cmdline: "" Apr 30 00:36:18.343760 ignition[874]: no config URL provided Apr 30 00:36:18.343764 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:36:18.343771 ignition[874]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:36:18.343776 ignition[874]: failed to fetch config: resource requires networking Apr 30 00:36:18.343944 ignition[874]: Ignition finished successfully Apr 30 00:36:18.390440 ignition[907]: Ignition 2.19.0 Apr 30 00:36:18.390447 ignition[907]: Stage: fetch Apr 30 00:36:18.390594 ignition[907]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:18.390603 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:18.390702 ignition[907]: parsed url from cmdline: "" Apr 30 00:36:18.390708 ignition[907]: no config URL provided Apr 30 00:36:18.390712 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:36:18.390719 ignition[907]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:36:18.390738 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 30 00:36:18.472038 ignition[907]: GET result: OK Apr 30 00:36:18.472126 ignition[907]: config has been read from IMDS userdata Apr 30 00:36:18.472204 ignition[907]: parsing config with SHA512: 4ee0bb0e991b78c2ca35711a882532924fdaacbd96bcf72a1a8c962a82259e5a818a903af47822f3c8b54447c0c55c4346311a8302c9f2b15a6857f10e4ac02a Apr 30 00:36:18.475707 unknown[907]: fetched base config from "system" Apr 30 00:36:18.476076 ignition[907]: fetch: fetch complete Apr 30 00:36:18.475714 unknown[907]: fetched base config from "system" Apr 30 00:36:18.476081 ignition[907]: fetch: fetch passed Apr 30 00:36:18.475719 unknown[907]: fetched user config from "azure" Apr 30 00:36:18.476118 ignition[907]: Ignition finished 
successfully Apr 30 00:36:18.485058 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 00:36:18.514315 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 00:36:18.530022 ignition[913]: Ignition 2.19.0 Apr 30 00:36:18.530032 ignition[913]: Stage: kargs Apr 30 00:36:18.536424 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 00:36:18.530255 ignition[913]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:18.530265 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:18.531132 ignition[913]: kargs: kargs passed Apr 30 00:36:18.553426 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 00:36:18.531191 ignition[913]: Ignition finished successfully Apr 30 00:36:18.575689 ignition[920]: Ignition 2.19.0 Apr 30 00:36:18.579433 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 00:36:18.575697 ignition[920]: Stage: disks Apr 30 00:36:18.586770 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 00:36:18.575899 ignition[920]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:18.595878 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:36:18.575908 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:18.608021 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:36:18.577489 ignition[920]: disks: disks passed Apr 30 00:36:18.616381 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:36:18.577543 ignition[920]: Ignition finished successfully Apr 30 00:36:18.627842 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:36:18.655388 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 30 00:36:18.678276 systemd-networkd[897]: eth0: Gained IPv6LL Apr 30 00:36:18.783095 systemd-fsck[928]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 30 00:36:18.794226 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 00:36:18.815343 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 00:36:18.867394 kernel: EXT4-fs (sda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none. Apr 30 00:36:18.867825 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 00:36:18.872883 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 00:36:18.922259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:36:18.932690 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 00:36:18.942324 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 00:36:18.959023 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 00:36:19.003100 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (939) Apr 30 00:36:19.003124 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:36:19.003133 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:36:19.003143 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:36:18.959061 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:36:18.980063 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 00:36:19.020410 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 00:36:19.037523 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:36:19.032890 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:36:19.380283 systemd-networkd[897]: enP61771s1: Gained IPv6LL Apr 30 00:36:19.577297 coreos-metadata[941]: Apr 30 00:36:19.577 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 00:36:19.585842 coreos-metadata[941]: Apr 30 00:36:19.581 INFO Fetch successful Apr 30 00:36:19.585842 coreos-metadata[941]: Apr 30 00:36:19.581 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 30 00:36:19.601986 coreos-metadata[941]: Apr 30 00:36:19.592 INFO Fetch successful Apr 30 00:36:19.610555 coreos-metadata[941]: Apr 30 00:36:19.610 INFO wrote hostname ci-4081.3.3-a-8ba35441fd to /sysroot/etc/hostname Apr 30 00:36:19.619714 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:36:19.818141 initrd-setup-root[968]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 00:36:19.884332 initrd-setup-root[975]: cut: /sysroot/etc/group: No such file or directory Apr 30 00:36:19.907138 initrd-setup-root[982]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 00:36:19.916763 initrd-setup-root[989]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 00:36:20.828032 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 00:36:20.845405 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 00:36:20.861140 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 00:36:20.877388 kernel: BTRFS info (device sda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:36:20.872115 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 00:36:20.890568 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 30 00:36:20.913311 ignition[1058]: INFO : Ignition 2.19.0 Apr 30 00:36:20.913311 ignition[1058]: INFO : Stage: mount Apr 30 00:36:20.922319 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:20.922319 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:20.922319 ignition[1058]: INFO : mount: mount passed Apr 30 00:36:20.922319 ignition[1058]: INFO : Ignition finished successfully Apr 30 00:36:20.918922 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 00:36:20.944045 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 00:36:20.953403 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:36:20.997713 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1068) Apr 30 00:36:20.997772 kernel: BTRFS info (device sda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:36:21.003787 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:36:21.008146 kernel: BTRFS info (device sda6): using free space tree Apr 30 00:36:21.015185 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 00:36:21.016137 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:36:21.043828 ignition[1085]: INFO : Ignition 2.19.0 Apr 30 00:36:21.043828 ignition[1085]: INFO : Stage: files Apr 30 00:36:21.043828 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:21.043828 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:21.043828 ignition[1085]: DEBUG : files: compiled without relabeling support, skipping Apr 30 00:36:21.086150 ignition[1085]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 00:36:21.086150 ignition[1085]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 00:36:21.226673 ignition[1085]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 00:36:21.234358 ignition[1085]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 00:36:21.234358 ignition[1085]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 00:36:21.227118 unknown[1085]: wrote ssh authorized keys file for user: core Apr 30 00:36:21.274473 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:36:21.285496 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 30 00:36:21.354726 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 00:36:21.482969 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:36:21.494806 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Apr 30 00:36:21.962894 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 00:36:22.180527 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:36:22.180527 ignition[1085]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 00:36:22.235226 ignition[1085]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:36:22.247123 ignition[1085]: INFO : files: files passed Apr 30 00:36:22.247123 ignition[1085]: INFO : Ignition finished successfully Apr 30 00:36:22.247623 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 00:36:22.285585 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 00:36:22.300346 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 30 00:36:22.369095 initrd-setup-root-after-ignition[1113]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:36:22.369095 initrd-setup-root-after-ignition[1113]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:36:22.325915 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 00:36:22.397406 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:36:22.326001 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 00:36:22.354538 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:36:22.362430 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 00:36:22.398369 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 00:36:22.434486 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 00:36:22.434622 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 00:36:22.447633 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 00:36:22.458001 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:36:22.471042 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:36:22.490396 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:36:22.511035 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:36:22.526424 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:36:22.544428 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:36:22.551410 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 30 00:36:22.564247 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 00:36:22.575729 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 00:36:22.575894 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:36:22.592174 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 00:36:22.598581 systemd[1]: Stopped target basic.target - Basic System. Apr 30 00:36:22.610360 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 00:36:22.621728 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:36:22.632772 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 00:36:22.644638 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 00:36:22.656425 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:36:22.669735 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 00:36:22.680739 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 00:36:22.693113 systemd[1]: Stopped target swap.target - Swaps. Apr 30 00:36:22.703021 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 00:36:22.703207 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:36:22.718432 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:36:22.729563 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:36:22.741785 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 00:36:22.747259 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:36:22.754498 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 00:36:22.754668 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 30 00:36:22.771834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 00:36:22.771998 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:36:22.786623 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 00:36:22.786780 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 00:36:22.797402 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 00:36:22.797555 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:36:22.835269 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 00:36:22.864096 ignition[1137]: INFO : Ignition 2.19.0 Apr 30 00:36:22.864096 ignition[1137]: INFO : Stage: umount Apr 30 00:36:22.864096 ignition[1137]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:36:22.864096 ignition[1137]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 00:36:22.864096 ignition[1137]: INFO : umount: umount passed Apr 30 00:36:22.864096 ignition[1137]: INFO : Ignition finished successfully Apr 30 00:36:22.859120 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 00:36:22.871907 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 00:36:22.872061 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:36:22.884190 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 00:36:22.884309 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:36:22.898627 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:36:22.898730 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 00:36:22.918504 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:36:22.919031 systemd[1]: ignition-disks.service: Deactivated successfully. 
Apr 30 00:36:22.919128 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:36:22.926849 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:36:22.926901 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:36:22.937619 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 00:36:22.937670 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 00:36:22.948788 systemd[1]: Stopped target network.target - Network. Apr 30 00:36:22.960111 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 00:36:22.960184 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:36:22.975261 systemd[1]: Stopped target paths.target - Path Units. Apr 30 00:36:22.987514 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:36:23.002940 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:36:23.009821 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:36:23.020470 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 00:36:23.025725 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:36:23.025776 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:36:23.037087 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:36:23.037132 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:36:23.047879 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:36:23.047927 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:36:23.058851 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:36:23.058889 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:36:23.071776 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Apr 30 00:36:23.082856 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:36:23.087200 systemd-networkd[897]: eth0: DHCPv6 lease lost Apr 30 00:36:23.100733 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:36:23.100825 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:36:23.107327 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 00:36:23.107395 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:36:23.329361 kernel: hv_netvsc 002248be-d99d-0022-48be-d99d002248be eth0: Data path switched from VF: enP61771s1 Apr 30 00:36:23.123886 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:36:23.123937 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:36:23.150341 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:36:23.160802 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:36:23.160890 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:36:23.173844 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:36:23.193867 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:36:23.193969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:36:23.212816 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:36:23.215067 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:36:23.238345 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:36:23.238426 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:36:23.249970 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Apr 30 00:36:23.250018 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:36:23.263052 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 00:36:23.263110 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:36:23.284782 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:36:23.284848 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:36:23.301115 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:36:23.301208 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:36:23.342379 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:36:23.356207 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:36:23.356272 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:36:23.368292 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:36:23.368343 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:36:23.382080 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:36:23.382124 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:36:23.396954 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 00:36:23.396995 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:36:23.410066 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:36:23.410110 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:36:23.422772 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:36:23.422814 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 30 00:36:23.430737 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:36:23.430784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:36:23.443539 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:36:23.443646 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:36:23.457852 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:36:23.457938 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:36:23.528721 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:36:23.528864 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:36:23.537746 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:36:23.548045 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:36:23.548109 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:36:23.577372 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:36:23.597648 systemd[1]: Switching root. Apr 30 00:36:23.766780 systemd-journald[217]: Journal stopped Apr 30 00:36:28.114058 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). 
Apr 30 00:36:28.114081 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 00:36:28.114092 kernel: SELinux: policy capability open_perms=1 Apr 30 00:36:28.114102 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 00:36:28.114109 kernel: SELinux: policy capability always_check_network=0 Apr 30 00:36:28.114117 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 00:36:28.114125 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 00:36:28.114135 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 00:36:28.114143 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 00:36:28.114163 kernel: audit: type=1403 audit(1745973384.864:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 00:36:28.114178 systemd[1]: Successfully loaded SELinux policy in 200.892ms. Apr 30 00:36:28.114188 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.629ms. Apr 30 00:36:28.114199 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:36:28.114208 systemd[1]: Detected virtualization microsoft. Apr 30 00:36:28.114217 systemd[1]: Detected architecture arm64. Apr 30 00:36:28.114228 systemd[1]: Detected first boot. Apr 30 00:36:28.114237 systemd[1]: Hostname set to . Apr 30 00:36:28.114246 systemd[1]: Initializing machine ID from random generator. Apr 30 00:36:28.114255 zram_generator::config[1179]: No configuration found. Apr 30 00:36:28.114264 systemd[1]: Populated /etc with preset unit settings. Apr 30 00:36:28.114273 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 00:36:28.114284 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Apr 30 00:36:28.114293 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 00:36:28.114302 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 00:36:28.114311 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 00:36:28.114321 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 00:36:28.114330 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 00:36:28.114339 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 00:36:28.114351 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 00:36:28.114361 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 00:36:28.114370 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 00:36:28.114379 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:36:28.114388 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:36:28.114397 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 00:36:28.114406 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 00:36:28.114416 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 00:36:28.114425 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:36:28.114436 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 30 00:36:28.114445 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:36:28.114454 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Apr 30 00:36:28.114465 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 00:36:28.114475 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 00:36:28.114485 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 00:36:28.114494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:36:28.114505 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:36:28.114515 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:36:28.114524 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:36:28.114533 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 00:36:28.114543 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 00:36:28.114553 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:36:28.114562 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:36:28.114574 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:36:28.114584 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 00:36:28.114593 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 00:36:28.114603 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 00:36:28.114612 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 00:36:28.114622 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 00:36:28.114633 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 00:36:28.114643 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 30 00:36:28.114653 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 00:36:28.114662 systemd[1]: Reached target machines.target - Containers. Apr 30 00:36:28.114672 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 00:36:28.114682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:36:28.114691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:36:28.114701 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 00:36:28.114712 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:36:28.114721 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:36:28.114731 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:36:28.114740 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 00:36:28.114750 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:36:28.114761 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 00:36:28.114770 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 00:36:28.114780 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 00:36:28.114789 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 00:36:28.114800 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 00:36:28.114810 systemd[1]: Starting systemd-journald.service - Journal Service... 
Apr 30 00:36:28.114819 kernel: fuse: init (API version 7.39) Apr 30 00:36:28.114828 kernel: loop: module loaded Apr 30 00:36:28.114836 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:36:28.114846 kernel: ACPI: bus type drm_connector registered Apr 30 00:36:28.114855 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 00:36:28.114864 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 00:36:28.114874 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:36:28.114901 systemd-journald[1279]: Collecting audit messages is disabled. Apr 30 00:36:28.114921 systemd-journald[1279]: Journal started Apr 30 00:36:28.114942 systemd-journald[1279]: Runtime Journal (/run/log/journal/d7766a029efd47a8a270a8cd2fc9855b) is 8.0M, max 78.5M, 70.5M free. Apr 30 00:36:27.009638 systemd[1]: Queued start job for default target multi-user.target. Apr 30 00:36:27.170997 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 00:36:27.171380 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 00:36:27.171673 systemd[1]: systemd-journald.service: Consumed 3.151s CPU time. Apr 30 00:36:28.126309 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 00:36:28.126345 systemd[1]: Stopped verity-setup.service. Apr 30 00:36:28.145226 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:36:28.145993 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 00:36:28.152502 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 00:36:28.159035 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 00:36:28.164701 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 00:36:28.171048 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Apr 30 00:36:28.177943 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 00:36:28.183665 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 00:36:28.190400 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:36:28.197986 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 00:36:28.198123 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 00:36:28.204897 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:36:28.205027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:36:28.211674 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:36:28.211797 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:36:28.218030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:36:28.218168 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:36:28.225429 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 00:36:28.225552 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 00:36:28.231977 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:36:28.232096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:36:28.238945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:36:28.246074 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 00:36:28.253774 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 00:36:28.262670 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:36:28.278422 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Apr 30 00:36:28.289239 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 00:36:28.299288 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 00:36:28.306037 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 00:36:28.306076 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:36:28.313076 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 00:36:28.325415 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 00:36:28.332909 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 00:36:28.338663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:36:28.379327 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 00:36:28.386389 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 00:36:28.392817 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:36:28.393942 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 00:36:28.400059 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:36:28.402406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:36:28.411351 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 00:36:28.422309 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 30 00:36:28.444722 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 00:36:28.453822 systemd-journald[1279]: Time spent on flushing to /var/log/journal/d7766a029efd47a8a270a8cd2fc9855b is 64.631ms for 894 entries. Apr 30 00:36:28.453822 systemd-journald[1279]: System Journal (/var/log/journal/d7766a029efd47a8a270a8cd2fc9855b) is 11.8M, max 2.6G, 2.6G free. Apr 30 00:36:28.588953 systemd-journald[1279]: Received client request to flush runtime journal. Apr 30 00:36:28.589004 kernel: loop0: detected capacity change from 0 to 31320 Apr 30 00:36:28.589023 systemd-journald[1279]: /var/log/journal/d7766a029efd47a8a270a8cd2fc9855b/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Apr 30 00:36:28.589050 systemd-journald[1279]: Rotating system journal. Apr 30 00:36:28.456973 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 00:36:28.473331 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 00:36:28.483182 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 00:36:28.507900 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 00:36:28.521383 udevadm[1316]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 00:36:28.523793 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 00:36:28.538414 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 00:36:28.548063 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:36:28.560678 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Apr 30 00:36:28.560688 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. 
Apr 30 00:36:28.567791 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:36:28.580352 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 00:36:28.590425 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 00:36:28.632936 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 00:36:28.634484 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 00:36:28.812465 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 00:36:28.821087 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 00:36:28.830288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:36:28.846908 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Apr 30 00:36:28.846923 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Apr 30 00:36:28.851111 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:36:28.881183 kernel: loop1: detected capacity change from 0 to 114328 Apr 30 00:36:29.214395 kernel: loop2: detected capacity change from 0 to 194096 Apr 30 00:36:29.279177 kernel: loop3: detected capacity change from 0 to 114432 Apr 30 00:36:29.564291 kernel: loop4: detected capacity change from 0 to 31320 Apr 30 00:36:29.572332 kernel: loop5: detected capacity change from 0 to 114328 Apr 30 00:36:29.575148 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 00:36:29.589194 kernel: loop6: detected capacity change from 0 to 194096 Apr 30 00:36:29.603110 kernel: loop7: detected capacity change from 0 to 114432 Apr 30 00:36:29.598541 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 30 00:36:29.607855 (sd-merge)[1343]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Apr 30 00:36:29.608265 (sd-merge)[1343]: Merged extensions into '/usr'. Apr 30 00:36:29.614034 systemd[1]: Reloading requested from client PID 1313 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 00:36:29.614143 systemd[1]: Reloading... Apr 30 00:36:29.620181 systemd-udevd[1345]: Using default interface naming scheme 'v255'. Apr 30 00:36:29.705294 zram_generator::config[1370]: No configuration found. Apr 30 00:36:29.861202 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 00:36:29.861286 kernel: hv_vmbus: registering driver hv_balloon Apr 30 00:36:29.872911 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Apr 30 00:36:29.873000 kernel: hv_balloon: Memory hot add disabled on ARM64 Apr 30 00:36:29.935800 kernel: hv_vmbus: registering driver hyperv_fb Apr 30 00:36:29.935887 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Apr 30 00:36:29.946734 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Apr 30 00:36:29.954833 kernel: Console: switching to colour dummy device 80x25 Apr 30 00:36:29.956204 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 00:36:29.975298 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1375) Apr 30 00:36:29.975797 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:36:30.049277 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 30 00:36:30.049588 systemd[1]: Reloading finished in 435 ms. Apr 30 00:36:30.074567 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 30 00:36:30.082057 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 00:36:30.118636 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 00:36:30.139343 systemd[1]: Starting ensure-sysext.service... Apr 30 00:36:30.146355 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:36:30.157291 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:36:30.167335 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:36:30.180316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:36:30.190233 systemd[1]: Reloading requested from client PID 1499 ('systemctl') (unit ensure-sysext.service)... Apr 30 00:36:30.190252 systemd[1]: Reloading... Apr 30 00:36:30.201515 systemd-tmpfiles[1502]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 00:36:30.201767 systemd-tmpfiles[1502]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 00:36:30.202456 systemd-tmpfiles[1502]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 00:36:30.202666 systemd-tmpfiles[1502]: ACLs are not supported, ignoring. Apr 30 00:36:30.202863 systemd-tmpfiles[1502]: ACLs are not supported, ignoring. Apr 30 00:36:30.211600 systemd-tmpfiles[1502]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:36:30.211608 systemd-tmpfiles[1502]: Skipping /boot Apr 30 00:36:30.225234 systemd-tmpfiles[1502]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:36:30.225243 systemd-tmpfiles[1502]: Skipping /boot Apr 30 00:36:30.272191 zram_generator::config[1537]: No configuration found. 
Apr 30 00:36:30.371970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:36:30.450807 systemd[1]: Reloading finished in 260 ms.
Apr 30 00:36:30.465208 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 00:36:30.476633 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 00:36:30.483966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:36:30.503977 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:36:30.514381 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:36:30.523436 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 00:36:30.532407 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:36:30.543462 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:36:30.554509 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:36:30.562709 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:36:30.578029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:36:30.585324 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:36:30.593775 lvm[1606]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:36:30.608391 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:36:30.618388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:36:30.633518 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:36:30.644402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:36:30.644601 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:36:30.659252 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:36:30.669028 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:30.676559 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:36:30.686341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:36:30.686506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:36:30.695030 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:36:30.696245 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:36:30.707877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:36:30.708225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:36:30.717759 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:36:30.717901 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:36:30.728754 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:36:30.739416 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:36:30.748545 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:36:30.756994 systemd-resolved[1608]: Positive Trust Anchors:
Apr 30 00:36:30.757510 systemd-resolved[1608]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:36:30.757605 systemd-resolved[1608]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:36:30.758168 augenrules[1630]: No rules
Apr 30 00:36:30.759369 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:36:30.766504 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:36:30.766576 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:36:30.768182 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:36:30.780905 lvm[1641]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:36:30.806716 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:36:30.811324 systemd-resolved[1608]: Using system hostname 'ci-4081.3.3-a-8ba35441fd'.
Apr 30 00:36:30.814116 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:36:30.820635 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:36:30.831200 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:36:30.952820 systemd-networkd[1501]: lo: Link UP
Apr 30 00:36:30.952831 systemd-networkd[1501]: lo: Gained carrier
Apr 30 00:36:30.955082 systemd-networkd[1501]: Enumeration completed
Apr 30 00:36:30.955235 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:36:30.956447 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:30.956548 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:36:30.961589 systemd[1]: Reached target network.target - Network.
Apr 30 00:36:30.972347 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:36:31.010171 kernel: mlx5_core f14b:00:02.0 enP61771s1: Link up
Apr 30 00:36:31.036243 kernel: hv_netvsc 002248be-d99d-0022-48be-d99d002248be eth0: Data path switched to VF: enP61771s1
Apr 30 00:36:31.036397 systemd-networkd[1501]: enP61771s1: Link UP
Apr 30 00:36:31.036489 systemd-networkd[1501]: eth0: Link UP
Apr 30 00:36:31.036492 systemd-networkd[1501]: eth0: Gained carrier
Apr 30 00:36:31.036506 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:31.041406 systemd-networkd[1501]: enP61771s1: Gained carrier
Apr 30 00:36:31.050190 systemd-networkd[1501]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Apr 30 00:36:31.139641 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:36:31.146958 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:36:32.628307 systemd-networkd[1501]: enP61771s1: Gained IPv6LL
Apr 30 00:36:32.692269 systemd-networkd[1501]: eth0: Gained IPv6LL
Apr 30 00:36:32.695221 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 00:36:32.702937 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 00:36:35.411215 ldconfig[1308]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:36:35.426908 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:36:35.438373 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:36:35.452575 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:36:35.459444 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:36:35.465539 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:36:35.473085 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:36:35.480195 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:36:35.486111 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:36:35.492894 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:36:35.499749 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:36:35.499784 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:36:35.504722 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:36:35.549264 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:36:35.556829 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:36:35.566844 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:36:35.573015 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:36:35.578901 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:36:35.584149 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:36:35.589172 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:36:35.589205 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:36:35.591096 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 30 00:36:35.598287 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:36:35.615311 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 00:36:35.624391 (chronyd)[1656]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Apr 30 00:36:35.625062 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:36:35.631318 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:36:35.638052 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:36:35.646315 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:36:35.646358 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Apr 30 00:36:35.647807 chronyd[1666]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 30 00:36:35.649720 chronyd[1666]: Timezone right/UTC failed leap second check, ignoring
Apr 30 00:36:35.649993 chronyd[1666]: Loaded seccomp filter (level 2)
Apr 30 00:36:35.654307 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Apr 30 00:36:35.656435 KVP[1665]: KVP starting; pid is:1665
Apr 30 00:36:35.660791 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Apr 30 00:36:35.661859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:36:35.671519 KVP[1665]: KVP LIC Version: 3.1
Apr 30 00:36:35.672277 kernel: hv_utils: KVP IC version 4.0
Apr 30 00:36:35.672775 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:36:35.680780 jq[1662]: false
Apr 30 00:36:35.687327 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 00:36:35.694486 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:36:35.702055 extend-filesystems[1663]: Found loop4
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found loop5
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found loop6
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found loop7
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found sda
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found sda1
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found sda2
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found sda3
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found usr
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found sda4
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found sda6
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found sda7
Apr 30 00:36:35.707667 extend-filesystems[1663]: Found sda9
Apr 30 00:36:35.707667 extend-filesystems[1663]: Checking size of /dev/sda9
Apr 30 00:36:35.706807 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:36:35.787047 dbus-daemon[1661]: [system] SELinux support is enabled
Apr 30 00:36:35.843813 extend-filesystems[1663]: Old size kept for /dev/sda9
Apr 30 00:36:35.843813 extend-filesystems[1663]: Found sr0
Apr 30 00:36:35.904374 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1704)
Apr 30 00:36:35.715550 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:36:35.742402 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:36:35.758080 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:36:35.760064 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:36:35.764766 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:36:35.904926 update_engine[1689]: I20250430 00:36:35.890084 1689 main.cc:92] Flatcar Update Engine starting
Apr 30 00:36:35.904926 update_engine[1689]: I20250430 00:36:35.896845 1689 update_check_scheduler.cc:74] Next update check in 11m58s
Apr 30 00:36:35.788330 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:36:35.906362 jq[1694]: true
Apr 30 00:36:35.800902 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:36:35.813126 systemd[1]: Started chronyd.service - NTP client/server.
Apr 30 00:36:35.839110 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:36:35.839284 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:36:35.839528 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:36:35.839672 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:36:35.863488 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:36:35.863653 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:36:35.875250 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 00:36:35.891507 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:36:35.891694 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:36:35.928005 systemd-logind[1684]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 00:36:35.930258 systemd-logind[1684]: New seat seat0.
Apr 30 00:36:35.987886 coreos-metadata[1658]: Apr 30 00:36:35.956 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 30 00:36:35.987886 coreos-metadata[1658]: Apr 30 00:36:35.961 INFO Fetch successful
Apr 30 00:36:35.987886 coreos-metadata[1658]: Apr 30 00:36:35.961 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Apr 30 00:36:35.987886 coreos-metadata[1658]: Apr 30 00:36:35.965 INFO Fetch successful
Apr 30 00:36:35.987886 coreos-metadata[1658]: Apr 30 00:36:35.965 INFO Fetching http://168.63.129.16/machine/a3807b32-7cc8-4a40-99ae-e6754fad6a83/525a7998%2D89e6%2D4092%2D9f4a%2D632942d9bff3.%5Fci%2D4081.3.3%2Da%2D8ba35441fd?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Apr 30 00:36:35.987886 coreos-metadata[1658]: Apr 30 00:36:35.967 INFO Fetch successful
Apr 30 00:36:35.987886 coreos-metadata[1658]: Apr 30 00:36:35.967 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Apr 30 00:36:35.987886 coreos-metadata[1658]: Apr 30 00:36:35.980 INFO Fetch successful
Apr 30 00:36:35.981801 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:36:35.992974 jq[1718]: true
Apr 30 00:36:36.015010 (ntainerd)[1740]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:36:36.037414 tar[1717]: linux-arm64/helm
Apr 30 00:36:36.038671 dbus-daemon[1661]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 30 00:36:36.061956 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 00:36:36.076943 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:36:36.089855 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:36:36.090064 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:36:36.090199 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:36:36.101607 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:36:36.101730 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:36:36.123398 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:36:36.155053 bash[1775]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:36:36.168107 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:36:36.180373 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 30 00:36:36.388248 locksmithd[1777]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:36:36.729472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:36.752771 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:36:36.756478 tar[1717]: linux-arm64/LICENSE
Apr 30 00:36:36.756478 tar[1717]: linux-arm64/README.md
Apr 30 00:36:36.769518 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 00:36:36.805483 containerd[1740]: time="2025-04-30T00:36:36.805405020Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 00:36:36.857750 containerd[1740]: time="2025-04-30T00:36:36.857703060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:36.863945 containerd[1740]: time="2025-04-30T00:36:36.863904420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:36:36.864042 containerd[1740]: time="2025-04-30T00:36:36.864027580Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:36:36.864518 containerd[1740]: time="2025-04-30T00:36:36.864498820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:36:36.865909 containerd[1740]: time="2025-04-30T00:36:36.865885740Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:36:36.866003 containerd[1740]: time="2025-04-30T00:36:36.865988180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:36.866611 containerd[1740]: time="2025-04-30T00:36:36.866585940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:36:36.866699 containerd[1740]: time="2025-04-30T00:36:36.866685300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:36.867471 containerd[1740]: time="2025-04-30T00:36:36.867440900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:36:36.867554 containerd[1740]: time="2025-04-30T00:36:36.867540820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:36.867653 containerd[1740]: time="2025-04-30T00:36:36.867638460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:36:36.868406 containerd[1740]: time="2025-04-30T00:36:36.868179740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:36.868406 containerd[1740]: time="2025-04-30T00:36:36.868289940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:36.868914 containerd[1740]: time="2025-04-30T00:36:36.868878460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:36.869788 containerd[1740]: time="2025-04-30T00:36:36.869765300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:36:36.869885 containerd[1740]: time="2025-04-30T00:36:36.869869260Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:36:36.870104 containerd[1740]: time="2025-04-30T00:36:36.870044540Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:36:36.871176 containerd[1740]: time="2025-04-30T00:36:36.870251220Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:36:36.892000 containerd[1740]: time="2025-04-30T00:36:36.891962460Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:36:36.892114 containerd[1740]: time="2025-04-30T00:36:36.892101180Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:36:36.892343 containerd[1740]: time="2025-04-30T00:36:36.892326100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:36:36.892770 containerd[1740]: time="2025-04-30T00:36:36.892400420Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:36:36.892770 containerd[1740]: time="2025-04-30T00:36:36.892421540Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:36:36.892770 containerd[1740]: time="2025-04-30T00:36:36.892572860Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:36:36.893438 containerd[1740]: time="2025-04-30T00:36:36.893415820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:36:36.893650 containerd[1740]: time="2025-04-30T00:36:36.893619980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:36:36.893721 containerd[1740]: time="2025-04-30T00:36:36.893707500Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:36:36.894665 containerd[1740]: time="2025-04-30T00:36:36.894643380Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:36:36.894757 containerd[1740]: time="2025-04-30T00:36:36.894743220Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:36:36.894855 containerd[1740]: time="2025-04-30T00:36:36.894840380Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:36:36.894927 containerd[1740]: time="2025-04-30T00:36:36.894911100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:36:36.894994 containerd[1740]: time="2025-04-30T00:36:36.894968780Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:36:36.895076 containerd[1740]: time="2025-04-30T00:36:36.895041340Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:36:36.895145 containerd[1740]: time="2025-04-30T00:36:36.895123700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895199580Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895230660Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895259460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895277260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895290500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895309020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895329580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895347860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895370900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895387100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895403940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895422980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895436220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.895901 containerd[1740]: time="2025-04-30T00:36:36.895454060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895475620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895496060Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895521300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895537700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895553340Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895610940Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895635780Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895647140Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895662740Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895676180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895691260Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895701860Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 00:36:36.896192 containerd[1740]: time="2025-04-30T00:36:36.895716340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 00:36:36.897338 containerd[1740]: time="2025-04-30T00:36:36.897261700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 00:36:36.897338 containerd[1740]: time="2025-04-30T00:36:36.897337700Z" level=info msg="Connect containerd service"
Apr 30 00:36:36.897484 containerd[1740]: time="2025-04-30T00:36:36.897373300Z" level=info msg="using legacy CRI server"
Apr 30 00:36:36.897484 containerd[1740]: time="2025-04-30T00:36:36.897381140Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 00:36:36.897484 containerd[1740]: time="2025-04-30T00:36:36.897460860Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 00:36:36.898071 containerd[1740]: time="2025-04-30T00:36:36.898040020Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:36:36.898595 containerd[1740]: time="2025-04-30T00:36:36.898195940Z" level=info msg="Start subscribing containerd event"
Apr 30 00:36:36.898595 containerd[1740]: time="2025-04-30T00:36:36.898254620Z" level=info msg="Start recovering state"
Apr 30 00:36:36.898595 containerd[1740]: time="2025-04-30T00:36:36.898311780Z" level=info msg="Start event monitor"
Apr 30 00:36:36.898595 containerd[1740]: time="2025-04-30T00:36:36.898322100Z" level=info msg="Start snapshots syncer"
Apr 30 00:36:36.898595 containerd[1740]: time="2025-04-30T00:36:36.898335340Z" level=info msg="Start cni network conf syncer for default"
Apr 30 00:36:36.898595 containerd[1740]: time="2025-04-30T00:36:36.898343060Z" level=info msg="Start streaming server"
Apr 30 00:36:36.899309 containerd[1740]: time="2025-04-30T00:36:36.899280260Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 00:36:36.899350 containerd[1740]: time="2025-04-30T00:36:36.899325700Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 00:36:36.905346 containerd[1740]: time="2025-04-30T00:36:36.899374660Z" level=info msg="containerd successfully booted in 0.096082s"
Apr 30 00:36:36.899456 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 00:36:37.206706 kubelet[1793]: E0430 00:36:37.206625 1793 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:36:37.209682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:36:37.209807 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:36:38.148964 sshd_keygen[1691]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:36:38.166264 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:36:38.180686 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:36:38.187033 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 30 00:36:38.193078 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:36:38.193663 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:36:38.209308 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:36:38.226369 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 30 00:36:38.255596 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:36:38.269447 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:36:38.276336 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 00:36:38.283139 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:36:38.288618 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:36:38.294612 systemd[1]: Startup finished in 671ms (kernel) + 12.132s (initrd) + 13.629s (userspace) = 26.433s. Apr 30 00:36:38.516719 login[1831]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Apr 30 00:36:38.517354 login[1830]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:36:38.525573 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:36:38.538384 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:36:38.543210 systemd-logind[1684]: New session 1 of user core. Apr 30 00:36:38.550254 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:36:38.562442 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 30 00:36:38.564860 (systemd)[1838]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:36:38.682995 systemd[1838]: Queued start job for default target default.target. Apr 30 00:36:38.690424 systemd[1838]: Created slice app.slice - User Application Slice. Apr 30 00:36:38.690456 systemd[1838]: Reached target paths.target - Paths. Apr 30 00:36:38.690468 systemd[1838]: Reached target timers.target - Timers. Apr 30 00:36:38.691640 systemd[1838]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:36:38.704100 systemd[1838]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:36:38.704228 systemd[1838]: Reached target sockets.target - Sockets. Apr 30 00:36:38.704242 systemd[1838]: Reached target basic.target - Basic System. Apr 30 00:36:38.704275 systemd[1838]: Reached target default.target - Main User Target. Apr 30 00:36:38.704300 systemd[1838]: Startup finished in 134ms. Apr 30 00:36:38.704519 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:36:38.712116 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:36:39.518267 login[1831]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:36:39.523591 systemd-logind[1684]: New session 2 of user core. Apr 30 00:36:39.533376 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 30 00:36:39.838367 waagent[1827]: 2025-04-30T00:36:39.838209Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 30 00:36:39.844405 waagent[1827]: 2025-04-30T00:36:39.844349Z INFO Daemon Daemon OS: flatcar 4081.3.3 Apr 30 00:36:39.849477 waagent[1827]: 2025-04-30T00:36:39.849428Z INFO Daemon Daemon Python: 3.11.9 Apr 30 00:36:39.853791 waagent[1827]: 2025-04-30T00:36:39.853731Z INFO Daemon Daemon Run daemon Apr 30 00:36:39.857787 waagent[1827]: 2025-04-30T00:36:39.857731Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.3' Apr 30 00:36:39.866678 waagent[1827]: 2025-04-30T00:36:39.866631Z INFO Daemon Daemon Using waagent for provisioning Apr 30 00:36:39.871873 waagent[1827]: 2025-04-30T00:36:39.871831Z INFO Daemon Daemon Activate resource disk Apr 30 00:36:39.876472 waagent[1827]: 2025-04-30T00:36:39.876429Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 30 00:36:39.888055 waagent[1827]: 2025-04-30T00:36:39.888004Z INFO Daemon Daemon Found device: None Apr 30 00:36:39.892841 waagent[1827]: 2025-04-30T00:36:39.892798Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 30 00:36:39.901892 waagent[1827]: 2025-04-30T00:36:39.901840Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 30 00:36:39.917431 waagent[1827]: 2025-04-30T00:36:39.917372Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 00:36:39.923293 waagent[1827]: 2025-04-30T00:36:39.923247Z INFO Daemon Daemon Running default provisioning handler Apr 30 00:36:39.935582 waagent[1827]: 2025-04-30T00:36:39.935524Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Apr 30 00:36:39.949399 waagent[1827]: 2025-04-30T00:36:39.949345Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 30 00:36:39.959309 waagent[1827]: 2025-04-30T00:36:39.959228Z INFO Daemon Daemon cloud-init is enabled: False Apr 30 00:36:39.964663 waagent[1827]: 2025-04-30T00:36:39.964616Z INFO Daemon Daemon Copying ovf-env.xml Apr 30 00:36:40.107058 waagent[1827]: 2025-04-30T00:36:40.106908Z INFO Daemon Daemon Successfully mounted dvd Apr 30 00:36:40.148941 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 30 00:36:40.150292 waagent[1827]: 2025-04-30T00:36:40.149997Z INFO Daemon Daemon Detect protocol endpoint Apr 30 00:36:40.155032 waagent[1827]: 2025-04-30T00:36:40.154982Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 00:36:40.160718 waagent[1827]: 2025-04-30T00:36:40.160675Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 30 00:36:40.167849 waagent[1827]: 2025-04-30T00:36:40.167801Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 30 00:36:40.173539 waagent[1827]: 2025-04-30T00:36:40.173492Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 30 00:36:40.178779 waagent[1827]: 2025-04-30T00:36:40.178735Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 30 00:36:40.213405 waagent[1827]: 2025-04-30T00:36:40.213359Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 30 00:36:40.220161 waagent[1827]: 2025-04-30T00:36:40.220122Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 30 00:36:40.225434 waagent[1827]: 2025-04-30T00:36:40.225393Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 30 00:36:40.352393 waagent[1827]: 2025-04-30T00:36:40.352287Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 30 00:36:40.359099 waagent[1827]: 2025-04-30T00:36:40.359000Z INFO Daemon Daemon Forcing an update of the goal state. 
Apr 30 00:36:40.368799 waagent[1827]: 2025-04-30T00:36:40.368749Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 00:36:40.390474 waagent[1827]: 2025-04-30T00:36:40.390431Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Apr 30 00:36:40.396353 waagent[1827]: 2025-04-30T00:36:40.396309Z INFO Daemon Apr 30 00:36:40.399337 waagent[1827]: 2025-04-30T00:36:40.399288Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6066f1f2-2326-44ce-b987-dd715892bc98 eTag: 10329681405067462521 source: Fabric] Apr 30 00:36:40.410884 waagent[1827]: 2025-04-30T00:36:40.410840Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Apr 30 00:36:40.417939 waagent[1827]: 2025-04-30T00:36:40.417892Z INFO Daemon Apr 30 00:36:40.420815 waagent[1827]: 2025-04-30T00:36:40.420775Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 30 00:36:40.431617 waagent[1827]: 2025-04-30T00:36:40.431580Z INFO Daemon Daemon Downloading artifacts profile blob Apr 30 00:36:40.510475 waagent[1827]: 2025-04-30T00:36:40.510401Z INFO Daemon Downloaded certificate {'thumbprint': 'CCECBF30C8923E75CEA946E2894DFA12E9AA3BCD', 'hasPrivateKey': True} Apr 30 00:36:40.520431 waagent[1827]: 2025-04-30T00:36:40.520385Z INFO Daemon Downloaded certificate {'thumbprint': '3238E2427C80AB9634C089CD731A635610FD8C98', 'hasPrivateKey': False} Apr 30 00:36:40.530655 waagent[1827]: 2025-04-30T00:36:40.530611Z INFO Daemon Fetch goal state completed Apr 30 00:36:40.542109 waagent[1827]: 2025-04-30T00:36:40.542038Z INFO Daemon Daemon Starting provisioning Apr 30 00:36:40.547272 waagent[1827]: 2025-04-30T00:36:40.547224Z INFO Daemon Daemon Handle ovf-env.xml. 
Apr 30 00:36:40.551875 waagent[1827]: 2025-04-30T00:36:40.551835Z INFO Daemon Daemon Set hostname [ci-4081.3.3-a-8ba35441fd] Apr 30 00:36:40.574186 waagent[1827]: 2025-04-30T00:36:40.573475Z INFO Daemon Daemon Publish hostname [ci-4081.3.3-a-8ba35441fd] Apr 30 00:36:40.580041 waagent[1827]: 2025-04-30T00:36:40.579989Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 30 00:36:40.586388 waagent[1827]: 2025-04-30T00:36:40.586345Z INFO Daemon Daemon Primary interface is [eth0] Apr 30 00:36:40.614197 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:36:40.614206 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:36:40.614248 systemd-networkd[1501]: eth0: DHCP lease lost Apr 30 00:36:40.615372 waagent[1827]: 2025-04-30T00:36:40.615133Z INFO Daemon Daemon Create user account if not exists Apr 30 00:36:40.620881 waagent[1827]: 2025-04-30T00:36:40.620833Z INFO Daemon Daemon User core already exists, skip useradd Apr 30 00:36:40.626577 waagent[1827]: 2025-04-30T00:36:40.626536Z INFO Daemon Daemon Configure sudoer Apr 30 00:36:40.631133 waagent[1827]: 2025-04-30T00:36:40.631084Z INFO Daemon Daemon Configure sshd Apr 30 00:36:40.635215 systemd-networkd[1501]: eth0: DHCPv6 lease lost Apr 30 00:36:40.635645 waagent[1827]: 2025-04-30T00:36:40.635588Z INFO Daemon Daemon Added a configuration snippet to disable SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 30 00:36:40.648373 waagent[1827]: 2025-04-30T00:36:40.648328Z INFO Daemon Daemon Deploy ssh public key. 
Apr 30 00:36:40.664200 systemd-networkd[1501]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 30 00:36:41.796065 waagent[1827]: 2025-04-30T00:36:41.796005Z INFO Daemon Daemon Provisioning complete Apr 30 00:36:41.813597 waagent[1827]: 2025-04-30T00:36:41.813549Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 30 00:36:41.821153 waagent[1827]: 2025-04-30T00:36:41.821106Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Apr 30 00:36:41.831327 waagent[1827]: 2025-04-30T00:36:41.831283Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 30 00:36:41.956083 waagent[1893]: 2025-04-30T00:36:41.956010Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 30 00:36:41.959207 waagent[1893]: 2025-04-30T00:36:41.958679Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.3 Apr 30 00:36:41.959207 waagent[1893]: 2025-04-30T00:36:41.958773Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 30 00:36:41.981187 waagent[1893]: 2025-04-30T00:36:41.981078Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 30 00:36:41.981379 waagent[1893]: 2025-04-30T00:36:41.981336Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 00:36:41.981442 waagent[1893]: 2025-04-30T00:36:41.981412Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 00:36:41.989300 waagent[1893]: 2025-04-30T00:36:41.989246Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 00:36:41.999090 waagent[1893]: 2025-04-30T00:36:41.999047Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Apr 30 00:36:41.999610 waagent[1893]: 2025-04-30T00:36:41.999568Z INFO ExtHandler Apr 30 00:36:41.999691 waagent[1893]: 
2025-04-30T00:36:41.999657Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f3ab15f6-1306-4aff-ba16-2f3c4151cbef eTag: 10329681405067462521 source: Fabric] Apr 30 00:36:41.999974 waagent[1893]: 2025-04-30T00:36:41.999934Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Apr 30 00:36:42.000566 waagent[1893]: 2025-04-30T00:36:42.000521Z INFO ExtHandler Apr 30 00:36:42.000632 waagent[1893]: 2025-04-30T00:36:42.000603Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 30 00:36:42.004445 waagent[1893]: 2025-04-30T00:36:42.004411Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 30 00:36:42.088962 waagent[1893]: 2025-04-30T00:36:42.088826Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CCECBF30C8923E75CEA946E2894DFA12E9AA3BCD', 'hasPrivateKey': True} Apr 30 00:36:42.089345 waagent[1893]: 2025-04-30T00:36:42.089301Z INFO ExtHandler Downloaded certificate {'thumbprint': '3238E2427C80AB9634C089CD731A635610FD8C98', 'hasPrivateKey': False} Apr 30 00:36:42.089749 waagent[1893]: 2025-04-30T00:36:42.089708Z INFO ExtHandler Fetch goal state completed Apr 30 00:36:42.105102 waagent[1893]: 2025-04-30T00:36:42.105051Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1893 Apr 30 00:36:42.105263 waagent[1893]: 2025-04-30T00:36:42.105225Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 30 00:36:42.106835 waagent[1893]: 2025-04-30T00:36:42.106791Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.3', '', 'Flatcar Container Linux by Kinvolk'] Apr 30 00:36:42.107246 waagent[1893]: 2025-04-30T00:36:42.107204Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 30 00:36:42.138929 waagent[1893]: 2025-04-30T00:36:42.138883Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 30 
00:36:42.139113 waagent[1893]: 2025-04-30T00:36:42.139072Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 30 00:36:42.145304 waagent[1893]: 2025-04-30T00:36:42.145261Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 30 00:36:42.151356 systemd[1]: Reloading requested from client PID 1908 ('systemctl') (unit waagent.service)... Apr 30 00:36:42.151566 systemd[1]: Reloading... Apr 30 00:36:42.229206 zram_generator::config[1945]: No configuration found. Apr 30 00:36:42.321873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:36:42.400128 systemd[1]: Reloading finished in 248 ms. Apr 30 00:36:42.429645 waagent[1893]: 2025-04-30T00:36:42.425796Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 30 00:36:42.431013 systemd[1]: Reloading requested from client PID 1996 ('systemctl') (unit waagent.service)... Apr 30 00:36:42.431025 systemd[1]: Reloading... Apr 30 00:36:42.498196 zram_generator::config[2030]: No configuration found. Apr 30 00:36:42.599466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:36:42.678399 systemd[1]: Reloading finished in 246 ms. 
Apr 30 00:36:42.709068 waagent[1893]: 2025-04-30T00:36:42.708408Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 30 00:36:42.709068 waagent[1893]: 2025-04-30T00:36:42.708575Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 30 00:36:43.035229 waagent[1893]: 2025-04-30T00:36:43.034387Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 30 00:36:43.035229 waagent[1893]: 2025-04-30T00:36:43.034973Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 30 00:36:43.035799 waagent[1893]: 2025-04-30T00:36:43.035743Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 00:36:43.035875 waagent[1893]: 2025-04-30T00:36:43.035843Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 00:36:43.036077 waagent[1893]: 2025-04-30T00:36:43.036039Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 30 00:36:43.036202 waagent[1893]: 2025-04-30T00:36:43.036129Z INFO ExtHandler ExtHandler Starting env monitor service. 
Apr 30 00:36:43.036518 waagent[1893]: 2025-04-30T00:36:43.036296Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 30 00:36:43.036518 waagent[1893]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 30 00:36:43.036518 waagent[1893]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Apr 30 00:36:43.036518 waagent[1893]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 30 00:36:43.036518 waagent[1893]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 30 00:36:43.036518 waagent[1893]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 00:36:43.036518 waagent[1893]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 00:36:43.036801 waagent[1893]: 2025-04-30T00:36:43.036700Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 00:36:43.036891 waagent[1893]: 2025-04-30T00:36:43.036839Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 30 00:36:43.037337 waagent[1893]: 2025-04-30T00:36:43.037274Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 30 00:36:43.037843 waagent[1893]: 2025-04-30T00:36:43.037765Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 30 00:36:43.037913 waagent[1893]: 2025-04-30T00:36:43.037838Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 00:36:43.038180 waagent[1893]: 2025-04-30T00:36:43.038008Z INFO EnvHandler ExtHandler Configure routes Apr 30 00:36:43.038366 waagent[1893]: 2025-04-30T00:36:43.038314Z INFO EnvHandler ExtHandler Gateway:None Apr 30 00:36:43.038432 waagent[1893]: 2025-04-30T00:36:43.038400Z INFO EnvHandler ExtHandler Routes:None Apr 30 00:36:43.038828 waagent[1893]: 2025-04-30T00:36:43.038755Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 30 00:36:43.038929 waagent[1893]: 2025-04-30T00:36:43.038818Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Apr 30 00:36:43.039266 waagent[1893]: 2025-04-30T00:36:43.039183Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 30 00:36:43.045384 waagent[1893]: 2025-04-30T00:36:43.045328Z INFO ExtHandler ExtHandler Apr 30 00:36:43.045981 waagent[1893]: 2025-04-30T00:36:43.045908Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 55fa3f70-341a-451c-b6d2-56c7939c6dc6 correlation 3768bf41-4352-4b07-9d76-a995dfdd3d3b created: 2025-04-30T00:35:25.488258Z] Apr 30 00:36:43.046903 waagent[1893]: 2025-04-30T00:36:43.046859Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Apr 30 00:36:43.048207 waagent[1893]: 2025-04-30T00:36:43.047542Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Apr 30 00:36:43.084894 waagent[1893]: 2025-04-30T00:36:43.084834Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F646DE7B-1D8D-4D3F-A62B-62C8CF435CDA;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Apr 30 00:36:43.100595 waagent[1893]: 2025-04-30T00:36:43.100530Z INFO MonitorHandler ExtHandler Network interfaces: Apr 30 00:36:43.100595 waagent[1893]: Executing ['ip', '-a', '-o', 'link']: Apr 30 00:36:43.100595 waagent[1893]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 30 00:36:43.100595 waagent[1893]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:be:d9:9d brd ff:ff:ff:ff:ff:ff Apr 30 00:36:43.100595 waagent[1893]: 3: enP61771s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:be:d9:9d brd ff:ff:ff:ff:ff:ff\ altname enP61771p0s2 Apr 30 
00:36:43.100595 waagent[1893]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 30 00:36:43.100595 waagent[1893]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 30 00:36:43.100595 waagent[1893]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 30 00:36:43.100595 waagent[1893]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 30 00:36:43.100595 waagent[1893]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 30 00:36:43.100595 waagent[1893]: 2: eth0 inet6 fe80::222:48ff:febe:d99d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 30 00:36:43.100595 waagent[1893]: 3: enP61771s1 inet6 fe80::222:48ff:febe:d99d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 30 00:36:43.142189 waagent[1893]: 2025-04-30T00:36:43.141348Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Apr 30 00:36:43.142189 waagent[1893]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:36:43.142189 waagent[1893]: pkts bytes target prot opt in out source destination Apr 30 00:36:43.142189 waagent[1893]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:36:43.142189 waagent[1893]: pkts bytes target prot opt in out source destination Apr 30 00:36:43.142189 waagent[1893]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:36:43.142189 waagent[1893]: pkts bytes target prot opt in out source destination Apr 30 00:36:43.142189 waagent[1893]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 30 00:36:43.142189 waagent[1893]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 30 00:36:43.142189 waagent[1893]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 30 00:36:43.144556 waagent[1893]: 2025-04-30T00:36:43.144506Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 30 00:36:43.144556 waagent[1893]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:36:43.144556 waagent[1893]: pkts bytes target prot opt in out source destination Apr 30 00:36:43.144556 waagent[1893]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:36:43.144556 waagent[1893]: pkts bytes target prot opt in out source destination Apr 30 00:36:43.144556 waagent[1893]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 00:36:43.144556 waagent[1893]: pkts bytes target prot opt in out source destination Apr 30 00:36:43.144556 waagent[1893]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 30 00:36:43.144556 waagent[1893]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 30 00:36:43.144556 waagent[1893]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 30 00:36:43.144796 waagent[1893]: 2025-04-30T00:36:43.144759Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 30 00:36:47.447951 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1. Apr 30 00:36:47.456318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:36:47.543637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:36:47.547016 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:36:47.587425 kubelet[2124]: E0430 00:36:47.587385 2124 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:36:47.589915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:36:47.590035 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:36:57.698143 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 00:36:57.707414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:36:57.791756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:36:57.794975 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:36:57.849178 kubelet[2140]: E0430 00:36:57.849112 2140 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:36:57.851744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:36:57.851995 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:36:59.438999 chronyd[1666]: Selected source PHC0 Apr 30 00:37:07.948059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 00:37:07.953359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:37:08.248903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:37:08.252209 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:37:08.288334 kubelet[2155]: E0430 00:37:08.288268 2155 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:37:08.290802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:37:08.291042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:37:12.630056 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Apr 30 00:37:12.631148 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:53734.service - OpenSSH per-connection server daemon (10.200.16.10:53734). Apr 30 00:37:13.167766 sshd[2164]: Accepted publickey for core from 10.200.16.10 port 53734 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:37:13.169094 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:37:13.173766 systemd-logind[1684]: New session 3 of user core. Apr 30 00:37:13.180375 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:37:13.570387 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:53744.service - OpenSSH per-connection server daemon (10.200.16.10:53744). Apr 30 00:37:13.975067 sshd[2169]: Accepted publickey for core from 10.200.16.10 port 53744 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:37:13.976372 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:37:13.980127 systemd-logind[1684]: New session 4 of user core. Apr 30 00:37:13.987355 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:37:14.276889 sshd[2169]: pam_unix(sshd:session): session closed for user core Apr 30 00:37:14.280827 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:53744.service: Deactivated successfully. Apr 30 00:37:14.282366 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:37:14.283036 systemd-logind[1684]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:37:14.284042 systemd-logind[1684]: Removed session 4. Apr 30 00:37:14.357574 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:53754.service - OpenSSH per-connection server daemon (10.200.16.10:53754). 
Apr 30 00:37:14.802790 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 53754 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:37:14.804110 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:37:14.808032 systemd-logind[1684]: New session 5 of user core.
Apr 30 00:37:14.813285 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 00:37:15.123322 sshd[2176]: pam_unix(sshd:session): session closed for user core
Apr 30 00:37:15.126919 systemd-logind[1684]: Session 5 logged out. Waiting for processes to exit.
Apr 30 00:37:15.127920 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:53754.service: Deactivated successfully.
Apr 30 00:37:15.129547 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 00:37:15.130408 systemd-logind[1684]: Removed session 5.
Apr 30 00:37:15.197507 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:53762.service - OpenSSH per-connection server daemon (10.200.16.10:53762).
Apr 30 00:37:15.609620 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 53762 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:37:15.610873 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:37:15.615439 systemd-logind[1684]: New session 6 of user core.
Apr 30 00:37:15.622380 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 00:37:15.909387 sshd[2183]: pam_unix(sshd:session): session closed for user core
Apr 30 00:37:15.912512 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:53762.service: Deactivated successfully.
Apr 30 00:37:15.913914 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 00:37:15.915632 systemd-logind[1684]: Session 6 logged out. Waiting for processes to exit.
Apr 30 00:37:15.916562 systemd-logind[1684]: Removed session 6.
Apr 30 00:37:15.988336 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:53776.service - OpenSSH per-connection server daemon (10.200.16.10:53776).
Apr 30 00:37:16.403550 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 53776 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:37:16.404809 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:37:16.408409 systemd-logind[1684]: New session 7 of user core.
Apr 30 00:37:16.416277 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 00:37:16.720379 sudo[2193]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 00:37:16.720652 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:37:16.748366 sudo[2193]: pam_unix(sudo:session): session closed for user root
Apr 30 00:37:16.813262 sshd[2190]: pam_unix(sshd:session): session closed for user core
Apr 30 00:37:16.816875 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:53776.service: Deactivated successfully.
Apr 30 00:37:16.818395 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 00:37:16.819037 systemd-logind[1684]: Session 7 logged out. Waiting for processes to exit.
Apr 30 00:37:16.819897 systemd-logind[1684]: Removed session 7.
Apr 30 00:37:16.895635 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:53784.service - OpenSSH per-connection server daemon (10.200.16.10:53784).
Apr 30 00:37:17.341229 sshd[2198]: Accepted publickey for core from 10.200.16.10 port 53784 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:37:17.342535 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:37:17.346078 systemd-logind[1684]: New session 8 of user core.
Apr 30 00:37:17.356353 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 00:37:17.596051 sudo[2202]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 00:37:17.596345 sudo[2202]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:37:17.599350 sudo[2202]: pam_unix(sudo:session): session closed for user root
Apr 30 00:37:17.603442 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 00:37:17.603677 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:37:17.616385 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 00:37:17.618170 auditctl[2205]: No rules
Apr 30 00:37:17.618895 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:37:17.619081 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 00:37:17.620674 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:37:17.642092 augenrules[2223]: No rules
Apr 30 00:37:17.643431 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:37:17.644632 sudo[2201]: pam_unix(sudo:session): session closed for user root
Apr 30 00:37:17.715070 sshd[2198]: pam_unix(sshd:session): session closed for user core
Apr 30 00:37:17.718319 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:53784.service: Deactivated successfully.
Apr 30 00:37:17.719727 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 00:37:17.720372 systemd-logind[1684]: Session 8 logged out. Waiting for processes to exit.
Apr 30 00:37:17.721284 systemd-logind[1684]: Removed session 8.
Apr 30 00:37:17.794297 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:53794.service - OpenSSH per-connection server daemon (10.200.16.10:53794).
Apr 30 00:37:17.989090 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Apr 30 00:37:18.204707 sshd[2231]: Accepted publickey for core from 10.200.16.10 port 53794 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:37:18.207470 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:37:18.211071 systemd-logind[1684]: New session 9 of user core.
Apr 30 00:37:18.216318 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 00:37:18.442279 sudo[2234]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 00:37:18.442562 sudo[2234]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:37:18.443433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 30 00:37:18.448341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:18.594829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:18.598410 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:37:18.631745 kubelet[2247]: E0430 00:37:18.631695 2247 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:37:18.634319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:37:18.634451 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:37:19.739615 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 00:37:19.739616 (dockerd)[2265]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 00:37:20.485378 dockerd[2265]: time="2025-04-30T00:37:20.485325010Z" level=info msg="Starting up"
Apr 30 00:37:20.863895 update_engine[1689]: I20250430 00:37:20.863201 1689 update_attempter.cc:509] Updating boot flags...
Apr 30 00:37:20.925193 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2297)
Apr 30 00:37:20.959717 dockerd[2265]: time="2025-04-30T00:37:20.959401132Z" level=info msg="Loading containers: start."
Apr 30 00:37:21.207177 kernel: Initializing XFRM netlink socket
Apr 30 00:37:21.397030 systemd-networkd[1501]: docker0: Link UP
Apr 30 00:37:21.427724 dockerd[2265]: time="2025-04-30T00:37:21.427197855Z" level=info msg="Loading containers: done."
Apr 30 00:37:21.449996 dockerd[2265]: time="2025-04-30T00:37:21.449953857Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 00:37:21.450291 dockerd[2265]: time="2025-04-30T00:37:21.450273537Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 00:37:21.450469 dockerd[2265]: time="2025-04-30T00:37:21.450452457Z" level=info msg="Daemon has completed initialization"
Apr 30 00:37:21.521198 dockerd[2265]: time="2025-04-30T00:37:21.521046903Z" level=info msg="API listen on /run/docker.sock"
Apr 30 00:37:21.523237 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 00:37:23.331260 containerd[1740]: time="2025-04-30T00:37:23.331165205Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 00:37:24.340320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2187472574.mount: Deactivated successfully.
Apr 30 00:37:26.158196 containerd[1740]: time="2025-04-30T00:37:26.157866745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:26.160459 containerd[1740]: time="2025-04-30T00:37:26.160246826Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150"
Apr 30 00:37:26.164621 containerd[1740]: time="2025-04-30T00:37:26.164573028Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:26.168860 containerd[1740]: time="2025-04-30T00:37:26.168819469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:26.170034 containerd[1740]: time="2025-04-30T00:37:26.169859670Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.838654105s"
Apr 30 00:37:26.170034 containerd[1740]: time="2025-04-30T00:37:26.169892910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
Apr 30 00:37:26.189402 containerd[1740]: time="2025-04-30T00:37:26.189362118Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 00:37:28.271513 containerd[1740]: time="2025-04-30T00:37:28.271451757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:28.274890 containerd[1740]: time="2025-04-30T00:37:28.274864798Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550"
Apr 30 00:37:28.279565 containerd[1740]: time="2025-04-30T00:37:28.279524120Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:28.286556 containerd[1740]: time="2025-04-30T00:37:28.286515843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:28.287600 containerd[1740]: time="2025-04-30T00:37:28.287475643Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 2.097846205s"
Apr 30 00:37:28.287600 containerd[1740]: time="2025-04-30T00:37:28.287511323Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
Apr 30 00:37:28.306644 containerd[1740]: time="2025-04-30T00:37:28.306469611Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 00:37:28.697863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 30 00:37:28.708481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:28.794804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:28.798563 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:37:28.842405 kubelet[2521]: E0430 00:37:28.842338 2521 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:37:28.844942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:37:28.845225 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:37:30.158190 containerd[1740]: time="2025-04-30T00:37:30.158021718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:30.163249 containerd[1740]: time="2025-04-30T00:37:30.163107360Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945"
Apr 30 00:37:30.170896 containerd[1740]: time="2025-04-30T00:37:30.170860443Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:30.179038 containerd[1740]: time="2025-04-30T00:37:30.179000326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:30.180166 containerd[1740]: time="2025-04-30T00:37:30.180123967Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.873617076s"
Apr 30 00:37:30.180267 containerd[1740]: time="2025-04-30T00:37:30.180250447Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
Apr 30 00:37:30.198509 containerd[1740]: time="2025-04-30T00:37:30.198470374Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 00:37:31.361649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2267922138.mount: Deactivated successfully.
Apr 30 00:37:32.150840 containerd[1740]: time="2025-04-30T00:37:32.150786260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:32.155021 containerd[1740]: time="2025-04-30T00:37:32.154967063Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705"
Apr 30 00:37:32.158000 containerd[1740]: time="2025-04-30T00:37:32.157952464Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:32.163116 containerd[1740]: time="2025-04-30T00:37:32.163060027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:32.163859 containerd[1740]: time="2025-04-30T00:37:32.163739028Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.965220974s"
Apr 30 00:37:32.163859 containerd[1740]: time="2025-04-30T00:37:32.163768988Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
Apr 30 00:37:32.183268 containerd[1740]: time="2025-04-30T00:37:32.183229118Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 00:37:32.944121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623052684.mount: Deactivated successfully.
Apr 30 00:37:34.112143 containerd[1740]: time="2025-04-30T00:37:34.112101303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:34.114613 containerd[1740]: time="2025-04-30T00:37:34.114582184Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Apr 30 00:37:34.121562 containerd[1740]: time="2025-04-30T00:37:34.121520988Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:34.127261 containerd[1740]: time="2025-04-30T00:37:34.127218031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:34.128598 containerd[1740]: time="2025-04-30T00:37:34.128294672Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.945025314s"
Apr 30 00:37:34.128598 containerd[1740]: time="2025-04-30T00:37:34.128328512Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Apr 30 00:37:34.145928 containerd[1740]: time="2025-04-30T00:37:34.145891202Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 00:37:34.768339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051321354.mount: Deactivated successfully.
Apr 30 00:37:34.825977 containerd[1740]: time="2025-04-30T00:37:34.825181297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:34.829223 containerd[1740]: time="2025-04-30T00:37:34.829185259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Apr 30 00:37:34.836417 containerd[1740]: time="2025-04-30T00:37:34.836373383Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:34.842656 containerd[1740]: time="2025-04-30T00:37:34.842615786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:34.843492 containerd[1740]: time="2025-04-30T00:37:34.843362307Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 697.433945ms"
Apr 30 00:37:34.843492 containerd[1740]: time="2025-04-30T00:37:34.843395987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Apr 30 00:37:34.860908 containerd[1740]: time="2025-04-30T00:37:34.860692116Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 00:37:35.592660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605948174.mount: Deactivated successfully.
Apr 30 00:37:38.947950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 30 00:37:38.954367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:39.052636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:39.062513 (kubelet)[2661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:37:39.100033 kubelet[2661]: E0430 00:37:39.099973 2661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:37:39.102656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:37:39.102799 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:37:39.522502 containerd[1740]: time="2025-04-30T00:37:39.522426584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:39.528642 containerd[1740]: time="2025-04-30T00:37:39.528401146Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
Apr 30 00:37:39.531425 containerd[1740]: time="2025-04-30T00:37:39.531371907Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:39.538249 containerd[1740]: time="2025-04-30T00:37:39.538210910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:37:39.539700 containerd[1740]: time="2025-04-30T00:37:39.539551711Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.678824515s"
Apr 30 00:37:39.539700 containerd[1740]: time="2025-04-30T00:37:39.539586111Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Apr 30 00:37:45.290118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:45.301349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:45.322510 systemd[1]: Reloading requested from client PID 2733 ('systemctl') (unit session-9.scope)...
Apr 30 00:37:45.322650 systemd[1]: Reloading...
Apr 30 00:37:45.405194 zram_generator::config[2770]: No configuration found.
Apr 30 00:37:45.514316 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:37:45.593694 systemd[1]: Reloading finished in 270 ms.
Apr 30 00:37:45.632175 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 00:37:45.632254 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 00:37:45.633232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:45.642400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:45.826756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:45.831380 (kubelet)[2840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:37:45.871305 kubelet[2840]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:37:45.871305 kubelet[2840]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:37:45.871305 kubelet[2840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:37:45.872203 kubelet[2840]: I0430 00:37:45.872149 2840 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:37:46.553058 kubelet[2840]: I0430 00:37:46.553022 2840 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 00:37:46.553058 kubelet[2840]: I0430 00:37:46.553049 2840 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:37:46.553271 kubelet[2840]: I0430 00:37:46.553254 2840 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 00:37:46.568184 kubelet[2840]: E0430 00:37:46.567881 2840 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:46.569046 kubelet[2840]: I0430 00:37:46.569014 2840 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:37:46.577521 kubelet[2840]: I0430 00:37:46.577495 2840 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:37:46.578571 kubelet[2840]: I0430 00:37:46.578528 2840 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:37:46.578729 kubelet[2840]: I0430 00:37:46.578569 2840 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-8ba35441fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 00:37:46.578835 kubelet[2840]: I0430 00:37:46.578740 2840 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:37:46.578835 kubelet[2840]: I0430 00:37:46.578748 2840 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 00:37:46.578882 kubelet[2840]: I0430 00:37:46.578858 2840 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:37:46.579568 kubelet[2840]: I0430 00:37:46.579553 2840 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 00:37:46.579614 kubelet[2840]: I0430 00:37:46.579573 2840 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:37:46.579614 kubelet[2840]: I0430 00:37:46.579602 2840 kubelet.go:312] "Adding apiserver pod source"
Apr 30 00:37:46.580572 kubelet[2840]: I0430 00:37:46.579618 2840 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:37:46.580821 kubelet[2840]: W0430 00:37:46.580753 2840 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-8ba35441fd&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:46.580821 kubelet[2840]: E0430 00:37:46.580799 2840 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-8ba35441fd&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:46.581001 kubelet[2840]: I0430 00:37:46.580966 2840 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 00:37:46.581264 kubelet[2840]: I0430 00:37:46.581253 2840 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:37:46.581838 kubelet[2840]: W0430 00:37:46.581363 2840 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 00:37:46.582414 kubelet[2840]: I0430 00:37:46.582398 2840 server.go:1264] "Started kubelet"
Apr 30 00:37:46.582602 kubelet[2840]: W0430 00:37:46.582572 2840 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:46.582676 kubelet[2840]: E0430 00:37:46.582666 2840 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:46.585121 kubelet[2840]: E0430 00:37:46.585020 2840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-a-8ba35441fd.183af19e06e64290 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-8ba35441fd,UID:ci-4081.3.3-a-8ba35441fd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-8ba35441fd,},FirstTimestamp:2025-04-30 00:37:46.582368912 +0000 UTC m=+0.748064267,LastTimestamp:2025-04-30 00:37:46.582368912 +0000 UTC m=+0.748064267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-8ba35441fd,}"
Apr 30 00:37:46.585257 kubelet[2840]: I0430 00:37:46.585221 2840 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:37:46.586000 kubelet[2840]: I0430 00:37:46.585969 2840 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 00:37:46.586285 kubelet[2840]: I0430 00:37:46.586243 2840 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:37:46.586959 kubelet[2840]: I0430 00:37:46.586929 2840 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:37:46.588664 kubelet[2840]: I0430 00:37:46.588536 2840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:37:46.590875 kubelet[2840]: E0430 00:37:46.590640 2840 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 00:37:46.591513 kubelet[2840]: E0430 00:37:46.591378 2840 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-8ba35441fd\" not found"
Apr 30 00:37:46.591513 kubelet[2840]: I0430 00:37:46.591436 2840 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 00:37:46.591646 kubelet[2840]: I0430 00:37:46.591585 2840 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:37:46.592532 kubelet[2840]: I0430 00:37:46.592484 2840 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:37:46.593356 kubelet[2840]: W0430 00:37:46.592929 2840 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:46.593356 kubelet[2840]: E0430 00:37:46.592972 2840 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:46.593356 kubelet[2840]: E0430 00:37:46.593203 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-8ba35441fd?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms"
Apr 30 00:37:46.593726 kubelet[2840]: I0430 00:37:46.593699 2840 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:37:46.593808 kubelet[2840]: I0430 00:37:46.593786 2840 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:37:46.594792 kubelet[2840]: I0430 00:37:46.594765 2840 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:37:46.626393 kubelet[2840]: I0430 00:37:46.626364 2840 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 00:37:46.626393 kubelet[2840]: I0430 00:37:46.626383 2840 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 00:37:46.626393 kubelet[2840]: I0430 00:37:46.626400 2840 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:37:46.631139 kubelet[2840]: I0430 00:37:46.631112 2840 policy_none.go:49] "None policy: Start"
Apr 30 00:37:46.631912 kubelet[2840]: I0430 00:37:46.631849 2840 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 00:37:46.631912 kubelet[2840]: I0430 00:37:46.631878 2840 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:37:46.641587 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 30 00:37:46.650045 kubelet[2840]: I0430 00:37:46.649998 2840 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv4" Apr 30 00:37:46.652596 kubelet[2840]: I0430 00:37:46.652289 2840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:37:46.652596 kubelet[2840]: I0430 00:37:46.652322 2840 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:37:46.652596 kubelet[2840]: I0430 00:37:46.652338 2840 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:37:46.652596 kubelet[2840]: E0430 00:37:46.652377 2840 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:37:46.652484 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 00:37:46.655224 kubelet[2840]: W0430 00:37:46.655133 2840 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Apr 30 00:37:46.655224 kubelet[2840]: E0430 00:37:46.655197 2840 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Apr 30 00:37:46.658674 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 00:37:46.665945 kubelet[2840]: I0430 00:37:46.665927 2840 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:37:46.667862 kubelet[2840]: I0430 00:37:46.667655 2840 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:37:46.667862 kubelet[2840]: I0430 00:37:46.667754 2840 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:37:46.669898 kubelet[2840]: E0430 00:37:46.669827 2840 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-a-8ba35441fd\" not found"
Apr 30 00:37:46.693823 kubelet[2840]: I0430 00:37:46.693768 2840 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.694277 kubelet[2840]: E0430 00:37:46.694239 2840 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.753304 kubelet[2840]: I0430 00:37:46.753266 2840 topology_manager.go:215] "Topology Admit Handler" podUID="041fd46a78e9f2a74dd0c69e592c3204" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.754743 kubelet[2840]: I0430 00:37:46.754714 2840 topology_manager.go:215] "Topology Admit Handler" podUID="b99237b7a9d3ef9476e9d103bd156d39" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.756080 kubelet[2840]: I0430 00:37:46.755966 2840 topology_manager.go:215] "Topology Admit Handler" podUID="51fef09fe9079effa6b86d22d906e83e" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.762883 systemd[1]: Created slice kubepods-burstable-pod041fd46a78e9f2a74dd0c69e592c3204.slice - libcontainer container kubepods-burstable-pod041fd46a78e9f2a74dd0c69e592c3204.slice.
Apr 30 00:37:46.777058 systemd[1]: Created slice kubepods-burstable-podb99237b7a9d3ef9476e9d103bd156d39.slice - libcontainer container kubepods-burstable-podb99237b7a9d3ef9476e9d103bd156d39.slice.
Apr 30 00:37:46.781216 systemd[1]: Created slice kubepods-burstable-pod51fef09fe9079effa6b86d22d906e83e.slice - libcontainer container kubepods-burstable-pod51fef09fe9079effa6b86d22d906e83e.slice.
Apr 30 00:37:46.794182 kubelet[2840]: I0430 00:37:46.793941 2840 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/041fd46a78e9f2a74dd0c69e592c3204-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-8ba35441fd\" (UID: \"041fd46a78e9f2a74dd0c69e592c3204\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.794182 kubelet[2840]: I0430 00:37:46.793974 2840 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b99237b7a9d3ef9476e9d103bd156d39-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-8ba35441fd\" (UID: \"b99237b7a9d3ef9476e9d103bd156d39\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.794182 kubelet[2840]: I0430 00:37:46.793994 2840 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b99237b7a9d3ef9476e9d103bd156d39-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-8ba35441fd\" (UID: \"b99237b7a9d3ef9476e9d103bd156d39\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.794182 kubelet[2840]: I0430 00:37:46.794008 2840 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51fef09fe9079effa6b86d22d906e83e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-8ba35441fd\" (UID: \"51fef09fe9079effa6b86d22d906e83e\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.794182 kubelet[2840]: I0430 00:37:46.794026 2840 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/041fd46a78e9f2a74dd0c69e592c3204-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-8ba35441fd\" (UID: \"041fd46a78e9f2a74dd0c69e592c3204\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.794395 kubelet[2840]: I0430 00:37:46.794041 2840 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/041fd46a78e9f2a74dd0c69e592c3204-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-8ba35441fd\" (UID: \"041fd46a78e9f2a74dd0c69e592c3204\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.794395 kubelet[2840]: I0430 00:37:46.794059 2840 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b99237b7a9d3ef9476e9d103bd156d39-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-8ba35441fd\" (UID: \"b99237b7a9d3ef9476e9d103bd156d39\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.794395 kubelet[2840]: I0430 00:37:46.794073 2840 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b99237b7a9d3ef9476e9d103bd156d39-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-8ba35441fd\" (UID: \"b99237b7a9d3ef9476e9d103bd156d39\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.794395 kubelet[2840]: I0430 00:37:46.794089 2840 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b99237b7a9d3ef9476e9d103bd156d39-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-8ba35441fd\" (UID: \"b99237b7a9d3ef9476e9d103bd156d39\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.794395 kubelet[2840]: E0430 00:37:46.794184 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-8ba35441fd?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms"
Apr 30 00:37:46.896563 kubelet[2840]: I0430 00:37:46.896386 2840 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:46.897410 kubelet[2840]: E0430 00:37:46.896874 2840 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:47.075451 containerd[1740]: time="2025-04-30T00:37:47.075301794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-8ba35441fd,Uid:041fd46a78e9f2a74dd0c69e592c3204,Namespace:kube-system,Attempt:0,}"
Apr 30 00:37:47.080816 containerd[1740]: time="2025-04-30T00:37:47.080524316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-8ba35441fd,Uid:b99237b7a9d3ef9476e9d103bd156d39,Namespace:kube-system,Attempt:0,}"
Apr 30 00:37:47.083310 containerd[1740]: time="2025-04-30T00:37:47.083240517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-8ba35441fd,Uid:51fef09fe9079effa6b86d22d906e83e,Namespace:kube-system,Attempt:0,}"
Apr 30 00:37:47.195479 kubelet[2840]: E0430 00:37:47.195372 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-8ba35441fd?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms"
Apr 30 00:37:47.299044 kubelet[2840]: I0430 00:37:47.298756 2840 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:47.299166 kubelet[2840]: E0430 00:37:47.299069 2840 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:47.747582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005399857.mount: Deactivated successfully.
Apr 30 00:37:47.780620 kubelet[2840]: W0430 00:37:47.780559 2840 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:47.780620 kubelet[2840]: E0430 00:37:47.780623 2840 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:47.995920 kubelet[2840]: E0430 00:37:47.995862 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-8ba35441fd?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="1.6s"
Apr 30 00:37:48.095992 kubelet[2840]: W0430 00:37:48.095861 2840 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-8ba35441fd&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:48.095992 kubelet[2840]: E0430 00:37:48.095925 2840 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-8ba35441fd&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:48.101387 kubelet[2840]: I0430 00:37:48.101356 2840 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:48.101652 kubelet[2840]: E0430 00:37:48.101629 2840 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:48.164196 kubelet[2840]: W0430 00:37:48.164114 2840 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:48.164196 kubelet[2840]: E0430 00:37:48.164201 2840 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:48.225790 kubelet[2840]: W0430 00:37:48.225757 2840 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:48.225790 kubelet[2840]: E0430 00:37:48.225795 2840 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:48.512192 containerd[1740]: time="2025-04-30T00:37:48.512046334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 00:37:48.515347 containerd[1740]: time="2025-04-30T00:37:48.515310615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Apr 30 00:37:48.526410 containerd[1740]: time="2025-04-30T00:37:48.526371299Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 00:37:48.533968 containerd[1740]: time="2025-04-30T00:37:48.533269902Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 00:37:48.540964 containerd[1740]: time="2025-04-30T00:37:48.540904105Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 00:37:48.545475 containerd[1740]: time="2025-04-30T00:37:48.545435027Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 00:37:48.549864 containerd[1740]: time="2025-04-30T00:37:48.549806589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 00:37:48.555215 containerd[1740]: time="2025-04-30T00:37:48.555161791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 00:37:48.556289 containerd[1740]: time="2025-04-30T00:37:48.555847591Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.475264555s"
Apr 30 00:37:48.556885 containerd[1740]: time="2025-04-30T00:37:48.556844032Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.481467078s"
Apr 30 00:37:48.569653 containerd[1740]: time="2025-04-30T00:37:48.569497597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.48619192s"
Apr 30 00:37:48.574812 kubelet[2840]: E0430 00:37:48.574789 2840 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:49.328944 containerd[1740]: time="2025-04-30T00:37:49.328707221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:37:49.328944 containerd[1740]: time="2025-04-30T00:37:49.328787541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:37:49.328944 containerd[1740]: time="2025-04-30T00:37:49.328823461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:37:49.329484 containerd[1740]: time="2025-04-30T00:37:49.329353901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:37:49.331074 containerd[1740]: time="2025-04-30T00:37:49.330745262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:37:49.331074 containerd[1740]: time="2025-04-30T00:37:49.330793102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:37:49.331074 containerd[1740]: time="2025-04-30T00:37:49.330808542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:37:49.331074 containerd[1740]: time="2025-04-30T00:37:49.330876942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:37:49.344762 containerd[1740]: time="2025-04-30T00:37:49.344646467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:37:49.345310 containerd[1740]: time="2025-04-30T00:37:49.345274588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:37:49.346085 containerd[1740]: time="2025-04-30T00:37:49.345860028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:37:49.346085 containerd[1740]: time="2025-04-30T00:37:49.345977028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:37:49.363348 systemd[1]: Started cri-containerd-f78c5c726e894d4d84cd6fb026d86907de46db47a08165bc2049ec54aff9c334.scope - libcontainer container f78c5c726e894d4d84cd6fb026d86907de46db47a08165bc2049ec54aff9c334.
Apr 30 00:37:49.367681 systemd[1]: Started cri-containerd-316b2daf5fe2007d12a54d86b3b2a94d864178b032cb7d8c6108f7f0dd52893a.scope - libcontainer container 316b2daf5fe2007d12a54d86b3b2a94d864178b032cb7d8c6108f7f0dd52893a.
Apr 30 00:37:49.371632 systemd[1]: Started cri-containerd-c6e0c83bfef9566bfcd8930d0575282fa9a80de37f66310841db8524c1d1c8c5.scope - libcontainer container c6e0c83bfef9566bfcd8930d0575282fa9a80de37f66310841db8524c1d1c8c5.
Apr 30 00:37:49.408408 containerd[1740]: time="2025-04-30T00:37:49.408214693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-8ba35441fd,Uid:b99237b7a9d3ef9476e9d103bd156d39,Namespace:kube-system,Attempt:0,} returns sandbox id \"f78c5c726e894d4d84cd6fb026d86907de46db47a08165bc2049ec54aff9c334\""
Apr 30 00:37:49.419038 containerd[1740]: time="2025-04-30T00:37:49.418909617Z" level=info msg="CreateContainer within sandbox \"f78c5c726e894d4d84cd6fb026d86907de46db47a08165bc2049ec54aff9c334\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 30 00:37:49.423880 containerd[1740]: time="2025-04-30T00:37:49.423853299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-8ba35441fd,Uid:51fef09fe9079effa6b86d22d906e83e,Namespace:kube-system,Attempt:0,} returns sandbox id \"316b2daf5fe2007d12a54d86b3b2a94d864178b032cb7d8c6108f7f0dd52893a\""
Apr 30 00:37:49.428429 containerd[1740]: time="2025-04-30T00:37:49.428301501Z" level=info msg="CreateContainer within sandbox \"316b2daf5fe2007d12a54d86b3b2a94d864178b032cb7d8c6108f7f0dd52893a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 30 00:37:49.428789 containerd[1740]: time="2025-04-30T00:37:49.428749661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-8ba35441fd,Uid:041fd46a78e9f2a74dd0c69e592c3204,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6e0c83bfef9566bfcd8930d0575282fa9a80de37f66310841db8524c1d1c8c5\""
Apr 30 00:37:49.432376 containerd[1740]: time="2025-04-30T00:37:49.432346463Z" level=info msg="CreateContainer within sandbox \"c6e0c83bfef9566bfcd8930d0575282fa9a80de37f66310841db8524c1d1c8c5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 30 00:37:49.582358 containerd[1740]: time="2025-04-30T00:37:49.581644922Z" level=info msg="CreateContainer within sandbox \"f78c5c726e894d4d84cd6fb026d86907de46db47a08165bc2049ec54aff9c334\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a61dad2cb3ea208810bca7bb7241f4b8dcb196054ba9c36130b3e62d8674270\""
Apr 30 00:37:49.582838 containerd[1740]: time="2025-04-30T00:37:49.582805683Z" level=info msg="StartContainer for \"7a61dad2cb3ea208810bca7bb7241f4b8dcb196054ba9c36130b3e62d8674270\""
Apr 30 00:37:49.597096 kubelet[2840]: E0430 00:37:49.596960 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-8ba35441fd?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="3.2s"
Apr 30 00:37:49.604501 containerd[1740]: time="2025-04-30T00:37:49.603872131Z" level=info msg="CreateContainer within sandbox \"316b2daf5fe2007d12a54d86b3b2a94d864178b032cb7d8c6108f7f0dd52893a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"83ad27c55f93eab5ed3f4317342edd8ce2ee4b610b1c08fbe889f17f2087188a\""
Apr 30 00:37:49.604817 containerd[1740]: time="2025-04-30T00:37:49.604658092Z" level=info msg="StartContainer for \"83ad27c55f93eab5ed3f4317342edd8ce2ee4b610b1c08fbe889f17f2087188a\""
Apr 30 00:37:49.610017 containerd[1740]: time="2025-04-30T00:37:49.609784974Z" level=info msg="CreateContainer within sandbox \"c6e0c83bfef9566bfcd8930d0575282fa9a80de37f66310841db8524c1d1c8c5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a3ec1dca99e5b4b209bd5ea0f75347c2d0317d39576240a4c539cb433efb18ff\""
Apr 30 00:37:49.610992 containerd[1740]: time="2025-04-30T00:37:49.610837694Z" level=info msg="StartContainer for \"a3ec1dca99e5b4b209bd5ea0f75347c2d0317d39576240a4c539cb433efb18ff\""
Apr 30 00:37:49.613399 systemd[1]: Started cri-containerd-7a61dad2cb3ea208810bca7bb7241f4b8dcb196054ba9c36130b3e62d8674270.scope - libcontainer container 7a61dad2cb3ea208810bca7bb7241f4b8dcb196054ba9c36130b3e62d8674270.
Apr 30 00:37:49.636319 systemd[1]: Started cri-containerd-83ad27c55f93eab5ed3f4317342edd8ce2ee4b610b1c08fbe889f17f2087188a.scope - libcontainer container 83ad27c55f93eab5ed3f4317342edd8ce2ee4b610b1c08fbe889f17f2087188a.
Apr 30 00:37:49.640641 systemd[1]: Started cri-containerd-a3ec1dca99e5b4b209bd5ea0f75347c2d0317d39576240a4c539cb433efb18ff.scope - libcontainer container a3ec1dca99e5b4b209bd5ea0f75347c2d0317d39576240a4c539cb433efb18ff.
Apr 30 00:37:49.666414 containerd[1740]: time="2025-04-30T00:37:49.666371676Z" level=info msg="StartContainer for \"7a61dad2cb3ea208810bca7bb7241f4b8dcb196054ba9c36130b3e62d8674270\" returns successfully"
Apr 30 00:37:49.698009 containerd[1740]: time="2025-04-30T00:37:49.697961089Z" level=info msg="StartContainer for \"a3ec1dca99e5b4b209bd5ea0f75347c2d0317d39576240a4c539cb433efb18ff\" returns successfully"
Apr 30 00:37:49.705215 kubelet[2840]: I0430 00:37:49.704266 2840 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:49.705215 kubelet[2840]: E0430 00:37:49.704596 2840 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:49.706940 containerd[1740]: time="2025-04-30T00:37:49.706894333Z" level=info msg="StartContainer for \"83ad27c55f93eab5ed3f4317342edd8ce2ee4b610b1c08fbe889f17f2087188a\" returns successfully"
Apr 30 00:37:49.724668 kubelet[2840]: W0430 00:37:49.724619 2840 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:49.724668 kubelet[2840]: E0430 00:37:49.724677 2840 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused
Apr 30 00:37:51.582795 kubelet[2840]: I0430 00:37:51.582754 2840 apiserver.go:52] "Watching apiserver"
Apr 30 00:37:51.691799 kubelet[2840]: I0430 00:37:51.691727 2840 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 00:37:51.985554 kubelet[2840]: E0430 00:37:51.985523 2840 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.3-a-8ba35441fd" not found
Apr 30 00:37:52.342381 kubelet[2840]: E0430 00:37:52.342262 2840 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.3-a-8ba35441fd" not found
Apr 30 00:37:52.793705 kubelet[2840]: E0430 00:37:52.793669 2840 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.3-a-8ba35441fd" not found
Apr 30 00:37:52.803501 kubelet[2840]: E0430 00:37:52.803466 2840 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-a-8ba35441fd\" not found" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:52.908889 kubelet[2840]: I0430 00:37:52.908816 2840 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:52.922632 kubelet[2840]: I0430 00:37:52.922541 2840 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-8ba35441fd"
Apr 30 00:37:53.302371 kubelet[2840]: W0430 00:37:53.302075 2840 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:37:53.762724 systemd[1]: Reloading requested from client PID 3113 ('systemctl') (unit session-9.scope)...
Apr 30 00:37:53.762739 systemd[1]: Reloading...
Apr 30 00:37:53.840193 zram_generator::config[3149]: No configuration found.
Apr 30 00:37:53.949763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:37:54.042412 systemd[1]: Reloading finished in 279 ms.
Apr 30 00:37:54.084227 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:54.098256 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 00:37:54.098500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:54.098569 systemd[1]: kubelet.service: Consumed 1.057s CPU time, 112.3M memory peak, 0B memory swap peak.
Apr 30 00:37:54.103422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:54.189759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:54.195492 (kubelet)[3217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:37:54.237598 kubelet[3217]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:37:54.237598 kubelet[3217]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:37:54.237598 kubelet[3217]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:37:54.237598 kubelet[3217]: I0430 00:37:54.237300 3217 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:37:54.246540 kubelet[3217]: I0430 00:37:54.246459 3217 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:37:54.246742 kubelet[3217]: I0430 00:37:54.246713 3217 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:37:54.247113 kubelet[3217]: I0430 00:37:54.247099 3217 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:37:54.248631 kubelet[3217]: I0430 00:37:54.248613 3217 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 00:37:54.250046 kubelet[3217]: I0430 00:37:54.249937 3217 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:37:54.255814 kubelet[3217]: I0430 00:37:54.255765 3217 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:37:54.256056 kubelet[3217]: I0430 00:37:54.256000 3217 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:37:54.256201 kubelet[3217]: I0430 00:37:54.256030 3217 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-8ba35441fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:37:54.256284 kubelet[3217]: I0430 00:37:54.256203 3217 topology_manager.go:138] "Creating topology manager with none policy" Apr 
30 00:37:54.256284 kubelet[3217]: I0430 00:37:54.256212 3217 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:37:54.256284 kubelet[3217]: I0430 00:37:54.256244 3217 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:37:54.256603 kubelet[3217]: I0430 00:37:54.256443 3217 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:37:54.256603 kubelet[3217]: I0430 00:37:54.256465 3217 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:37:54.256603 kubelet[3217]: I0430 00:37:54.256528 3217 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:37:54.256603 kubelet[3217]: I0430 00:37:54.256540 3217 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:37:54.258519 kubelet[3217]: I0430 00:37:54.258425 3217 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:37:54.258585 kubelet[3217]: I0430 00:37:54.258564 3217 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:37:54.258913 kubelet[3217]: I0430 00:37:54.258894 3217 server.go:1264] "Started kubelet" Apr 30 00:37:54.262785 kubelet[3217]: I0430 00:37:54.262765 3217 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:37:54.276071 kubelet[3217]: I0430 00:37:54.276042 3217 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:37:54.279762 kubelet[3217]: I0430 00:37:54.279393 3217 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:37:54.283416 kubelet[3217]: I0430 00:37:54.282950 3217 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Apr 30 00:37:54.283577 kubelet[3217]: I0430 00:37:54.277399 3217 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:37:54.287581 kubelet[3217]: I0430 00:37:54.277387 3217 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:37:54.291241 kubelet[3217]: I0430 00:37:54.276169 3217 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:37:54.291678 kubelet[3217]: I0430 00:37:54.291644 3217 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:37:54.291678 kubelet[3217]: I0430 00:37:54.291456 3217 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:37:54.291761 kubelet[3217]: I0430 00:37:54.286719 3217 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:37:54.291790 kubelet[3217]: I0430 00:37:54.291768 3217 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:37:54.303511 kubelet[3217]: I0430 00:37:54.301968 3217 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:37:54.306864 kubelet[3217]: I0430 00:37:54.306826 3217 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:37:54.306864 kubelet[3217]: I0430 00:37:54.306867 3217 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:37:54.306963 kubelet[3217]: I0430 00:37:54.306881 3217 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:37:54.306963 kubelet[3217]: E0430 00:37:54.306922 3217 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:37:54.330636 kubelet[3217]: E0430 00:37:54.330598 3217 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:37:54.356543 kubelet[3217]: I0430 00:37:54.356519 3217 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:37:54.356832 kubelet[3217]: I0430 00:37:54.356677 3217 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:37:54.356832 kubelet[3217]: I0430 00:37:54.356699 3217 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:37:54.356994 kubelet[3217]: I0430 00:37:54.356955 3217 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:37:54.357104 kubelet[3217]: I0430 00:37:54.356970 3217 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:37:54.357104 kubelet[3217]: I0430 00:37:54.357052 3217 policy_none.go:49] "None policy: Start" Apr 30 00:37:54.357941 kubelet[3217]: I0430 00:37:54.357880 3217 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:37:54.357941 kubelet[3217]: I0430 00:37:54.357903 3217 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:37:54.358282 kubelet[3217]: I0430 00:37:54.358269 3217 state_mem.go:75] "Updated machine memory state" Apr 30 00:37:54.365219 kubelet[3217]: I0430 00:37:54.365188 3217 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:37:54.365382 
kubelet[3217]: I0430 00:37:54.365343 3217 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:37:54.365457 kubelet[3217]: I0430 00:37:54.365440 3217 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:37:54.383354 kubelet[3217]: I0430 00:37:54.383308 3217 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.397517 kubelet[3217]: I0430 00:37:54.397469 3217 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.397632 kubelet[3217]: I0430 00:37:54.397611 3217 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.407453 kubelet[3217]: I0430 00:37:54.407294 3217 topology_manager.go:215] "Topology Admit Handler" podUID="041fd46a78e9f2a74dd0c69e592c3204" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.407453 kubelet[3217]: I0430 00:37:54.407394 3217 topology_manager.go:215] "Topology Admit Handler" podUID="b99237b7a9d3ef9476e9d103bd156d39" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.407453 kubelet[3217]: I0430 00:37:54.407430 3217 topology_manager.go:215] "Topology Admit Handler" podUID="51fef09fe9079effa6b86d22d906e83e" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.413232 kubelet[3217]: W0430 00:37:54.413002 3217 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 00:37:54.426619 kubelet[3217]: W0430 00:37:54.426379 3217 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 00:37:54.426997 kubelet[3217]: W0430 00:37:54.426635 3217 warnings.go:70] metadata.name: this is 
used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 00:37:54.426997 kubelet[3217]: E0430 00:37:54.426930 3217 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-a-8ba35441fd\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.493068 kubelet[3217]: I0430 00:37:54.493031 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51fef09fe9079effa6b86d22d906e83e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-8ba35441fd\" (UID: \"51fef09fe9079effa6b86d22d906e83e\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.493228 kubelet[3217]: I0430 00:37:54.493090 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/041fd46a78e9f2a74dd0c69e592c3204-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-8ba35441fd\" (UID: \"041fd46a78e9f2a74dd0c69e592c3204\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.493228 kubelet[3217]: I0430 00:37:54.493113 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b99237b7a9d3ef9476e9d103bd156d39-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-8ba35441fd\" (UID: \"b99237b7a9d3ef9476e9d103bd156d39\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.493228 kubelet[3217]: I0430 00:37:54.493130 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b99237b7a9d3ef9476e9d103bd156d39-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-8ba35441fd\" 
(UID: \"b99237b7a9d3ef9476e9d103bd156d39\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.493228 kubelet[3217]: I0430 00:37:54.493148 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b99237b7a9d3ef9476e9d103bd156d39-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-8ba35441fd\" (UID: \"b99237b7a9d3ef9476e9d103bd156d39\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.493228 kubelet[3217]: I0430 00:37:54.493185 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b99237b7a9d3ef9476e9d103bd156d39-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-8ba35441fd\" (UID: \"b99237b7a9d3ef9476e9d103bd156d39\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.493358 kubelet[3217]: I0430 00:37:54.493202 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/041fd46a78e9f2a74dd0c69e592c3204-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-8ba35441fd\" (UID: \"041fd46a78e9f2a74dd0c69e592c3204\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.493358 kubelet[3217]: I0430 00:37:54.493218 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/041fd46a78e9f2a74dd0c69e592c3204-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-8ba35441fd\" (UID: \"041fd46a78e9f2a74dd0c69e592c3204\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:54.493358 kubelet[3217]: I0430 00:37:54.493233 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b99237b7a9d3ef9476e9d103bd156d39-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-8ba35441fd\" (UID: \"b99237b7a9d3ef9476e9d103bd156d39\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-8ba35441fd" Apr 30 00:37:55.266492 kubelet[3217]: I0430 00:37:55.266209 3217 apiserver.go:52] "Watching apiserver" Apr 30 00:37:55.283937 kubelet[3217]: I0430 00:37:55.283885 3217 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:37:55.385884 kubelet[3217]: I0430 00:37:55.385758 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-a-8ba35441fd" podStartSLOduration=1.38574345 podStartE2EDuration="1.38574345s" podCreationTimestamp="2025-04-30 00:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:37:55.370181404 +0000 UTC m=+1.171236831" watchObservedRunningTime="2025-04-30 00:37:55.38574345 +0000 UTC m=+1.186798917" Apr 30 00:37:55.402104 kubelet[3217]: I0430 00:37:55.401875 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-8ba35441fd" podStartSLOduration=1.401858616 podStartE2EDuration="1.401858616s" podCreationTimestamp="2025-04-30 00:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:37:55.401347816 +0000 UTC m=+1.202403243" watchObservedRunningTime="2025-04-30 00:37:55.401858616 +0000 UTC m=+1.202914083" Apr 30 00:37:55.402104 kubelet[3217]: I0430 00:37:55.402004 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-a-8ba35441fd" podStartSLOduration=2.401998376 podStartE2EDuration="2.401998376s" podCreationTimestamp="2025-04-30 00:37:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:37:55.38621785 +0000 UTC m=+1.187273317" watchObservedRunningTime="2025-04-30 00:37:55.401998376 +0000 UTC m=+1.203053843" Apr 30 00:37:59.371597 sudo[2234]: pam_unix(sudo:session): session closed for user root Apr 30 00:37:59.436763 sshd[2231]: pam_unix(sshd:session): session closed for user core Apr 30 00:37:59.441047 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:53794.service: Deactivated successfully. Apr 30 00:37:59.443741 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:37:59.444845 systemd[1]: session-9.scope: Consumed 7.129s CPU time, 188.5M memory peak, 0B memory swap peak. Apr 30 00:37:59.445518 systemd-logind[1684]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:37:59.446632 systemd-logind[1684]: Removed session 9. Apr 30 00:38:08.810527 kubelet[3217]: I0430 00:38:08.810500 3217 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:38:08.811182 containerd[1740]: time="2025-04-30T00:38:08.811089807Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:38:08.811438 kubelet[3217]: I0430 00:38:08.811348 3217 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:38:09.424087 kubelet[3217]: I0430 00:38:09.422127 3217 topology_manager.go:215] "Topology Admit Handler" podUID="3d8e5ce5-a5e0-4e3f-8793-43616b5fd336" podNamespace="kube-system" podName="kube-proxy-8wrdh" Apr 30 00:38:09.433600 systemd[1]: Created slice kubepods-besteffort-pod3d8e5ce5_a5e0_4e3f_8793_43616b5fd336.slice - libcontainer container kubepods-besteffort-pod3d8e5ce5_a5e0_4e3f_8793_43616b5fd336.slice. 
Apr 30 00:38:09.485804 kubelet[3217]: I0430 00:38:09.485768 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d8e5ce5-a5e0-4e3f-8793-43616b5fd336-xtables-lock\") pod \"kube-proxy-8wrdh\" (UID: \"3d8e5ce5-a5e0-4e3f-8793-43616b5fd336\") " pod="kube-system/kube-proxy-8wrdh" Apr 30 00:38:09.485804 kubelet[3217]: I0430 00:38:09.485810 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d8e5ce5-a5e0-4e3f-8793-43616b5fd336-lib-modules\") pod \"kube-proxy-8wrdh\" (UID: \"3d8e5ce5-a5e0-4e3f-8793-43616b5fd336\") " pod="kube-system/kube-proxy-8wrdh" Apr 30 00:38:09.486029 kubelet[3217]: I0430 00:38:09.485832 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rmmr\" (UniqueName: \"kubernetes.io/projected/3d8e5ce5-a5e0-4e3f-8793-43616b5fd336-kube-api-access-4rmmr\") pod \"kube-proxy-8wrdh\" (UID: \"3d8e5ce5-a5e0-4e3f-8793-43616b5fd336\") " pod="kube-system/kube-proxy-8wrdh" Apr 30 00:38:09.486029 kubelet[3217]: I0430 00:38:09.485857 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d8e5ce5-a5e0-4e3f-8793-43616b5fd336-kube-proxy\") pod \"kube-proxy-8wrdh\" (UID: \"3d8e5ce5-a5e0-4e3f-8793-43616b5fd336\") " pod="kube-system/kube-proxy-8wrdh" Apr 30 00:38:09.744350 containerd[1740]: time="2025-04-30T00:38:09.744289572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8wrdh,Uid:3d8e5ce5-a5e0-4e3f-8793-43616b5fd336,Namespace:kube-system,Attempt:0,}" Apr 30 00:38:09.798529 containerd[1740]: time="2025-04-30T00:38:09.798370713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:09.798529 containerd[1740]: time="2025-04-30T00:38:09.798432033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:09.798529 containerd[1740]: time="2025-04-30T00:38:09.798454353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:09.798843 containerd[1740]: time="2025-04-30T00:38:09.798536193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:09.825400 systemd[1]: Started cri-containerd-18117107b69574b8eba23690bff8b22a3f724ecd2cc435b952dd204708caf6a8.scope - libcontainer container 18117107b69574b8eba23690bff8b22a3f724ecd2cc435b952dd204708caf6a8. Apr 30 00:38:09.855752 kubelet[3217]: I0430 00:38:09.855111 3217 topology_manager.go:215] "Topology Admit Handler" podUID="7d613102-e717-4dee-9ad1-da3a309b388a" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-2dn2z" Apr 30 00:38:09.861877 containerd[1740]: time="2025-04-30T00:38:09.859747217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8wrdh,Uid:3d8e5ce5-a5e0-4e3f-8793-43616b5fd336,Namespace:kube-system,Attempt:0,} returns sandbox id \"18117107b69574b8eba23690bff8b22a3f724ecd2cc435b952dd204708caf6a8\"" Apr 30 00:38:09.866049 systemd[1]: Created slice kubepods-besteffort-pod7d613102_e717_4dee_9ad1_da3a309b388a.slice - libcontainer container kubepods-besteffort-pod7d613102_e717_4dee_9ad1_da3a309b388a.slice. 
Apr 30 00:38:09.868190 containerd[1740]: time="2025-04-30T00:38:09.867811900Z" level=info msg="CreateContainer within sandbox \"18117107b69574b8eba23690bff8b22a3f724ecd2cc435b952dd204708caf6a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:38:09.887517 kubelet[3217]: I0430 00:38:09.887418 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khzrq\" (UniqueName: \"kubernetes.io/projected/7d613102-e717-4dee-9ad1-da3a309b388a-kube-api-access-khzrq\") pod \"tigera-operator-797db67f8-2dn2z\" (UID: \"7d613102-e717-4dee-9ad1-da3a309b388a\") " pod="tigera-operator/tigera-operator-797db67f8-2dn2z" Apr 30 00:38:09.887517 kubelet[3217]: I0430 00:38:09.887459 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7d613102-e717-4dee-9ad1-da3a309b388a-var-lib-calico\") pod \"tigera-operator-797db67f8-2dn2z\" (UID: \"7d613102-e717-4dee-9ad1-da3a309b388a\") " pod="tigera-operator/tigera-operator-797db67f8-2dn2z" Apr 30 00:38:09.928201 containerd[1740]: time="2025-04-30T00:38:09.928088004Z" level=info msg="CreateContainer within sandbox \"18117107b69574b8eba23690bff8b22a3f724ecd2cc435b952dd204708caf6a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf0cb29f8a7f421397b210a70d488bdbbd89a7f5c22143a29794578e346cb437\"" Apr 30 00:38:09.930105 containerd[1740]: time="2025-04-30T00:38:09.928770524Z" level=info msg="StartContainer for \"cf0cb29f8a7f421397b210a70d488bdbbd89a7f5c22143a29794578e346cb437\"" Apr 30 00:38:09.954314 systemd[1]: Started cri-containerd-cf0cb29f8a7f421397b210a70d488bdbbd89a7f5c22143a29794578e346cb437.scope - libcontainer container cf0cb29f8a7f421397b210a70d488bdbbd89a7f5c22143a29794578e346cb437. 
Apr 30 00:38:09.981508 containerd[1740]: time="2025-04-30T00:38:09.981444425Z" level=info msg="StartContainer for \"cf0cb29f8a7f421397b210a70d488bdbbd89a7f5c22143a29794578e346cb437\" returns successfully" Apr 30 00:38:10.174105 containerd[1740]: time="2025-04-30T00:38:10.174000580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-2dn2z,Uid:7d613102-e717-4dee-9ad1-da3a309b388a,Namespace:tigera-operator,Attempt:0,}" Apr 30 00:38:10.263228 containerd[1740]: time="2025-04-30T00:38:10.263103375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:10.263431 containerd[1740]: time="2025-04-30T00:38:10.263340255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:10.263431 containerd[1740]: time="2025-04-30T00:38:10.263383255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:10.263683 containerd[1740]: time="2025-04-30T00:38:10.263610495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:10.283307 systemd[1]: Started cri-containerd-523444cc8928d2cea8d9c4c57d2128fa8ca381561ab8725d74e869a925882739.scope - libcontainer container 523444cc8928d2cea8d9c4c57d2128fa8ca381561ab8725d74e869a925882739. 
Apr 30 00:38:10.308544 containerd[1740]: time="2025-04-30T00:38:10.308429952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-2dn2z,Uid:7d613102-e717-4dee-9ad1-da3a309b388a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"523444cc8928d2cea8d9c4c57d2128fa8ca381561ab8725d74e869a925882739\"" Apr 30 00:38:10.311239 containerd[1740]: time="2025-04-30T00:38:10.311018273Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 00:38:12.546835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925336949.mount: Deactivated successfully. Apr 30 00:38:12.941925 containerd[1740]: time="2025-04-30T00:38:12.941814758Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:12.944861 containerd[1740]: time="2025-04-30T00:38:12.944720359Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" Apr 30 00:38:12.948728 containerd[1740]: time="2025-04-30T00:38:12.948679361Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:12.953561 containerd[1740]: time="2025-04-30T00:38:12.953519683Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:12.954460 containerd[1740]: time="2025-04-30T00:38:12.954315523Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.64326097s" Apr 30 00:38:12.954460 
containerd[1740]: time="2025-04-30T00:38:12.954347843Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" Apr 30 00:38:12.956771 containerd[1740]: time="2025-04-30T00:38:12.956673164Z" level=info msg="CreateContainer within sandbox \"523444cc8928d2cea8d9c4c57d2128fa8ca381561ab8725d74e869a925882739\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 00:38:13.007215 containerd[1740]: time="2025-04-30T00:38:13.007148424Z" level=info msg="CreateContainer within sandbox \"523444cc8928d2cea8d9c4c57d2128fa8ca381561ab8725d74e869a925882739\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a71cd59ea9d27a2b77ce7106433f05cc0a24119d792e6ff09c33220426538ad0\"" Apr 30 00:38:13.008806 containerd[1740]: time="2025-04-30T00:38:13.007667264Z" level=info msg="StartContainer for \"a71cd59ea9d27a2b77ce7106433f05cc0a24119d792e6ff09c33220426538ad0\"" Apr 30 00:38:13.032349 systemd[1]: Started cri-containerd-a71cd59ea9d27a2b77ce7106433f05cc0a24119d792e6ff09c33220426538ad0.scope - libcontainer container a71cd59ea9d27a2b77ce7106433f05cc0a24119d792e6ff09c33220426538ad0. 
Apr 30 00:38:13.057527 containerd[1740]: time="2025-04-30T00:38:13.057482324Z" level=info msg="StartContainer for \"a71cd59ea9d27a2b77ce7106433f05cc0a24119d792e6ff09c33220426538ad0\" returns successfully" Apr 30 00:38:13.390226 kubelet[3217]: I0430 00:38:13.390170 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8wrdh" podStartSLOduration=4.390138458 podStartE2EDuration="4.390138458s" podCreationTimestamp="2025-04-30 00:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:38:10.387466343 +0000 UTC m=+16.188521810" watchObservedRunningTime="2025-04-30 00:38:13.390138458 +0000 UTC m=+19.191193925" Apr 30 00:38:17.825385 kubelet[3217]: I0430 00:38:17.825320 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-2dn2z" podStartSLOduration=6.179870473 podStartE2EDuration="8.825299963s" podCreationTimestamp="2025-04-30 00:38:09 +0000 UTC" firstStartedPulling="2025-04-30 00:38:10.309799433 +0000 UTC m=+16.110854860" lastFinishedPulling="2025-04-30 00:38:12.955228883 +0000 UTC m=+18.756284350" observedRunningTime="2025-04-30 00:38:13.391283859 +0000 UTC m=+19.192339326" watchObservedRunningTime="2025-04-30 00:38:17.825299963 +0000 UTC m=+23.626355430" Apr 30 00:38:17.825763 kubelet[3217]: I0430 00:38:17.825449 3217 topology_manager.go:215] "Topology Admit Handler" podUID="99cd04b2-8ce1-445f-b5bc-bb621fb7642a" podNamespace="calico-system" podName="calico-typha-655476868-qs576" Apr 30 00:38:17.841051 systemd[1]: Created slice kubepods-besteffort-pod99cd04b2_8ce1_445f_b5bc_bb621fb7642a.slice - libcontainer container kubepods-besteffort-pod99cd04b2_8ce1_445f_b5bc_bb621fb7642a.slice. 
Apr 30 00:38:17.933090 kubelet[3217]: I0430 00:38:17.933022 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99cd04b2-8ce1-445f-b5bc-bb621fb7642a-tigera-ca-bundle\") pod \"calico-typha-655476868-qs576\" (UID: \"99cd04b2-8ce1-445f-b5bc-bb621fb7642a\") " pod="calico-system/calico-typha-655476868-qs576" Apr 30 00:38:17.933090 kubelet[3217]: I0430 00:38:17.933074 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m7r2\" (UniqueName: \"kubernetes.io/projected/99cd04b2-8ce1-445f-b5bc-bb621fb7642a-kube-api-access-4m7r2\") pod \"calico-typha-655476868-qs576\" (UID: \"99cd04b2-8ce1-445f-b5bc-bb621fb7642a\") " pod="calico-system/calico-typha-655476868-qs576" Apr 30 00:38:17.933090 kubelet[3217]: I0430 00:38:17.933097 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/99cd04b2-8ce1-445f-b5bc-bb621fb7642a-typha-certs\") pod \"calico-typha-655476868-qs576\" (UID: \"99cd04b2-8ce1-445f-b5bc-bb621fb7642a\") " pod="calico-system/calico-typha-655476868-qs576" Apr 30 00:38:17.997726 kubelet[3217]: I0430 00:38:17.997668 3217 topology_manager.go:215] "Topology Admit Handler" podUID="56d555e1-8f07-4d62-bb33-37c2bc79307e" podNamespace="calico-system" podName="calico-node-rdbwz" Apr 30 00:38:18.005962 systemd[1]: Created slice kubepods-besteffort-pod56d555e1_8f07_4d62_bb33_37c2bc79307e.slice - libcontainer container kubepods-besteffort-pod56d555e1_8f07_4d62_bb33_37c2bc79307e.slice. 
Apr 30 00:38:18.034210 kubelet[3217]: I0430 00:38:18.034133 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/56d555e1-8f07-4d62-bb33-37c2bc79307e-cni-net-dir\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.034210 kubelet[3217]: I0430 00:38:18.034187 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8sgr\" (UniqueName: \"kubernetes.io/projected/56d555e1-8f07-4d62-bb33-37c2bc79307e-kube-api-access-d8sgr\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.034210 kubelet[3217]: I0430 00:38:18.034207 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/56d555e1-8f07-4d62-bb33-37c2bc79307e-var-run-calico\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.034916 kubelet[3217]: I0430 00:38:18.034225 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d555e1-8f07-4d62-bb33-37c2bc79307e-tigera-ca-bundle\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.034916 kubelet[3217]: I0430 00:38:18.034242 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/56d555e1-8f07-4d62-bb33-37c2bc79307e-cni-bin-dir\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.034916 kubelet[3217]: I0430 00:38:18.034257 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/56d555e1-8f07-4d62-bb33-37c2bc79307e-cni-log-dir\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.034916 kubelet[3217]: I0430 00:38:18.034272 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/56d555e1-8f07-4d62-bb33-37c2bc79307e-flexvol-driver-host\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.034916 kubelet[3217]: I0430 00:38:18.034324 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56d555e1-8f07-4d62-bb33-37c2bc79307e-lib-modules\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.035038 kubelet[3217]: I0430 00:38:18.034378 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/56d555e1-8f07-4d62-bb33-37c2bc79307e-node-certs\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.035038 kubelet[3217]: I0430 00:38:18.034499 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56d555e1-8f07-4d62-bb33-37c2bc79307e-xtables-lock\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.035038 kubelet[3217]: I0430 00:38:18.034517 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/56d555e1-8f07-4d62-bb33-37c2bc79307e-policysync\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.035038 kubelet[3217]: I0430 00:38:18.034533 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56d555e1-8f07-4d62-bb33-37c2bc79307e-var-lib-calico\") pod \"calico-node-rdbwz\" (UID: \"56d555e1-8f07-4d62-bb33-37c2bc79307e\") " pod="calico-system/calico-node-rdbwz"
Apr 30 00:38:18.137674 kubelet[3217]: E0430 00:38:18.136380 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.137674 kubelet[3217]: W0430 00:38:18.136405 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.137674 kubelet[3217]: E0430 00:38:18.136439 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.137674 kubelet[3217]: E0430 00:38:18.136608 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.137674 kubelet[3217]: W0430 00:38:18.136616 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.137674 kubelet[3217]: E0430 00:38:18.136632 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.137674 kubelet[3217]: E0430 00:38:18.136770 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.137674 kubelet[3217]: W0430 00:38:18.136777 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.137674 kubelet[3217]: E0430 00:38:18.136792 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.137959 kubelet[3217]: E0430 00:38:18.137716 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.137959 kubelet[3217]: W0430 00:38:18.137735 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.137959 kubelet[3217]: E0430 00:38:18.137761 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.137959 kubelet[3217]: E0430 00:38:18.137953 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.138044 kubelet[3217]: W0430 00:38:18.137961 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.138044 kubelet[3217]: E0430 00:38:18.137970 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.139115 kubelet[3217]: E0430 00:38:18.139039 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.139115 kubelet[3217]: W0430 00:38:18.139056 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.139115 kubelet[3217]: E0430 00:38:18.139068 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.149715 kubelet[3217]: E0430 00:38:18.148999 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.149715 kubelet[3217]: W0430 00:38:18.149019 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.149715 kubelet[3217]: E0430 00:38:18.149034 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.149830 containerd[1740]: time="2025-04-30T00:38:18.149381213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-655476868-qs576,Uid:99cd04b2-8ce1-445f-b5bc-bb621fb7642a,Namespace:calico-system,Attempt:0,}"
Apr 30 00:38:18.152304 kubelet[3217]: E0430 00:38:18.152248 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.152304 kubelet[3217]: W0430 00:38:18.152261 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.152304 kubelet[3217]: E0430 00:38:18.152273 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.200265 containerd[1740]: time="2025-04-30T00:38:18.199892554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:38:18.200265 containerd[1740]: time="2025-04-30T00:38:18.199936914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:38:18.200265 containerd[1740]: time="2025-04-30T00:38:18.199996954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:18.204525 containerd[1740]: time="2025-04-30T00:38:18.203549955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:18.219125 kubelet[3217]: I0430 00:38:18.218836 3217 topology_manager.go:215] "Topology Admit Handler" podUID="fb2e93f5-34f8-40e2-8427-80d1c7db355a" podNamespace="calico-system" podName="csi-node-driver-7ppvf"
Apr 30 00:38:18.221385 kubelet[3217]: E0430 00:38:18.221015 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ppvf" podUID="fb2e93f5-34f8-40e2-8427-80d1c7db355a"
Apr 30 00:38:18.227311 systemd[1]: Started cri-containerd-a032d6849410c6e4f499ca69ab09b68370ee13460f666a3b67e49c63585de708.scope - libcontainer container a032d6849410c6e4f499ca69ab09b68370ee13460f666a3b67e49c63585de708.
Apr 30 00:38:18.275519 containerd[1740]: time="2025-04-30T00:38:18.275484024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-655476868-qs576,Uid:99cd04b2-8ce1-445f-b5bc-bb621fb7642a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a032d6849410c6e4f499ca69ab09b68370ee13460f666a3b67e49c63585de708\""
Apr 30 00:38:18.277682 containerd[1740]: time="2025-04-30T00:38:18.277470825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
Apr 30 00:38:18.311180 containerd[1740]: time="2025-04-30T00:38:18.310528438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rdbwz,Uid:56d555e1-8f07-4d62-bb33-37c2bc79307e,Namespace:calico-system,Attempt:0,}"
Apr 30 00:38:18.320198 kubelet[3217]: E0430 00:38:18.320174 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.320392 kubelet[3217]: W0430 00:38:18.320320 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.320392 kubelet[3217]: E0430 00:38:18.320346 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.320945 kubelet[3217]: E0430 00:38:18.320840 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.320945 kubelet[3217]: W0430 00:38:18.320854 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.320945 kubelet[3217]: E0430 00:38:18.320866 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.321162 kubelet[3217]: E0430 00:38:18.321118 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.321162 kubelet[3217]: W0430 00:38:18.321130 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.321162 kubelet[3217]: E0430 00:38:18.321141 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.322061 kubelet[3217]: E0430 00:38:18.321524 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.322061 kubelet[3217]: W0430 00:38:18.321538 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.322061 kubelet[3217]: E0430 00:38:18.321548 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.323165 kubelet[3217]: E0430 00:38:18.322684 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.323165 kubelet[3217]: W0430 00:38:18.323121 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.323165 kubelet[3217]: E0430 00:38:18.323144 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.323861 kubelet[3217]: E0430 00:38:18.323794 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.323861 kubelet[3217]: W0430 00:38:18.323806 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.323861 kubelet[3217]: E0430 00:38:18.323817 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.324197 kubelet[3217]: E0430 00:38:18.324113 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.324197 kubelet[3217]: W0430 00:38:18.324131 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.324197 kubelet[3217]: E0430 00:38:18.324141 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.324608 kubelet[3217]: E0430 00:38:18.324490 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.324608 kubelet[3217]: W0430 00:38:18.324502 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.324608 kubelet[3217]: E0430 00:38:18.324512 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.324823 kubelet[3217]: E0430 00:38:18.324770 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.324823 kubelet[3217]: W0430 00:38:18.324779 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.324823 kubelet[3217]: E0430 00:38:18.324788 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.325225 kubelet[3217]: E0430 00:38:18.325093 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.325225 kubelet[3217]: W0430 00:38:18.325103 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.325225 kubelet[3217]: E0430 00:38:18.325129 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.325476 kubelet[3217]: E0430 00:38:18.325373 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.325476 kubelet[3217]: W0430 00:38:18.325383 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.325476 kubelet[3217]: E0430 00:38:18.325392 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.325806 kubelet[3217]: E0430 00:38:18.325696 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.325806 kubelet[3217]: W0430 00:38:18.325715 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.325806 kubelet[3217]: E0430 00:38:18.325726 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.326044 kubelet[3217]: E0430 00:38:18.325953 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.326044 kubelet[3217]: W0430 00:38:18.325963 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.326044 kubelet[3217]: E0430 00:38:18.325973 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.326456 kubelet[3217]: E0430 00:38:18.326341 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.326456 kubelet[3217]: W0430 00:38:18.326353 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.326456 kubelet[3217]: E0430 00:38:18.326362 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.326679 kubelet[3217]: E0430 00:38:18.326615 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.326679 kubelet[3217]: W0430 00:38:18.326626 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.326679 kubelet[3217]: E0430 00:38:18.326635 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.327071 kubelet[3217]: E0430 00:38:18.326957 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.327071 kubelet[3217]: W0430 00:38:18.326975 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.327071 kubelet[3217]: E0430 00:38:18.326986 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.327373 kubelet[3217]: E0430 00:38:18.327212 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.327373 kubelet[3217]: W0430 00:38:18.327222 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.327373 kubelet[3217]: E0430 00:38:18.327231 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.327637 kubelet[3217]: E0430 00:38:18.327537 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.327637 kubelet[3217]: W0430 00:38:18.327549 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.327637 kubelet[3217]: E0430 00:38:18.327559 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.327836 kubelet[3217]: E0430 00:38:18.327780 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.327836 kubelet[3217]: W0430 00:38:18.327791 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.327836 kubelet[3217]: E0430 00:38:18.327801 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.328129 kubelet[3217]: E0430 00:38:18.328060 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.328129 kubelet[3217]: W0430 00:38:18.328070 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.328129 kubelet[3217]: E0430 00:38:18.328080 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.336627 kubelet[3217]: E0430 00:38:18.336566 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.336627 kubelet[3217]: W0430 00:38:18.336581 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.336627 kubelet[3217]: E0430 00:38:18.336593 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.336627 kubelet[3217]: I0430 00:38:18.336621 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fb2e93f5-34f8-40e2-8427-80d1c7db355a-registration-dir\") pod \"csi-node-driver-7ppvf\" (UID: \"fb2e93f5-34f8-40e2-8427-80d1c7db355a\") " pod="calico-system/csi-node-driver-7ppvf"
Apr 30 00:38:18.337379 kubelet[3217]: E0430 00:38:18.336808 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.337379 kubelet[3217]: W0430 00:38:18.336819 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.337379 kubelet[3217]: E0430 00:38:18.336836 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.337379 kubelet[3217]: I0430 00:38:18.336852 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fb2e93f5-34f8-40e2-8427-80d1c7db355a-varrun\") pod \"csi-node-driver-7ppvf\" (UID: \"fb2e93f5-34f8-40e2-8427-80d1c7db355a\") " pod="calico-system/csi-node-driver-7ppvf"
Apr 30 00:38:18.337379 kubelet[3217]: E0430 00:38:18.337048 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.337379 kubelet[3217]: W0430 00:38:18.337058 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.337379 kubelet[3217]: E0430 00:38:18.337082 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.337379 kubelet[3217]: I0430 00:38:18.337186 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdflv\" (UniqueName: \"kubernetes.io/projected/fb2e93f5-34f8-40e2-8427-80d1c7db355a-kube-api-access-bdflv\") pod \"csi-node-driver-7ppvf\" (UID: \"fb2e93f5-34f8-40e2-8427-80d1c7db355a\") " pod="calico-system/csi-node-driver-7ppvf"
Apr 30 00:38:18.337379 kubelet[3217]: E0430 00:38:18.337316 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.337568 kubelet[3217]: W0430 00:38:18.337328 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.337568 kubelet[3217]: E0430 00:38:18.337340 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.337568 kubelet[3217]: E0430 00:38:18.337458 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.337568 kubelet[3217]: W0430 00:38:18.337465 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.337568 kubelet[3217]: E0430 00:38:18.337472 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.337671 kubelet[3217]: E0430 00:38:18.337612 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.337671 kubelet[3217]: W0430 00:38:18.337619 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.337671 kubelet[3217]: E0430 00:38:18.337629 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.337873 kubelet[3217]: E0430 00:38:18.337738 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.337873 kubelet[3217]: W0430 00:38:18.337761 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.337873 kubelet[3217]: E0430 00:38:18.337769 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.338454 kubelet[3217]: E0430 00:38:18.337994 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.338454 kubelet[3217]: W0430 00:38:18.338009 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.338454 kubelet[3217]: E0430 00:38:18.338028 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.338454 kubelet[3217]: I0430 00:38:18.338046 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb2e93f5-34f8-40e2-8427-80d1c7db355a-kubelet-dir\") pod \"csi-node-driver-7ppvf\" (UID: \"fb2e93f5-34f8-40e2-8427-80d1c7db355a\") " pod="calico-system/csi-node-driver-7ppvf"
Apr 30 00:38:18.338454 kubelet[3217]: E0430 00:38:18.338248 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.338454 kubelet[3217]: W0430 00:38:18.338259 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.338454 kubelet[3217]: E0430 00:38:18.338287 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.338454 kubelet[3217]: I0430 00:38:18.338305 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fb2e93f5-34f8-40e2-8427-80d1c7db355a-socket-dir\") pod \"csi-node-driver-7ppvf\" (UID: \"fb2e93f5-34f8-40e2-8427-80d1c7db355a\") " pod="calico-system/csi-node-driver-7ppvf"
Apr 30 00:38:18.338956 kubelet[3217]: E0430 00:38:18.338819 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.338956 kubelet[3217]: W0430 00:38:18.338834 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.338956 kubelet[3217]: E0430 00:38:18.338866 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 00:38:18.339181 kubelet[3217]: E0430 00:38:18.339054 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 00:38:18.339181 kubelet[3217]: W0430 00:38:18.339068 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 00:38:18.339460 kubelet[3217]: E0430 00:38:18.339340 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.339772 kubelet[3217]: E0430 00:38:18.339646 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.339772 kubelet[3217]: W0430 00:38:18.339660 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.340006 kubelet[3217]: E0430 00:38:18.339926 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.340006 kubelet[3217]: E0430 00:38:18.339973 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.340006 kubelet[3217]: W0430 00:38:18.339984 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.340189 kubelet[3217]: E0430 00:38:18.340141 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.341259 kubelet[3217]: E0430 00:38:18.341234 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.341493 kubelet[3217]: W0430 00:38:18.341366 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.341493 kubelet[3217]: E0430 00:38:18.341385 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.341840 kubelet[3217]: E0430 00:38:18.341780 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.341840 kubelet[3217]: W0430 00:38:18.341793 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.341840 kubelet[3217]: E0430 00:38:18.341819 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.365914 containerd[1740]: time="2025-04-30T00:38:18.365636820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:18.365914 containerd[1740]: time="2025-04-30T00:38:18.365705220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:18.365914 containerd[1740]: time="2025-04-30T00:38:18.365719980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:18.365914 containerd[1740]: time="2025-04-30T00:38:18.365806540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:18.381314 systemd[1]: Started cri-containerd-c88516eb2d7ec71d65783e666825c8a69b2708414ccf696e5bf04af971340331.scope - libcontainer container c88516eb2d7ec71d65783e666825c8a69b2708414ccf696e5bf04af971340331. Apr 30 00:38:18.403574 containerd[1740]: time="2025-04-30T00:38:18.403471875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rdbwz,Uid:56d555e1-8f07-4d62-bb33-37c2bc79307e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c88516eb2d7ec71d65783e666825c8a69b2708414ccf696e5bf04af971340331\"" Apr 30 00:38:18.439037 kubelet[3217]: E0430 00:38:18.439001 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.439037 kubelet[3217]: W0430 00:38:18.439029 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.439226 kubelet[3217]: E0430 00:38:18.439049 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.439409 kubelet[3217]: E0430 00:38:18.439387 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.439409 kubelet[3217]: W0430 00:38:18.439406 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.439489 kubelet[3217]: E0430 00:38:18.439443 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.439757 kubelet[3217]: E0430 00:38:18.439737 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.439757 kubelet[3217]: W0430 00:38:18.439752 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.439883 kubelet[3217]: E0430 00:38:18.439862 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.440095 kubelet[3217]: E0430 00:38:18.440051 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.440258 kubelet[3217]: W0430 00:38:18.440226 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.440258 kubelet[3217]: E0430 00:38:18.440254 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.440916 kubelet[3217]: E0430 00:38:18.440521 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.440916 kubelet[3217]: W0430 00:38:18.440532 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.440916 kubelet[3217]: E0430 00:38:18.440543 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.440916 kubelet[3217]: E0430 00:38:18.440724 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.440916 kubelet[3217]: W0430 00:38:18.440738 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.440916 kubelet[3217]: E0430 00:38:18.440747 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.441101 kubelet[3217]: E0430 00:38:18.441082 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.441101 kubelet[3217]: W0430 00:38:18.441095 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.441305 kubelet[3217]: E0430 00:38:18.441277 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.441632 kubelet[3217]: E0430 00:38:18.441608 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.441632 kubelet[3217]: W0430 00:38:18.441626 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.441734 kubelet[3217]: E0430 00:38:18.441714 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.441906 kubelet[3217]: E0430 00:38:18.441881 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.442048 kubelet[3217]: W0430 00:38:18.441925 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.442048 kubelet[3217]: E0430 00:38:18.441963 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.442250 kubelet[3217]: E0430 00:38:18.442227 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.442302 kubelet[3217]: W0430 00:38:18.442269 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.442436 kubelet[3217]: E0430 00:38:18.442352 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.442677 kubelet[3217]: E0430 00:38:18.442552 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.442677 kubelet[3217]: W0430 00:38:18.442567 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.442677 kubelet[3217]: E0430 00:38:18.442633 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.442977 kubelet[3217]: E0430 00:38:18.442951 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.442977 kubelet[3217]: W0430 00:38:18.442967 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.443049 kubelet[3217]: E0430 00:38:18.442986 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.443448 kubelet[3217]: E0430 00:38:18.443418 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.443448 kubelet[3217]: W0430 00:38:18.443439 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.443448 kubelet[3217]: E0430 00:38:18.443460 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.443949 kubelet[3217]: E0430 00:38:18.443923 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.444019 kubelet[3217]: W0430 00:38:18.444008 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.444110 kubelet[3217]: E0430 00:38:18.444077 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.444460 kubelet[3217]: E0430 00:38:18.444416 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.444460 kubelet[3217]: W0430 00:38:18.444437 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.444460 kubelet[3217]: E0430 00:38:18.444455 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.444907 kubelet[3217]: E0430 00:38:18.444657 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.444907 kubelet[3217]: W0430 00:38:18.444681 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.444907 kubelet[3217]: E0430 00:38:18.444692 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.444907 kubelet[3217]: E0430 00:38:18.444906 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.445076 kubelet[3217]: W0430 00:38:18.444915 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.445076 kubelet[3217]: E0430 00:38:18.444927 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.445595 kubelet[3217]: E0430 00:38:18.445208 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.445595 kubelet[3217]: W0430 00:38:18.445222 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.445595 kubelet[3217]: E0430 00:38:18.445237 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.445595 kubelet[3217]: E0430 00:38:18.445453 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.445595 kubelet[3217]: W0430 00:38:18.445462 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.445595 kubelet[3217]: E0430 00:38:18.445478 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.445763 kubelet[3217]: E0430 00:38:18.445653 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.445763 kubelet[3217]: W0430 00:38:18.445662 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.445763 kubelet[3217]: E0430 00:38:18.445709 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.445973 kubelet[3217]: E0430 00:38:18.445947 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.445973 kubelet[3217]: W0430 00:38:18.445963 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.446046 kubelet[3217]: E0430 00:38:18.446032 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.446341 kubelet[3217]: E0430 00:38:18.446317 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.446341 kubelet[3217]: W0430 00:38:18.446335 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.446418 kubelet[3217]: E0430 00:38:18.446356 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.446615 kubelet[3217]: E0430 00:38:18.446546 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.446615 kubelet[3217]: W0430 00:38:18.446562 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.446615 kubelet[3217]: E0430 00:38:18.446593 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.446864 kubelet[3217]: E0430 00:38:18.446792 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.446864 kubelet[3217]: W0430 00:38:18.446823 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.446864 kubelet[3217]: E0430 00:38:18.446835 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:18.447133 kubelet[3217]: E0430 00:38:18.447046 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.447133 kubelet[3217]: W0430 00:38:18.447062 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.447133 kubelet[3217]: E0430 00:38:18.447072 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:18.457335 kubelet[3217]: E0430 00:38:18.457309 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:18.457454 kubelet[3217]: W0430 00:38:18.457411 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:18.457454 kubelet[3217]: E0430 00:38:18.457430 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:19.843558 containerd[1740]: time="2025-04-30T00:38:19.843511135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:19.845966 containerd[1740]: time="2025-04-30T00:38:19.845846256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" Apr 30 00:38:19.850249 containerd[1740]: time="2025-04-30T00:38:19.850217978Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:19.855711 containerd[1740]: time="2025-04-30T00:38:19.855656900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:19.856383 containerd[1740]: time="2025-04-30T00:38:19.856275500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.578774915s" Apr 30 00:38:19.856383 containerd[1740]: time="2025-04-30T00:38:19.856306220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" Apr 30 00:38:19.859043 containerd[1740]: time="2025-04-30T00:38:19.858319141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 00:38:19.866146 containerd[1740]: time="2025-04-30T00:38:19.866121944Z" level=info msg="CreateContainer within sandbox \"a032d6849410c6e4f499ca69ab09b68370ee13460f666a3b67e49c63585de708\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 00:38:19.928789 containerd[1740]: time="2025-04-30T00:38:19.928728889Z" level=info msg="CreateContainer within sandbox \"a032d6849410c6e4f499ca69ab09b68370ee13460f666a3b67e49c63585de708\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"06adac37ae880d60ebfed0f22eb7164227015eb81f34a0de7889517edd1bb8cf\"" Apr 30 00:38:19.930222 containerd[1740]: time="2025-04-30T00:38:19.930040690Z" level=info msg="StartContainer for \"06adac37ae880d60ebfed0f22eb7164227015eb81f34a0de7889517edd1bb8cf\"" Apr 30 00:38:19.959411 systemd[1]: Started cri-containerd-06adac37ae880d60ebfed0f22eb7164227015eb81f34a0de7889517edd1bb8cf.scope - libcontainer container 06adac37ae880d60ebfed0f22eb7164227015eb81f34a0de7889517edd1bb8cf. 
Apr 30 00:38:19.991988 containerd[1740]: time="2025-04-30T00:38:19.991860075Z" level=info msg="StartContainer for \"06adac37ae880d60ebfed0f22eb7164227015eb81f34a0de7889517edd1bb8cf\" returns successfully" Apr 30 00:38:20.309930 kubelet[3217]: E0430 00:38:20.308481 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ppvf" podUID="fb2e93f5-34f8-40e2-8427-80d1c7db355a" Apr 30 00:38:20.441004 kubelet[3217]: E0430 00:38:20.440966 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.441329 kubelet[3217]: W0430 00:38:20.441267 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.441576 kubelet[3217]: E0430 00:38:20.441500 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.442077 kubelet[3217]: E0430 00:38:20.441929 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.442077 kubelet[3217]: W0430 00:38:20.441943 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.442077 kubelet[3217]: E0430 00:38:20.441963 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.442386 kubelet[3217]: E0430 00:38:20.442279 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.442386 kubelet[3217]: W0430 00:38:20.442303 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.442386 kubelet[3217]: E0430 00:38:20.442315 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.443262 kubelet[3217]: E0430 00:38:20.443137 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.443262 kubelet[3217]: W0430 00:38:20.443192 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.443262 kubelet[3217]: E0430 00:38:20.443206 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.443600 kubelet[3217]: E0430 00:38:20.443587 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.443943 kubelet[3217]: W0430 00:38:20.443644 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.443943 kubelet[3217]: E0430 00:38:20.443658 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.444427 kubelet[3217]: E0430 00:38:20.444281 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.444427 kubelet[3217]: W0430 00:38:20.444295 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.444427 kubelet[3217]: E0430 00:38:20.444307 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.445104 kubelet[3217]: E0430 00:38:20.444986 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.445104 kubelet[3217]: W0430 00:38:20.445000 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.445104 kubelet[3217]: E0430 00:38:20.445011 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.446533 kubelet[3217]: E0430 00:38:20.446346 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.446533 kubelet[3217]: W0430 00:38:20.446359 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.446533 kubelet[3217]: E0430 00:38:20.446371 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.447025 kubelet[3217]: E0430 00:38:20.446916 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.447025 kubelet[3217]: W0430 00:38:20.446930 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.447025 kubelet[3217]: E0430 00:38:20.446942 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.447352 kubelet[3217]: E0430 00:38:20.447338 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.447506 kubelet[3217]: W0430 00:38:20.447421 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.447506 kubelet[3217]: E0430 00:38:20.447437 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.447807 kubelet[3217]: E0430 00:38:20.447688 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.447807 kubelet[3217]: W0430 00:38:20.447699 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.447807 kubelet[3217]: E0430 00:38:20.447708 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.447974 kubelet[3217]: E0430 00:38:20.447962 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.448124 kubelet[3217]: W0430 00:38:20.448021 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.448124 kubelet[3217]: E0430 00:38:20.448038 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.448305 kubelet[3217]: E0430 00:38:20.448284 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.448363 kubelet[3217]: W0430 00:38:20.448353 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.448411 kubelet[3217]: E0430 00:38:20.448402 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.448665 kubelet[3217]: E0430 00:38:20.448652 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.448821 kubelet[3217]: W0430 00:38:20.448734 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.448821 kubelet[3217]: E0430 00:38:20.448752 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.448980 kubelet[3217]: E0430 00:38:20.448968 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.449047 kubelet[3217]: W0430 00:38:20.449037 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.449134 kubelet[3217]: E0430 00:38:20.449087 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.453513 kubelet[3217]: E0430 00:38:20.453422 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.453513 kubelet[3217]: W0430 00:38:20.453436 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.453513 kubelet[3217]: E0430 00:38:20.453448 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.454108 kubelet[3217]: E0430 00:38:20.454014 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.454108 kubelet[3217]: W0430 00:38:20.454026 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.454108 kubelet[3217]: E0430 00:38:20.454038 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.454836 kubelet[3217]: E0430 00:38:20.454677 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.454836 kubelet[3217]: W0430 00:38:20.454695 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.454836 kubelet[3217]: E0430 00:38:20.454718 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.455117 kubelet[3217]: E0430 00:38:20.455098 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.455117 kubelet[3217]: W0430 00:38:20.455114 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.455630 kubelet[3217]: E0430 00:38:20.455372 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.455630 kubelet[3217]: E0430 00:38:20.455439 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.455630 kubelet[3217]: W0430 00:38:20.455462 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.455630 kubelet[3217]: E0430 00:38:20.455603 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.455922 kubelet[3217]: E0430 00:38:20.455901 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.455922 kubelet[3217]: W0430 00:38:20.455917 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.456111 kubelet[3217]: E0430 00:38:20.455936 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.456234 kubelet[3217]: E0430 00:38:20.456217 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.456234 kubelet[3217]: W0430 00:38:20.456232 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.456418 kubelet[3217]: E0430 00:38:20.456248 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.456514 kubelet[3217]: E0430 00:38:20.456497 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.456514 kubelet[3217]: W0430 00:38:20.456511 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.456649 kubelet[3217]: E0430 00:38:20.456568 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.456803 kubelet[3217]: E0430 00:38:20.456786 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.456803 kubelet[3217]: W0430 00:38:20.456802 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.457045 kubelet[3217]: E0430 00:38:20.456866 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.457131 kubelet[3217]: E0430 00:38:20.457113 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.457131 kubelet[3217]: W0430 00:38:20.457129 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.457265 kubelet[3217]: E0430 00:38:20.457188 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.457423 kubelet[3217]: E0430 00:38:20.457406 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.457478 kubelet[3217]: W0430 00:38:20.457428 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.457478 kubelet[3217]: E0430 00:38:20.457446 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.457864 kubelet[3217]: E0430 00:38:20.457753 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.457864 kubelet[3217]: W0430 00:38:20.457767 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.457864 kubelet[3217]: E0430 00:38:20.457779 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.458040 kubelet[3217]: E0430 00:38:20.458015 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.458040 kubelet[3217]: W0430 00:38:20.458026 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.458271 kubelet[3217]: E0430 00:38:20.458125 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.458536 kubelet[3217]: E0430 00:38:20.458525 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.458726 kubelet[3217]: W0430 00:38:20.458587 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.458726 kubelet[3217]: E0430 00:38:20.458637 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.458864 kubelet[3217]: E0430 00:38:20.458853 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.458927 kubelet[3217]: W0430 00:38:20.458916 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.458993 kubelet[3217]: E0430 00:38:20.458982 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.459273 kubelet[3217]: E0430 00:38:20.459251 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.459273 kubelet[3217]: W0430 00:38:20.459270 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.459384 kubelet[3217]: E0430 00:38:20.459291 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:20.459464 kubelet[3217]: E0430 00:38:20.459448 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.459464 kubelet[3217]: W0430 00:38:20.459461 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.459519 kubelet[3217]: E0430 00:38:20.459470 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:38:20.459867 kubelet[3217]: E0430 00:38:20.459849 3217 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:38:20.459867 kubelet[3217]: W0430 00:38:20.459864 3217 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:38:20.459938 kubelet[3217]: E0430 00:38:20.459875 3217 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:38:21.118337 containerd[1740]: time="2025-04-30T00:38:21.117595208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:21.121178 containerd[1740]: time="2025-04-30T00:38:21.121137529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" Apr 30 00:38:21.123638 containerd[1740]: time="2025-04-30T00:38:21.123583730Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:21.128408 containerd[1740]: time="2025-04-30T00:38:21.128369612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:21.129189 containerd[1740]: time="2025-04-30T00:38:21.129034772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.270684151s" Apr 30 00:38:21.129189 containerd[1740]: time="2025-04-30T00:38:21.129066052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" Apr 30 00:38:21.131933 containerd[1740]: time="2025-04-30T00:38:21.131871373Z" level=info msg="CreateContainer within sandbox \"c88516eb2d7ec71d65783e666825c8a69b2708414ccf696e5bf04af971340331\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 00:38:21.175994 containerd[1740]: time="2025-04-30T00:38:21.175868071Z" level=info msg="CreateContainer within sandbox \"c88516eb2d7ec71d65783e666825c8a69b2708414ccf696e5bf04af971340331\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6db07df8231dc3a91be553531e9067e2b0a3f733b2ba82096bf60e0d848f5d9e\"" Apr 30 00:38:21.177573 containerd[1740]: time="2025-04-30T00:38:21.177449872Z" level=info msg="StartContainer for \"6db07df8231dc3a91be553531e9067e2b0a3f733b2ba82096bf60e0d848f5d9e\"" Apr 30 00:38:21.226324 systemd[1]: Started cri-containerd-6db07df8231dc3a91be553531e9067e2b0a3f733b2ba82096bf60e0d848f5d9e.scope - libcontainer container 6db07df8231dc3a91be553531e9067e2b0a3f733b2ba82096bf60e0d848f5d9e. Apr 30 00:38:21.275195 containerd[1740]: time="2025-04-30T00:38:21.275103271Z" level=info msg="StartContainer for \"6db07df8231dc3a91be553531e9067e2b0a3f733b2ba82096bf60e0d848f5d9e\" returns successfully" Apr 30 00:38:21.285684 systemd[1]: cri-containerd-6db07df8231dc3a91be553531e9067e2b0a3f733b2ba82096bf60e0d848f5d9e.scope: Deactivated successfully. 
Apr 30 00:38:21.304812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6db07df8231dc3a91be553531e9067e2b0a3f733b2ba82096bf60e0d848f5d9e-rootfs.mount: Deactivated successfully. Apr 30 00:38:21.401101 kubelet[3217]: I0430 00:38:21.400829 3217 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:38:21.419578 kubelet[3217]: I0430 00:38:21.418831 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-655476868-qs576" podStartSLOduration=2.838627414 podStartE2EDuration="4.418812369s" podCreationTimestamp="2025-04-30 00:38:17 +0000 UTC" firstStartedPulling="2025-04-30 00:38:18.277046505 +0000 UTC m=+24.078101972" lastFinishedPulling="2025-04-30 00:38:19.85723146 +0000 UTC m=+25.658286927" observedRunningTime="2025-04-30 00:38:20.413358964 +0000 UTC m=+26.214414431" watchObservedRunningTime="2025-04-30 00:38:21.418812369 +0000 UTC m=+27.219867836" Apr 30 00:38:22.193614 containerd[1740]: time="2025-04-30T00:38:22.193336000Z" level=info msg="shim disconnected" id=6db07df8231dc3a91be553531e9067e2b0a3f733b2ba82096bf60e0d848f5d9e namespace=k8s.io Apr 30 00:38:22.193614 containerd[1740]: time="2025-04-30T00:38:22.193605641Z" level=warning msg="cleaning up after shim disconnected" id=6db07df8231dc3a91be553531e9067e2b0a3f733b2ba82096bf60e0d848f5d9e namespace=k8s.io Apr 30 00:38:22.193614 containerd[1740]: time="2025-04-30T00:38:22.193618481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:38:22.309337 kubelet[3217]: E0430 00:38:22.308213 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ppvf" podUID="fb2e93f5-34f8-40e2-8427-80d1c7db355a" Apr 30 00:38:22.411069 containerd[1740]: time="2025-04-30T00:38:22.411029168Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 00:38:24.308539 kubelet[3217]: E0430 00:38:24.307567 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ppvf" podUID="fb2e93f5-34f8-40e2-8427-80d1c7db355a" Apr 30 00:38:25.447505 containerd[1740]: time="2025-04-30T00:38:25.447438830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:25.450113 containerd[1740]: time="2025-04-30T00:38:25.450057431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" Apr 30 00:38:25.453710 containerd[1740]: time="2025-04-30T00:38:25.453397352Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:25.459492 containerd[1740]: time="2025-04-30T00:38:25.459234435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:25.460278 containerd[1740]: time="2025-04-30T00:38:25.460114635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.049046707s" Apr 30 00:38:25.460278 containerd[1740]: time="2025-04-30T00:38:25.460148435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference 
\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" Apr 30 00:38:25.463465 containerd[1740]: time="2025-04-30T00:38:25.463294356Z" level=info msg="CreateContainer within sandbox \"c88516eb2d7ec71d65783e666825c8a69b2708414ccf696e5bf04af971340331\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 00:38:25.516390 containerd[1740]: time="2025-04-30T00:38:25.516346618Z" level=info msg="CreateContainer within sandbox \"c88516eb2d7ec71d65783e666825c8a69b2708414ccf696e5bf04af971340331\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5391e549a6ceca148be0fb737a18bfcdb2b916d3d5ddaae303690c5d4b1475ce\"" Apr 30 00:38:25.516768 containerd[1740]: time="2025-04-30T00:38:25.516744898Z" level=info msg="StartContainer for \"5391e549a6ceca148be0fb737a18bfcdb2b916d3d5ddaae303690c5d4b1475ce\"" Apr 30 00:38:25.546299 systemd[1]: Started cri-containerd-5391e549a6ceca148be0fb737a18bfcdb2b916d3d5ddaae303690c5d4b1475ce.scope - libcontainer container 5391e549a6ceca148be0fb737a18bfcdb2b916d3d5ddaae303690c5d4b1475ce. Apr 30 00:38:25.578178 containerd[1740]: time="2025-04-30T00:38:25.578072083Z" level=info msg="StartContainer for \"5391e549a6ceca148be0fb737a18bfcdb2b916d3d5ddaae303690c5d4b1475ce\" returns successfully" Apr 30 00:38:26.308564 kubelet[3217]: E0430 00:38:26.307463 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7ppvf" podUID="fb2e93f5-34f8-40e2-8427-80d1c7db355a" Apr 30 00:38:26.639261 systemd[1]: cri-containerd-5391e549a6ceca148be0fb737a18bfcdb2b916d3d5ddaae303690c5d4b1475ce.scope: Deactivated successfully. 
Apr 30 00:38:26.652963 kubelet[3217]: I0430 00:38:26.652260 3217 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:38:26.664697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5391e549a6ceca148be0fb737a18bfcdb2b916d3d5ddaae303690c5d4b1475ce-rootfs.mount: Deactivated successfully. Apr 30 00:38:26.694442 kubelet[3217]: I0430 00:38:26.694134 3217 topology_manager.go:215] "Topology Admit Handler" podUID="39b31d14-adbd-40cc-aaae-630914635b7c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xhxlm" Apr 30 00:38:27.023736 kubelet[3217]: I0430 00:38:26.702461 3217 topology_manager.go:215] "Topology Admit Handler" podUID="538a6392-809d-4846-83ca-90e20dd564a7" podNamespace="calico-system" podName="calico-kube-controllers-854657b6f6-9ld68" Apr 30 00:38:27.023736 kubelet[3217]: I0430 00:38:26.703278 3217 topology_manager.go:215] "Topology Admit Handler" podUID="d52508d6-cb87-4a3e-bc62-6a667d5c126a" podNamespace="calico-apiserver" podName="calico-apiserver-68866747c9-jg8pz" Apr 30 00:38:27.023736 kubelet[3217]: I0430 00:38:26.704280 3217 topology_manager.go:215] "Topology Admit Handler" podUID="ebe487b6-7b72-4107-a215-c47c7bf75a1e" podNamespace="calico-apiserver" podName="calico-apiserver-68866747c9-tfllp" Apr 30 00:38:27.023736 kubelet[3217]: I0430 00:38:26.705642 3217 topology_manager.go:215] "Topology Admit Handler" podUID="bc57709c-30bf-43ea-8c23-9eaa31163a6e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v2wb9" Apr 30 00:38:27.023736 kubelet[3217]: W0430 00:38:26.711378 3217 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-a-8ba35441fd" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.3-a-8ba35441fd' and this object Apr 30 00:38:27.023736 kubelet[3217]: E0430 00:38:26.711413 3217 reflector.go:150] 
object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.3-a-8ba35441fd" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.3-a-8ba35441fd' and this object Apr 30 00:38:27.023736 kubelet[3217]: W0430 00:38:26.711439 3217 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.3-a-8ba35441fd" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.3-a-8ba35441fd' and this object Apr 30 00:38:26.708423 systemd[1]: Created slice kubepods-burstable-pod39b31d14_adbd_40cc_aaae_630914635b7c.slice - libcontainer container kubepods-burstable-pod39b31d14_adbd_40cc_aaae_630914635b7c.slice. Apr 30 00:38:27.024023 kubelet[3217]: E0430 00:38:26.711449 3217 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.3-a-8ba35441fd" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.3-a-8ba35441fd' and this object Apr 30 00:38:27.024023 kubelet[3217]: I0430 00:38:26.894441 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwfm4\" (UniqueName: \"kubernetes.io/projected/538a6392-809d-4846-83ca-90e20dd564a7-kube-api-access-mwfm4\") pod \"calico-kube-controllers-854657b6f6-9ld68\" (UID: \"538a6392-809d-4846-83ca-90e20dd564a7\") " pod="calico-system/calico-kube-controllers-854657b6f6-9ld68" Apr 30 00:38:27.024023 kubelet[3217]: I0430 00:38:26.894483 3217 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kh2c\" (UniqueName: \"kubernetes.io/projected/bc57709c-30bf-43ea-8c23-9eaa31163a6e-kube-api-access-7kh2c\") pod \"coredns-7db6d8ff4d-v2wb9\" (UID: \"bc57709c-30bf-43ea-8c23-9eaa31163a6e\") " pod="kube-system/coredns-7db6d8ff4d-v2wb9" Apr 30 00:38:27.024023 kubelet[3217]: I0430 00:38:26.894501 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gtzv\" (UniqueName: \"kubernetes.io/projected/39b31d14-adbd-40cc-aaae-630914635b7c-kube-api-access-4gtzv\") pod \"coredns-7db6d8ff4d-xhxlm\" (UID: \"39b31d14-adbd-40cc-aaae-630914635b7c\") " pod="kube-system/coredns-7db6d8ff4d-xhxlm" Apr 30 00:38:27.024023 kubelet[3217]: I0430 00:38:26.894520 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtvbn\" (UniqueName: \"kubernetes.io/projected/d52508d6-cb87-4a3e-bc62-6a667d5c126a-kube-api-access-gtvbn\") pod \"calico-apiserver-68866747c9-jg8pz\" (UID: \"d52508d6-cb87-4a3e-bc62-6a667d5c126a\") " pod="calico-apiserver/calico-apiserver-68866747c9-jg8pz" Apr 30 00:38:26.717302 systemd[1]: Created slice kubepods-besteffort-podd52508d6_cb87_4a3e_bc62_6a667d5c126a.slice - libcontainer container kubepods-besteffort-podd52508d6_cb87_4a3e_bc62_6a667d5c126a.slice. 
Apr 30 00:38:27.024196 kubelet[3217]: I0430 00:38:26.894538 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ebe487b6-7b72-4107-a215-c47c7bf75a1e-calico-apiserver-certs\") pod \"calico-apiserver-68866747c9-tfllp\" (UID: \"ebe487b6-7b72-4107-a215-c47c7bf75a1e\") " pod="calico-apiserver/calico-apiserver-68866747c9-tfllp" Apr 30 00:38:27.024196 kubelet[3217]: I0430 00:38:26.894557 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d52508d6-cb87-4a3e-bc62-6a667d5c126a-calico-apiserver-certs\") pod \"calico-apiserver-68866747c9-jg8pz\" (UID: \"d52508d6-cb87-4a3e-bc62-6a667d5c126a\") " pod="calico-apiserver/calico-apiserver-68866747c9-jg8pz" Apr 30 00:38:27.024196 kubelet[3217]: I0430 00:38:26.894577 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc57709c-30bf-43ea-8c23-9eaa31163a6e-config-volume\") pod \"coredns-7db6d8ff4d-v2wb9\" (UID: \"bc57709c-30bf-43ea-8c23-9eaa31163a6e\") " pod="kube-system/coredns-7db6d8ff4d-v2wb9" Apr 30 00:38:27.024196 kubelet[3217]: I0430 00:38:26.894595 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39b31d14-adbd-40cc-aaae-630914635b7c-config-volume\") pod \"coredns-7db6d8ff4d-xhxlm\" (UID: \"39b31d14-adbd-40cc-aaae-630914635b7c\") " pod="kube-system/coredns-7db6d8ff4d-xhxlm" Apr 30 00:38:27.024196 kubelet[3217]: I0430 00:38:26.894610 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/538a6392-809d-4846-83ca-90e20dd564a7-tigera-ca-bundle\") pod \"calico-kube-controllers-854657b6f6-9ld68\" (UID: 
\"538a6392-809d-4846-83ca-90e20dd564a7\") " pod="calico-system/calico-kube-controllers-854657b6f6-9ld68" Apr 30 00:38:26.723121 systemd[1]: Created slice kubepods-besteffort-pod538a6392_809d_4846_83ca_90e20dd564a7.slice - libcontainer container kubepods-besteffort-pod538a6392_809d_4846_83ca_90e20dd564a7.slice. Apr 30 00:38:27.024342 kubelet[3217]: I0430 00:38:26.894627 3217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrsfj\" (UniqueName: \"kubernetes.io/projected/ebe487b6-7b72-4107-a215-c47c7bf75a1e-kube-api-access-lrsfj\") pod \"calico-apiserver-68866747c9-tfllp\" (UID: \"ebe487b6-7b72-4107-a215-c47c7bf75a1e\") " pod="calico-apiserver/calico-apiserver-68866747c9-tfllp" Apr 30 00:38:26.729791 systemd[1]: Created slice kubepods-besteffort-podebe487b6_7b72_4107_a215_c47c7bf75a1e.slice - libcontainer container kubepods-besteffort-podebe487b6_7b72_4107_a215_c47c7bf75a1e.slice. Apr 30 00:38:26.736719 systemd[1]: Created slice kubepods-burstable-podbc57709c_30bf_43ea_8c23_9eaa31163a6e.slice - libcontainer container kubepods-burstable-podbc57709c_30bf_43ea_8c23_9eaa31163a6e.slice. 
Apr 30 00:38:27.764129 containerd[1740]: time="2025-04-30T00:38:27.764023565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xhxlm,Uid:39b31d14-adbd-40cc-aaae-630914635b7c,Namespace:kube-system,Attempt:0,}" Apr 30 00:38:27.765180 containerd[1740]: time="2025-04-30T00:38:27.764903525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854657b6f6-9ld68,Uid:538a6392-809d-4846-83ca-90e20dd564a7,Namespace:calico-system,Attempt:0,}" Apr 30 00:38:27.769058 containerd[1740]: time="2025-04-30T00:38:27.768860407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v2wb9,Uid:bc57709c-30bf-43ea-8c23-9eaa31163a6e,Namespace:kube-system,Attempt:0,}" Apr 30 00:38:27.896661 containerd[1740]: time="2025-04-30T00:38:27.896400260Z" level=info msg="shim disconnected" id=5391e549a6ceca148be0fb737a18bfcdb2b916d3d5ddaae303690c5d4b1475ce namespace=k8s.io Apr 30 00:38:27.896661 containerd[1740]: time="2025-04-30T00:38:27.896546100Z" level=warning msg="cleaning up after shim disconnected" id=5391e549a6ceca148be0fb737a18bfcdb2b916d3d5ddaae303690c5d4b1475ce namespace=k8s.io Apr 30 00:38:27.896661 containerd[1740]: time="2025-04-30T00:38:27.896557820Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:38:27.907447 containerd[1740]: time="2025-04-30T00:38:27.907382424Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:38:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 00:38:27.996790 kubelet[3217]: E0430 00:38:27.996621 3217 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 30 00:38:27.996790 kubelet[3217]: E0430 00:38:27.996715 3217 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d52508d6-cb87-4a3e-bc62-6a667d5c126a-calico-apiserver-certs podName:d52508d6-cb87-4a3e-bc62-6a667d5c126a nodeName:}" failed. No retries permitted until 2025-04-30 00:38:28.496693942 +0000 UTC m=+34.297749409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d52508d6-cb87-4a3e-bc62-6a667d5c126a-calico-apiserver-certs") pod "calico-apiserver-68866747c9-jg8pz" (UID: "d52508d6-cb87-4a3e-bc62-6a667d5c126a") : failed to sync secret cache: timed out waiting for the condition Apr 30 00:38:27.996790 kubelet[3217]: E0430 00:38:27.996618 3217 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 30 00:38:27.996790 kubelet[3217]: E0430 00:38:27.996764 3217 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebe487b6-7b72-4107-a215-c47c7bf75a1e-calico-apiserver-certs podName:ebe487b6-7b72-4107-a215-c47c7bf75a1e nodeName:}" failed. No retries permitted until 2025-04-30 00:38:28.496752822 +0000 UTC m=+34.297808289 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ebe487b6-7b72-4107-a215-c47c7bf75a1e-calico-apiserver-certs") pod "calico-apiserver-68866747c9-tfllp" (UID: "ebe487b6-7b72-4107-a215-c47c7bf75a1e") : failed to sync secret cache: timed out waiting for the condition Apr 30 00:38:28.002930 kubelet[3217]: E0430 00:38:28.002875 3217 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 30 00:38:28.003351 kubelet[3217]: E0430 00:38:28.003243 3217 projected.go:200] Error preparing data for projected volume kube-api-access-gtvbn for pod calico-apiserver/calico-apiserver-68866747c9-jg8pz: failed to sync configmap cache: timed out waiting for the condition Apr 30 00:38:28.003351 kubelet[3217]: E0430 00:38:28.003324 3217 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d52508d6-cb87-4a3e-bc62-6a667d5c126a-kube-api-access-gtvbn podName:d52508d6-cb87-4a3e-bc62-6a667d5c126a nodeName:}" failed. No retries permitted until 2025-04-30 00:38:28.503303304 +0000 UTC m=+34.304358731 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gtvbn" (UniqueName: "kubernetes.io/projected/d52508d6-cb87-4a3e-bc62-6a667d5c126a-kube-api-access-gtvbn") pod "calico-apiserver-68866747c9-jg8pz" (UID: "d52508d6-cb87-4a3e-bc62-6a667d5c126a") : failed to sync configmap cache: timed out waiting for the condition Apr 30 00:38:28.004239 kubelet[3217]: E0430 00:38:28.004211 3217 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 30 00:38:28.004239 kubelet[3217]: E0430 00:38:28.004240 3217 projected.go:200] Error preparing data for projected volume kube-api-access-lrsfj for pod calico-apiserver/calico-apiserver-68866747c9-tfllp: failed to sync configmap cache: timed out waiting for the condition Apr 30 00:38:28.004430 kubelet[3217]: E0430 00:38:28.004288 3217 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebe487b6-7b72-4107-a215-c47c7bf75a1e-kube-api-access-lrsfj podName:ebe487b6-7b72-4107-a215-c47c7bf75a1e nodeName:}" failed. No retries permitted until 2025-04-30 00:38:28.504275345 +0000 UTC m=+34.305330812 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lrsfj" (UniqueName: "kubernetes.io/projected/ebe487b6-7b72-4107-a215-c47c7bf75a1e-kube-api-access-lrsfj") pod "calico-apiserver-68866747c9-tfllp" (UID: "ebe487b6-7b72-4107-a215-c47c7bf75a1e") : failed to sync configmap cache: timed out waiting for the condition Apr 30 00:38:28.081252 containerd[1740]: time="2025-04-30T00:38:28.080409856Z" level=error msg="Failed to destroy network for sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.081252 containerd[1740]: time="2025-04-30T00:38:28.080682736Z" level=error msg="encountered an error cleaning up failed sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.081252 containerd[1740]: time="2025-04-30T00:38:28.080726736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v2wb9,Uid:bc57709c-30bf-43ea-8c23-9eaa31163a6e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.081416 kubelet[3217]: E0430 00:38:28.080959 3217 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.081416 kubelet[3217]: E0430 00:38:28.081044 3217 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-v2wb9" Apr 30 00:38:28.081416 kubelet[3217]: E0430 00:38:28.081064 3217 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-v2wb9" Apr 30 00:38:28.081505 kubelet[3217]: E0430 00:38:28.081123 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-v2wb9_kube-system(bc57709c-30bf-43ea-8c23-9eaa31163a6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-v2wb9_kube-system(bc57709c-30bf-43ea-8c23-9eaa31163a6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-v2wb9" podUID="bc57709c-30bf-43ea-8c23-9eaa31163a6e" Apr 30 00:38:28.085123 containerd[1740]: 
time="2025-04-30T00:38:28.085082618Z" level=error msg="Failed to destroy network for sandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.085403 containerd[1740]: time="2025-04-30T00:38:28.085371898Z" level=error msg="encountered an error cleaning up failed sandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.085449 containerd[1740]: time="2025-04-30T00:38:28.085425418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xhxlm,Uid:39b31d14-adbd-40cc-aaae-630914635b7c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.085602 kubelet[3217]: E0430 00:38:28.085571 3217 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.085645 kubelet[3217]: E0430 00:38:28.085616 3217 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xhxlm" Apr 30 00:38:28.085645 kubelet[3217]: E0430 00:38:28.085634 3217 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xhxlm" Apr 30 00:38:28.085697 kubelet[3217]: E0430 00:38:28.085663 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xhxlm_kube-system(39b31d14-adbd-40cc-aaae-630914635b7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xhxlm_kube-system(39b31d14-adbd-40cc-aaae-630914635b7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xhxlm" podUID="39b31d14-adbd-40cc-aaae-630914635b7c" Apr 30 00:38:28.105520 containerd[1740]: time="2025-04-30T00:38:28.105469427Z" level=error msg="Failed to destroy network for sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.105772 
containerd[1740]: time="2025-04-30T00:38:28.105744307Z" level=error msg="encountered an error cleaning up failed sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.105822 containerd[1740]: time="2025-04-30T00:38:28.105796747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854657b6f6-9ld68,Uid:538a6392-809d-4846-83ca-90e20dd564a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.106056 kubelet[3217]: E0430 00:38:28.106005 3217 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.106127 kubelet[3217]: E0430 00:38:28.106069 3217 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854657b6f6-9ld68" Apr 30 00:38:28.106127 kubelet[3217]: E0430 
00:38:28.106089 3217 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854657b6f6-9ld68" Apr 30 00:38:28.106433 kubelet[3217]: E0430 00:38:28.106126 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-854657b6f6-9ld68_calico-system(538a6392-809d-4846-83ca-90e20dd564a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-854657b6f6-9ld68_calico-system(538a6392-809d-4846-83ca-90e20dd564a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-854657b6f6-9ld68" podUID="538a6392-809d-4846-83ca-90e20dd564a7" Apr 30 00:38:28.314176 systemd[1]: Created slice kubepods-besteffort-podfb2e93f5_34f8_40e2_8427_80d1c7db355a.slice - libcontainer container kubepods-besteffort-podfb2e93f5_34f8_40e2_8427_80d1c7db355a.slice. 
Apr 30 00:38:28.316468 containerd[1740]: time="2025-04-30T00:38:28.316424434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ppvf,Uid:fb2e93f5-34f8-40e2-8427-80d1c7db355a,Namespace:calico-system,Attempt:0,}" Apr 30 00:38:28.397678 containerd[1740]: time="2025-04-30T00:38:28.397563148Z" level=error msg="Failed to destroy network for sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.398133 containerd[1740]: time="2025-04-30T00:38:28.397842788Z" level=error msg="encountered an error cleaning up failed sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.398133 containerd[1740]: time="2025-04-30T00:38:28.397893628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ppvf,Uid:fb2e93f5-34f8-40e2-8427-80d1c7db355a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.398620 kubelet[3217]: E0430 00:38:28.398080 3217 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.398620 kubelet[3217]: E0430 00:38:28.398134 3217 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ppvf" Apr 30 00:38:28.398620 kubelet[3217]: E0430 00:38:28.398241 3217 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7ppvf" Apr 30 00:38:28.398727 kubelet[3217]: E0430 00:38:28.398306 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7ppvf_calico-system(fb2e93f5-34f8-40e2-8427-80d1c7db355a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7ppvf_calico-system(fb2e93f5-34f8-40e2-8427-80d1c7db355a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7ppvf" podUID="fb2e93f5-34f8-40e2-8427-80d1c7db355a" Apr 30 00:38:28.424014 kubelet[3217]: I0430 00:38:28.423498 3217 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:28.424568 containerd[1740]: time="2025-04-30T00:38:28.424389679Z" level=info msg="StopPodSandbox for \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\"" Apr 30 00:38:28.424785 containerd[1740]: time="2025-04-30T00:38:28.424764439Z" level=info msg="Ensure that sandbox 0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c in task-service has been cleanup successfully" Apr 30 00:38:28.425606 kubelet[3217]: I0430 00:38:28.425578 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:28.426404 containerd[1740]: time="2025-04-30T00:38:28.426365400Z" level=info msg="StopPodSandbox for \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\"" Apr 30 00:38:28.426560 containerd[1740]: time="2025-04-30T00:38:28.426499680Z" level=info msg="Ensure that sandbox 747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60 in task-service has been cleanup successfully" Apr 30 00:38:28.429338 kubelet[3217]: I0430 00:38:28.428600 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:28.432002 containerd[1740]: time="2025-04-30T00:38:28.431978042Z" level=info msg="StopPodSandbox for \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\"" Apr 30 00:38:28.432854 containerd[1740]: time="2025-04-30T00:38:28.432592123Z" level=info msg="Ensure that sandbox b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27 in task-service has been cleanup successfully" Apr 30 00:38:28.437586 containerd[1740]: time="2025-04-30T00:38:28.437558045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 00:38:28.439498 kubelet[3217]: I0430 00:38:28.439363 3217 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:28.442139 containerd[1740]: time="2025-04-30T00:38:28.442109647Z" level=info msg="StopPodSandbox for \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\"" Apr 30 00:38:28.442632 containerd[1740]: time="2025-04-30T00:38:28.442392727Z" level=info msg="Ensure that sandbox b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be in task-service has been cleanup successfully" Apr 30 00:38:28.485104 containerd[1740]: time="2025-04-30T00:38:28.485059104Z" level=error msg="StopPodSandbox for \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\" failed" error="failed to destroy network for sandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.485710 kubelet[3217]: E0430 00:38:28.485662 3217 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:28.485787 kubelet[3217]: E0430 00:38:28.485718 3217 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c"} Apr 30 00:38:28.485787 kubelet[3217]: E0430 00:38:28.485779 3217 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39b31d14-adbd-40cc-aaae-630914635b7c\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:28.485868 kubelet[3217]: E0430 00:38:28.485800 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39b31d14-adbd-40cc-aaae-630914635b7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xhxlm" podUID="39b31d14-adbd-40cc-aaae-630914635b7c" Apr 30 00:38:28.503274 containerd[1740]: time="2025-04-30T00:38:28.503071792Z" level=error msg="StopPodSandbox for \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\" failed" error="failed to destroy network for sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.503536 kubelet[3217]: E0430 00:38:28.503308 3217 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:28.503536 kubelet[3217]: E0430 00:38:28.503359 3217 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60"} Apr 30 00:38:28.503536 kubelet[3217]: E0430 00:38:28.503392 3217 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb2e93f5-34f8-40e2-8427-80d1c7db355a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:28.503536 kubelet[3217]: E0430 00:38:28.503412 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb2e93f5-34f8-40e2-8427-80d1c7db355a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7ppvf" podUID="fb2e93f5-34f8-40e2-8427-80d1c7db355a" Apr 30 00:38:28.509047 containerd[1740]: time="2025-04-30T00:38:28.508996434Z" level=error msg="StopPodSandbox for \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\" failed" error="failed to destroy network for sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Apr 30 00:38:28.510850 kubelet[3217]: E0430 00:38:28.510816 3217 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:28.510850 kubelet[3217]: E0430 00:38:28.510850 3217 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27"} Apr 30 00:38:28.510955 kubelet[3217]: E0430 00:38:28.510879 3217 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"538a6392-809d-4846-83ca-90e20dd564a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:28.510955 kubelet[3217]: E0430 00:38:28.510897 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"538a6392-809d-4846-83ca-90e20dd564a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-854657b6f6-9ld68" podUID="538a6392-809d-4846-83ca-90e20dd564a7" Apr 30 
00:38:28.515009 containerd[1740]: time="2025-04-30T00:38:28.514962077Z" level=error msg="StopPodSandbox for \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\" failed" error="failed to destroy network for sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.515144 kubelet[3217]: E0430 00:38:28.515106 3217 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:28.515270 kubelet[3217]: E0430 00:38:28.515145 3217 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be"} Apr 30 00:38:28.515270 kubelet[3217]: E0430 00:38:28.515204 3217 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc57709c-30bf-43ea-8c23-9eaa31163a6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:28.515270 kubelet[3217]: E0430 00:38:28.515222 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"bc57709c-30bf-43ea-8c23-9eaa31163a6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-v2wb9" podUID="bc57709c-30bf-43ea-8c23-9eaa31163a6e" Apr 30 00:38:28.669086 containerd[1740]: time="2025-04-30T00:38:28.668924901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68866747c9-tfllp,Uid:ebe487b6-7b72-4107-a215-c47c7bf75a1e,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:38:28.673377 containerd[1740]: time="2025-04-30T00:38:28.673341503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68866747c9-jg8pz,Uid:d52508d6-cb87-4a3e-bc62-6a667d5c126a,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:38:28.772073 containerd[1740]: time="2025-04-30T00:38:28.771935264Z" level=error msg="Failed to destroy network for sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.772981 containerd[1740]: time="2025-04-30T00:38:28.772952584Z" level=error msg="encountered an error cleaning up failed sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.773248 containerd[1740]: time="2025-04-30T00:38:28.773220904Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-68866747c9-tfllp,Uid:ebe487b6-7b72-4107-a215-c47c7bf75a1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.775077 kubelet[3217]: E0430 00:38:28.774150 3217 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.775077 kubelet[3217]: E0430 00:38:28.774942 3217 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68866747c9-tfllp" Apr 30 00:38:28.775077 kubelet[3217]: E0430 00:38:28.774964 3217 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68866747c9-tfllp" Apr 30 00:38:28.775326 kubelet[3217]: E0430 00:38:28.775022 3217 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68866747c9-tfllp_calico-apiserver(ebe487b6-7b72-4107-a215-c47c7bf75a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68866747c9-tfllp_calico-apiserver(ebe487b6-7b72-4107-a215-c47c7bf75a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68866747c9-tfllp" podUID="ebe487b6-7b72-4107-a215-c47c7bf75a1e" Apr 30 00:38:28.801393 containerd[1740]: time="2025-04-30T00:38:28.801351276Z" level=error msg="Failed to destroy network for sandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.801780 containerd[1740]: time="2025-04-30T00:38:28.801756756Z" level=error msg="encountered an error cleaning up failed sandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.801910 containerd[1740]: time="2025-04-30T00:38:28.801878996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68866747c9-jg8pz,Uid:d52508d6-cb87-4a3e-bc62-6a667d5c126a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.802255 kubelet[3217]: E0430 00:38:28.802215 3217 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:28.802427 kubelet[3217]: E0430 00:38:28.802408 3217 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68866747c9-jg8pz" Apr 30 00:38:28.802547 kubelet[3217]: E0430 00:38:28.802479 3217 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68866747c9-jg8pz" Apr 30 00:38:28.802622 kubelet[3217]: E0430 00:38:28.802534 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68866747c9-jg8pz_calico-apiserver(d52508d6-cb87-4a3e-bc62-6a667d5c126a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-68866747c9-jg8pz_calico-apiserver(d52508d6-cb87-4a3e-bc62-6a667d5c126a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68866747c9-jg8pz" podUID="d52508d6-cb87-4a3e-bc62-6a667d5c126a" Apr 30 00:38:28.983769 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27-shm.mount: Deactivated successfully. Apr 30 00:38:28.983864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c-shm.mount: Deactivated successfully. Apr 30 00:38:28.983910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be-shm.mount: Deactivated successfully. 
Apr 30 00:38:29.441802 kubelet[3217]: I0430 00:38:29.441697 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:29.442754 containerd[1740]: time="2025-04-30T00:38:29.442376982Z" level=info msg="StopPodSandbox for \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\"" Apr 30 00:38:29.442754 containerd[1740]: time="2025-04-30T00:38:29.442537742Z" level=info msg="Ensure that sandbox fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96 in task-service has been cleanup successfully" Apr 30 00:38:29.445924 kubelet[3217]: I0430 00:38:29.445525 3217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:29.446002 containerd[1740]: time="2025-04-30T00:38:29.445907024Z" level=info msg="StopPodSandbox for \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\"" Apr 30 00:38:29.446081 containerd[1740]: time="2025-04-30T00:38:29.446047544Z" level=info msg="Ensure that sandbox 5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b in task-service has been cleanup successfully" Apr 30 00:38:29.471297 containerd[1740]: time="2025-04-30T00:38:29.471244754Z" level=error msg="StopPodSandbox for \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\" failed" error="failed to destroy network for sandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:29.471515 kubelet[3217]: E0430 00:38:29.471478 3217 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:29.471565 kubelet[3217]: E0430 00:38:29.471541 3217 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96"} Apr 30 00:38:29.471608 kubelet[3217]: E0430 00:38:29.471575 3217 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d52508d6-cb87-4a3e-bc62-6a667d5c126a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:29.472031 kubelet[3217]: E0430 00:38:29.471999 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d52508d6-cb87-4a3e-bc62-6a667d5c126a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68866747c9-jg8pz" podUID="d52508d6-cb87-4a3e-bc62-6a667d5c126a" Apr 30 00:38:29.473568 containerd[1740]: time="2025-04-30T00:38:29.473514315Z" level=error msg="StopPodSandbox for \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\" failed" error="failed to destroy 
network for sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:38:29.473726 kubelet[3217]: E0430 00:38:29.473697 3217 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:29.473765 kubelet[3217]: E0430 00:38:29.473746 3217 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b"} Apr 30 00:38:29.473803 kubelet[3217]: E0430 00:38:29.473769 3217 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ebe487b6-7b72-4107-a215-c47c7bf75a1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:38:29.473803 kubelet[3217]: E0430 00:38:29.473786 3217 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebe487b6-7b72-4107-a215-c47c7bf75a1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68866747c9-tfllp" podUID="ebe487b6-7b72-4107-a215-c47c7bf75a1e" Apr 30 00:38:32.547809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2077599476.mount: Deactivated successfully. Apr 30 00:38:32.751068 containerd[1740]: time="2025-04-30T00:38:32.750388837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:32.752708 containerd[1740]: time="2025-04-30T00:38:32.752679958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" Apr 30 00:38:32.756037 containerd[1740]: time="2025-04-30T00:38:32.755987999Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:32.761876 containerd[1740]: time="2025-04-30T00:38:32.760899921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:32.761876 containerd[1740]: time="2025-04-30T00:38:32.761541802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.323750917s" Apr 30 00:38:32.761876 containerd[1740]: time="2025-04-30T00:38:32.761567122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference 
\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" Apr 30 00:38:32.774778 containerd[1740]: time="2025-04-30T00:38:32.774752047Z" level=info msg="CreateContainer within sandbox \"c88516eb2d7ec71d65783e666825c8a69b2708414ccf696e5bf04af971340331\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 00:38:32.817728 containerd[1740]: time="2025-04-30T00:38:32.817631225Z" level=info msg="CreateContainer within sandbox \"c88516eb2d7ec71d65783e666825c8a69b2708414ccf696e5bf04af971340331\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e533e9cadce1c0f378fb5e797c7a970c1e6386c72cda8f55955696dfd75e1dae\"" Apr 30 00:38:32.818850 containerd[1740]: time="2025-04-30T00:38:32.818817905Z" level=info msg="StartContainer for \"e533e9cadce1c0f378fb5e797c7a970c1e6386c72cda8f55955696dfd75e1dae\"" Apr 30 00:38:32.852359 systemd[1]: Started cri-containerd-e533e9cadce1c0f378fb5e797c7a970c1e6386c72cda8f55955696dfd75e1dae.scope - libcontainer container e533e9cadce1c0f378fb5e797c7a970c1e6386c72cda8f55955696dfd75e1dae. Apr 30 00:38:32.883874 containerd[1740]: time="2025-04-30T00:38:32.883232572Z" level=info msg="StartContainer for \"e533e9cadce1c0f378fb5e797c7a970c1e6386c72cda8f55955696dfd75e1dae\" returns successfully" Apr 30 00:38:33.083985 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 00:38:33.084105 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Apr 30 00:38:33.484871 kubelet[3217]: I0430 00:38:33.484807 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rdbwz" podStartSLOduration=2.127870776 podStartE2EDuration="16.484790022s" podCreationTimestamp="2025-04-30 00:38:17 +0000 UTC" firstStartedPulling="2025-04-30 00:38:18.405694756 +0000 UTC m=+24.206750223" lastFinishedPulling="2025-04-30 00:38:32.762614002 +0000 UTC m=+38.563669469" observedRunningTime="2025-04-30 00:38:33.484753822 +0000 UTC m=+39.285809249" watchObservedRunningTime="2025-04-30 00:38:33.484790022 +0000 UTC m=+39.285845449" Apr 30 00:38:39.309068 containerd[1740]: time="2025-04-30T00:38:39.308913013Z" level=info msg="StopPodSandbox for \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\"" Apr 30 00:38:39.310604 containerd[1740]: time="2025-04-30T00:38:39.309232813Z" level=info msg="StopPodSandbox for \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\"" Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.382 [INFO][4598] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.383 [INFO][4598] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" iface="eth0" netns="/var/run/netns/cni-b44bb881-8c40-9ead-c2a9-9ec4c48cd7bb" Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.384 [INFO][4598] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" iface="eth0" netns="/var/run/netns/cni-b44bb881-8c40-9ead-c2a9-9ec4c48cd7bb" Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.384 [INFO][4598] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" iface="eth0" netns="/var/run/netns/cni-b44bb881-8c40-9ead-c2a9-9ec4c48cd7bb" Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.384 [INFO][4598] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.384 [INFO][4598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.408 [INFO][4612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" HandleID="k8s-pod-network.b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.408 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.408 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.421 [WARNING][4612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" HandleID="k8s-pod-network.b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.421 [INFO][4612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" HandleID="k8s-pod-network.b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.423 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:39.426881 containerd[1740]: 2025-04-30 00:38:39.425 [INFO][4598] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:39.427603 containerd[1740]: time="2025-04-30T00:38:39.427442146Z" level=info msg="TearDown network for sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\" successfully" Apr 30 00:38:39.427603 containerd[1740]: time="2025-04-30T00:38:39.427470946Z" level=info msg="StopPodSandbox for \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\" returns successfully" Apr 30 00:38:39.429981 containerd[1740]: time="2025-04-30T00:38:39.429644107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854657b6f6-9ld68,Uid:538a6392-809d-4846-83ca-90e20dd564a7,Namespace:calico-system,Attempt:1,}" Apr 30 00:38:39.431616 systemd[1]: run-netns-cni\x2db44bb881\x2d8c40\x2d9ead\x2dc2a9\x2d9ec4c48cd7bb.mount: Deactivated successfully. 
Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.390 [INFO][4599] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.392 [INFO][4599] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" iface="eth0" netns="/var/run/netns/cni-621f3a6e-636c-0431-2bb3-467287064f09" Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.392 [INFO][4599] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" iface="eth0" netns="/var/run/netns/cni-621f3a6e-636c-0431-2bb3-467287064f09" Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.392 [INFO][4599] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" iface="eth0" netns="/var/run/netns/cni-621f3a6e-636c-0431-2bb3-467287064f09" Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.392 [INFO][4599] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.392 [INFO][4599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.424 [INFO][4617] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" HandleID="k8s-pod-network.747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.424 [INFO][4617] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.424 [INFO][4617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.435 [WARNING][4617] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" HandleID="k8s-pod-network.747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.435 [INFO][4617] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" HandleID="k8s-pod-network.747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.443 [INFO][4617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:39.446316 containerd[1740]: 2025-04-30 00:38:39.445 [INFO][4599] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:39.448234 containerd[1740]: time="2025-04-30T00:38:39.447358275Z" level=info msg="TearDown network for sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\" successfully" Apr 30 00:38:39.448234 containerd[1740]: time="2025-04-30T00:38:39.447385235Z" level=info msg="StopPodSandbox for \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\" returns successfully" Apr 30 00:38:39.448681 systemd[1]: run-netns-cni\x2d621f3a6e\x2d636c\x2d0431\x2d2bb3\x2d467287064f09.mount: Deactivated successfully. 
Apr 30 00:38:39.449926 containerd[1740]: time="2025-04-30T00:38:39.448716836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ppvf,Uid:fb2e93f5-34f8-40e2-8427-80d1c7db355a,Namespace:calico-system,Attempt:1,}" Apr 30 00:38:39.640784 systemd-networkd[1501]: calicc223f17b57: Link UP Apr 30 00:38:39.641013 systemd-networkd[1501]: calicc223f17b57: Gained carrier Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.516 [INFO][4626] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.538 [INFO][4626] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0 calico-kube-controllers-854657b6f6- calico-system 538a6392-809d-4846-83ca-90e20dd564a7 764 0 2025-04-30 00:38:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:854657b6f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-a-8ba35441fd calico-kube-controllers-854657b6f6-9ld68 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicc223f17b57 [] []}} ContainerID="d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" Namespace="calico-system" Pod="calico-kube-controllers-854657b6f6-9ld68" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.538 [INFO][4626] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" Namespace="calico-system" Pod="calico-kube-controllers-854657b6f6-9ld68" 
WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.575 [INFO][4650] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" HandleID="k8s-pod-network.d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.590 [INFO][4650] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" HandleID="k8s-pod-network.d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319440), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-8ba35441fd", "pod":"calico-kube-controllers-854657b6f6-9ld68", "timestamp":"2025-04-30 00:38:39.575116333 +0000 UTC"}, Hostname:"ci-4081.3.3-a-8ba35441fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.590 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.590 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.590 [INFO][4650] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-8ba35441fd' Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.591 [INFO][4650] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.596 [INFO][4650] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.600 [INFO][4650] ipam/ipam.go 489: Trying affinity for 192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.602 [INFO][4650] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.603 [INFO][4650] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.603 [INFO][4650] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.605 [INFO][4650] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60 Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.609 [INFO][4650] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.618 [INFO][4650] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.88.193/26] block=192.168.88.192/26 handle="k8s-pod-network.d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.618 [INFO][4650] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.193/26] handle="k8s-pod-network.d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.618 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:39.670192 containerd[1740]: 2025-04-30 00:38:39.618 [INFO][4650] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.193/26] IPv6=[] ContainerID="d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" HandleID="k8s-pod-network.d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:39.670777 containerd[1740]: 2025-04-30 00:38:39.620 [INFO][4626] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" Namespace="calico-system" Pod="calico-kube-controllers-854657b6f6-9ld68" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0", GenerateName:"calico-kube-controllers-854657b6f6-", Namespace:"calico-system", SelfLink:"", UID:"538a6392-809d-4846-83ca-90e20dd564a7", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854657b6f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"", Pod:"calico-kube-controllers-854657b6f6-9ld68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc223f17b57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:39.670777 containerd[1740]: 2025-04-30 00:38:39.620 [INFO][4626] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.193/32] ContainerID="d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" Namespace="calico-system" Pod="calico-kube-controllers-854657b6f6-9ld68" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:39.670777 containerd[1740]: 2025-04-30 00:38:39.620 [INFO][4626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc223f17b57 ContainerID="d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" Namespace="calico-system" Pod="calico-kube-controllers-854657b6f6-9ld68" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:39.670777 containerd[1740]: 2025-04-30 00:38:39.638 [INFO][4626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" Namespace="calico-system" Pod="calico-kube-controllers-854657b6f6-9ld68" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:39.670777 containerd[1740]: 2025-04-30 00:38:39.639 [INFO][4626] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" Namespace="calico-system" Pod="calico-kube-controllers-854657b6f6-9ld68" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0", GenerateName:"calico-kube-controllers-854657b6f6-", Namespace:"calico-system", SelfLink:"", UID:"538a6392-809d-4846-83ca-90e20dd564a7", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854657b6f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60", Pod:"calico-kube-controllers-854657b6f6-9ld68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.193/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc223f17b57", MAC:"42:96:b6:a4:9b:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:39.670777 containerd[1740]: 2025-04-30 00:38:39.668 [INFO][4626] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60" Namespace="calico-system" Pod="calico-kube-controllers-854657b6f6-9ld68" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:39.679685 systemd-networkd[1501]: calid60038a92c3: Link UP Apr 30 00:38:39.680939 systemd-networkd[1501]: calid60038a92c3: Gained carrier Apr 30 00:38:39.705174 containerd[1740]: time="2025-04-30T00:38:39.703553990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:39.705174 containerd[1740]: time="2025-04-30T00:38:39.703614270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:39.705174 containerd[1740]: time="2025-04-30T00:38:39.703628710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:39.705174 containerd[1740]: time="2025-04-30T00:38:39.703707831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.532 [INFO][4636] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.548 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0 csi-node-driver- calico-system fb2e93f5-34f8-40e2-8427-80d1c7db355a 765 0 2025-04-30 00:38:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-a-8ba35441fd csi-node-driver-7ppvf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid60038a92c3 [] []}} ContainerID="591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" Namespace="calico-system" Pod="csi-node-driver-7ppvf" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-" Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.548 [INFO][4636] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" Namespace="calico-system" Pod="csi-node-driver-7ppvf" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.585 [INFO][4655] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" HandleID="k8s-pod-network.591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:39.726114 containerd[1740]: 
2025-04-30 00:38:39.597 [INFO][4655] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" HandleID="k8s-pod-network.591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d4b10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-8ba35441fd", "pod":"csi-node-driver-7ppvf", "timestamp":"2025-04-30 00:38:39.585071417 +0000 UTC"}, Hostname:"ci-4081.3.3-a-8ba35441fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.597 [INFO][4655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.618 [INFO][4655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.618 [INFO][4655] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-8ba35441fd' Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.620 [INFO][4655] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.624 [INFO][4655] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.628 [INFO][4655] ipam/ipam.go 489: Trying affinity for 192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.629 [INFO][4655] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.631 [INFO][4655] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.631 [INFO][4655] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.632 [INFO][4655] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.642 [INFO][4655] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.669 [INFO][4655] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.88.194/26] block=192.168.88.192/26 handle="k8s-pod-network.591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.669 [INFO][4655] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.194/26] handle="k8s-pod-network.591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.670 [INFO][4655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:39.726114 containerd[1740]: 2025-04-30 00:38:39.670 [INFO][4655] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.194/26] IPv6=[] ContainerID="591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" HandleID="k8s-pod-network.591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:39.726669 containerd[1740]: 2025-04-30 00:38:39.674 [INFO][4636] cni-plugin/k8s.go 386: Populated endpoint ContainerID="591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" Namespace="calico-system" Pod="csi-node-driver-7ppvf" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb2e93f5-34f8-40e2-8427-80d1c7db355a", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"", Pod:"csi-node-driver-7ppvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid60038a92c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:39.726669 containerd[1740]: 2025-04-30 00:38:39.675 [INFO][4636] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.194/32] ContainerID="591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" Namespace="calico-system" Pod="csi-node-driver-7ppvf" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:39.726669 containerd[1740]: 2025-04-30 00:38:39.675 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid60038a92c3 ContainerID="591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" Namespace="calico-system" Pod="csi-node-driver-7ppvf" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:39.726669 containerd[1740]: 2025-04-30 00:38:39.680 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" Namespace="calico-system" Pod="csi-node-driver-7ppvf" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:39.726669 containerd[1740]: 2025-04-30 00:38:39.681 
[INFO][4636] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" Namespace="calico-system" Pod="csi-node-driver-7ppvf" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb2e93f5-34f8-40e2-8427-80d1c7db355a", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e", Pod:"csi-node-driver-7ppvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid60038a92c3", MAC:"aa:c3:02:4b:bf:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:39.726669 containerd[1740]: 2025-04-30 00:38:39.720 [INFO][4636] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e" Namespace="calico-system" Pod="csi-node-driver-7ppvf" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:39.735346 systemd[1]: Started cri-containerd-d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60.scope - libcontainer container d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60. Apr 30 00:38:39.763223 containerd[1740]: time="2025-04-30T00:38:39.761882097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:39.763223 containerd[1740]: time="2025-04-30T00:38:39.761931377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:39.763223 containerd[1740]: time="2025-04-30T00:38:39.761946017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:39.763223 containerd[1740]: time="2025-04-30T00:38:39.762011017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:39.781364 systemd[1]: Started cri-containerd-591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e.scope - libcontainer container 591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e. 
Apr 30 00:38:39.812382 containerd[1740]: time="2025-04-30T00:38:39.812341319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7ppvf,Uid:fb2e93f5-34f8-40e2-8427-80d1c7db355a,Namespace:calico-system,Attempt:1,} returns sandbox id \"591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e\"" Apr 30 00:38:39.816087 containerd[1740]: time="2025-04-30T00:38:39.815768841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 00:38:39.830914 containerd[1740]: time="2025-04-30T00:38:39.830812528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854657b6f6-9ld68,Uid:538a6392-809d-4846-83ca-90e20dd564a7,Namespace:calico-system,Attempt:1,} returns sandbox id \"d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60\"" Apr 30 00:38:41.140344 systemd-networkd[1501]: calid60038a92c3: Gained IPv6LL Apr 30 00:38:41.308348 containerd[1740]: time="2025-04-30T00:38:41.308299714Z" level=info msg="StopPodSandbox for \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\"" Apr 30 00:38:41.309545 containerd[1740]: time="2025-04-30T00:38:41.309510354Z" level=info msg="StopPodSandbox for \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\"" Apr 30 00:38:41.396373 systemd-networkd[1501]: calicc223f17b57: Gained IPv6LL Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.377 [INFO][4824] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.377 [INFO][4824] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" iface="eth0" netns="/var/run/netns/cni-5ae8c526-0c62-eaae-cd3c-fee5c316fed5" Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.377 [INFO][4824] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" iface="eth0" netns="/var/run/netns/cni-5ae8c526-0c62-eaae-cd3c-fee5c316fed5" Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.379 [INFO][4824] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" iface="eth0" netns="/var/run/netns/cni-5ae8c526-0c62-eaae-cd3c-fee5c316fed5" Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.379 [INFO][4824] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.379 [INFO][4824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.431 [INFO][4842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" HandleID="k8s-pod-network.fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.432 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.432 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.458 [WARNING][4842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" HandleID="k8s-pod-network.fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.459 [INFO][4842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" HandleID="k8s-pod-network.fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.463 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:41.480363 containerd[1740]: 2025-04-30 00:38:41.472 [INFO][4824] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:41.484179 containerd[1740]: time="2025-04-30T00:38:41.482270512Z" level=info msg="TearDown network for sandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\" successfully" Apr 30 00:38:41.484179 containerd[1740]: time="2025-04-30T00:38:41.482475552Z" level=info msg="StopPodSandbox for \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\" returns successfully" Apr 30 00:38:41.485418 systemd[1]: run-netns-cni\x2d5ae8c526\x2d0c62\x2deaae\x2dcd3c\x2dfee5c316fed5.mount: Deactivated successfully. 
Apr 30 00:38:41.485859 containerd[1740]: time="2025-04-30T00:38:41.485523673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68866747c9-jg8pz,Uid:d52508d6-cb87-4a3e-bc62-6a667d5c126a,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.406 [INFO][4823] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.408 [INFO][4823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" iface="eth0" netns="/var/run/netns/cni-2e6f4f5b-611f-c5d3-6bde-0a6a08c51035" Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.408 [INFO][4823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" iface="eth0" netns="/var/run/netns/cni-2e6f4f5b-611f-c5d3-6bde-0a6a08c51035" Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.410 [INFO][4823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" iface="eth0" netns="/var/run/netns/cni-2e6f4f5b-611f-c5d3-6bde-0a6a08c51035" Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.410 [INFO][4823] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.410 [INFO][4823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.485 [INFO][4858] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" HandleID="k8s-pod-network.b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.486 [INFO][4858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.486 [INFO][4858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.500 [WARNING][4858] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" HandleID="k8s-pod-network.b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.500 [INFO][4858] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" HandleID="k8s-pod-network.b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.502 [INFO][4858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:41.512830 containerd[1740]: 2025-04-30 00:38:41.506 [INFO][4823] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:41.516318 containerd[1740]: time="2025-04-30T00:38:41.514000446Z" level=info msg="TearDown network for sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\" successfully" Apr 30 00:38:41.516318 containerd[1740]: time="2025-04-30T00:38:41.514035886Z" level=info msg="StopPodSandbox for \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\" returns successfully" Apr 30 00:38:41.517421 containerd[1740]: time="2025-04-30T00:38:41.517391928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v2wb9,Uid:bc57709c-30bf-43ea-8c23-9eaa31163a6e,Namespace:kube-system,Attempt:1,}" Apr 30 00:38:41.518110 systemd[1]: run-netns-cni\x2d2e6f4f5b\x2d611f\x2dc5d3\x2d6bde\x2d0a6a08c51035.mount: Deactivated successfully. 
Apr 30 00:38:41.686647 containerd[1740]: time="2025-04-30T00:38:41.684616483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:41.716196 containerd[1740]: time="2025-04-30T00:38:41.691434206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" Apr 30 00:38:41.716196 containerd[1740]: time="2025-04-30T00:38:41.696083328Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:41.714972 systemd-networkd[1501]: calif83e39b3c78: Link UP Apr 30 00:38:41.716383 containerd[1740]: time="2025-04-30T00:38:41.705095372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.888856131s" Apr 30 00:38:41.716383 containerd[1740]: time="2025-04-30T00:38:41.716300337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" Apr 30 00:38:41.715922 systemd-networkd[1501]: calif83e39b3c78: Gained carrier Apr 30 00:38:41.717011 containerd[1740]: time="2025-04-30T00:38:41.716641978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:41.723731 containerd[1740]: time="2025-04-30T00:38:41.723537781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 00:38:41.727047 containerd[1740]: 
time="2025-04-30T00:38:41.726257462Z" level=info msg="CreateContainer within sandbox \"591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.590 [INFO][4875] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.605 [INFO][4875] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0 calico-apiserver-68866747c9- calico-apiserver d52508d6-cb87-4a3e-bc62-6a667d5c126a 779 0 2025-04-30 00:38:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68866747c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-8ba35441fd calico-apiserver-68866747c9-jg8pz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif83e39b3c78 [] []}} ContainerID="53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-jg8pz" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.605 [INFO][4875] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-jg8pz" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.636 [INFO][4901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" HandleID="k8s-pod-network.53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.648 [INFO][4901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" HandleID="k8s-pod-network.53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bb330), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-8ba35441fd", "pod":"calico-apiserver-68866747c9-jg8pz", "timestamp":"2025-04-30 00:38:41.636715782 +0000 UTC"}, Hostname:"ci-4081.3.3-a-8ba35441fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.649 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.649 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.649 [INFO][4901] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-8ba35441fd' Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.651 [INFO][4901] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.657 [INFO][4901] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.661 [INFO][4901] ipam/ipam.go 489: Trying affinity for 192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.666 [INFO][4901] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.670 [INFO][4901] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.670 [INFO][4901] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.672 [INFO][4901] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.679 [INFO][4901] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.694 [INFO][4901] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.88.195/26] block=192.168.88.192/26 handle="k8s-pod-network.53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.695 [INFO][4901] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.195/26] handle="k8s-pod-network.53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.696 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:41.745461 containerd[1740]: 2025-04-30 00:38:41.696 [INFO][4901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.195/26] IPv6=[] ContainerID="53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" HandleID="k8s-pod-network.53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:41.746529 containerd[1740]: 2025-04-30 00:38:41.708 [INFO][4875] cni-plugin/k8s.go 386: Populated endpoint ContainerID="53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-jg8pz" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0", GenerateName:"calico-apiserver-68866747c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d52508d6-cb87-4a3e-bc62-6a667d5c126a", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"68866747c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"", Pod:"calico-apiserver-68866747c9-jg8pz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif83e39b3c78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:41.746529 containerd[1740]: 2025-04-30 00:38:41.708 [INFO][4875] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.195/32] ContainerID="53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-jg8pz" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:41.746529 containerd[1740]: 2025-04-30 00:38:41.708 [INFO][4875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif83e39b3c78 ContainerID="53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-jg8pz" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:41.746529 containerd[1740]: 2025-04-30 00:38:41.712 [INFO][4875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-jg8pz" 
WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:41.746529 containerd[1740]: 2025-04-30 00:38:41.714 [INFO][4875] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-jg8pz" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0", GenerateName:"calico-apiserver-68866747c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d52508d6-cb87-4a3e-bc62-6a667d5c126a", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68866747c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a", Pod:"calico-apiserver-68866747c9-jg8pz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif83e39b3c78", MAC:"ae:6a:4d:8b:83:35", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:41.746529 containerd[1740]: 2025-04-30 00:38:41.741 [INFO][4875] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-jg8pz" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:41.788300 containerd[1740]: time="2025-04-30T00:38:41.788149730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:41.788300 containerd[1740]: time="2025-04-30T00:38:41.788256970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:41.788560 containerd[1740]: time="2025-04-30T00:38:41.788281810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:41.788560 containerd[1740]: time="2025-04-30T00:38:41.788370490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:41.798951 containerd[1740]: time="2025-04-30T00:38:41.798696615Z" level=info msg="CreateContainer within sandbox \"591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"346efd43afb0416619dbad1ad012eb95748f54bb63d5884f66c19dbcc5cc82d1\"" Apr 30 00:38:41.802585 containerd[1740]: time="2025-04-30T00:38:41.802133896Z" level=info msg="StartContainer for \"346efd43afb0416619dbad1ad012eb95748f54bb63d5884f66c19dbcc5cc82d1\"" Apr 30 00:38:41.807319 systemd[1]: Started cri-containerd-53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a.scope - libcontainer container 53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a. Apr 30 00:38:41.809435 systemd-networkd[1501]: cali6be28499572: Link UP Apr 30 00:38:41.811137 systemd-networkd[1501]: cali6be28499572: Gained carrier Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.650 [INFO][4892] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.665 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0 coredns-7db6d8ff4d- kube-system bc57709c-30bf-43ea-8c23-9eaa31163a6e 780 0 2025-04-30 00:38:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-8ba35441fd coredns-7db6d8ff4d-v2wb9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6be28499572 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v2wb9" 
WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.666 [INFO][4892] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v2wb9" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.719 [INFO][4912] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" HandleID="k8s-pod-network.24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.744 [INFO][4912] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" HandleID="k8s-pod-network.24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000482ac0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-8ba35441fd", "pod":"coredns-7db6d8ff4d-v2wb9", "timestamp":"2025-04-30 00:38:41.719770219 +0000 UTC"}, Hostname:"ci-4081.3.3-a-8ba35441fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.745 [INFO][4912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.745 [INFO][4912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.745 [INFO][4912] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-8ba35441fd' Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.747 [INFO][4912] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.753 [INFO][4912] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.760 [INFO][4912] ipam/ipam.go 489: Trying affinity for 192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.763 [INFO][4912] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.765 [INFO][4912] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.765 [INFO][4912] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.768 [INFO][4912] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.775 [INFO][4912] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.796 [INFO][4912] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.88.196/26] block=192.168.88.192/26 handle="k8s-pod-network.24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.796 [INFO][4912] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.196/26] handle="k8s-pod-network.24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.797 [INFO][4912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:41.833784 containerd[1740]: 2025-04-30 00:38:41.797 [INFO][4912] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.196/26] IPv6=[] ContainerID="24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" HandleID="k8s-pod-network.24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:41.834352 containerd[1740]: 2025-04-30 00:38:41.804 [INFO][4892] cni-plugin/k8s.go 386: Populated endpoint ContainerID="24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v2wb9" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc57709c-30bf-43ea-8c23-9eaa31163a6e", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"", Pod:"coredns-7db6d8ff4d-v2wb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6be28499572", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:41.834352 containerd[1740]: 2025-04-30 00:38:41.804 [INFO][4892] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.196/32] ContainerID="24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v2wb9" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:41.834352 containerd[1740]: 2025-04-30 00:38:41.804 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6be28499572 ContainerID="24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v2wb9" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:41.834352 containerd[1740]: 2025-04-30 00:38:41.812 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v2wb9" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:41.834352 containerd[1740]: 2025-04-30 00:38:41.812 [INFO][4892] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v2wb9" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc57709c-30bf-43ea-8c23-9eaa31163a6e", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f", Pod:"coredns-7db6d8ff4d-v2wb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6be28499572", MAC:"e6:15:62:fc:49:a3", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:41.834352 containerd[1740]: 2025-04-30 00:38:41.830 [INFO][4892] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-v2wb9" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:41.855333 systemd[1]: Started cri-containerd-346efd43afb0416619dbad1ad012eb95748f54bb63d5884f66c19dbcc5cc82d1.scope - libcontainer container 346efd43afb0416619dbad1ad012eb95748f54bb63d5884f66c19dbcc5cc82d1. Apr 30 00:38:41.870619 containerd[1740]: time="2025-04-30T00:38:41.870322927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:41.870619 containerd[1740]: time="2025-04-30T00:38:41.870378047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:41.870619 containerd[1740]: time="2025-04-30T00:38:41.870392847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:41.871860 containerd[1740]: time="2025-04-30T00:38:41.870460367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:41.873644 containerd[1740]: time="2025-04-30T00:38:41.873617608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68866747c9-jg8pz,Uid:d52508d6-cb87-4a3e-bc62-6a667d5c126a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a\"" Apr 30 00:38:41.892377 systemd[1]: Started cri-containerd-24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f.scope - libcontainer container 24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f. Apr 30 00:38:41.905536 containerd[1740]: time="2025-04-30T00:38:41.905062062Z" level=info msg="StartContainer for \"346efd43afb0416619dbad1ad012eb95748f54bb63d5884f66c19dbcc5cc82d1\" returns successfully" Apr 30 00:38:41.932552 containerd[1740]: time="2025-04-30T00:38:41.932518755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v2wb9,Uid:bc57709c-30bf-43ea-8c23-9eaa31163a6e,Namespace:kube-system,Attempt:1,} returns sandbox id \"24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f\"" Apr 30 00:38:41.937171 containerd[1740]: time="2025-04-30T00:38:41.936894837Z" level=info msg="CreateContainer within sandbox \"24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:38:41.995113 containerd[1740]: time="2025-04-30T00:38:41.994988823Z" level=info msg="CreateContainer within sandbox \"24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b09e100784ead3d7fef16b435cacd8f5bd91c45bb5b0ff9bf462f7830ca8b26b\"" Apr 30 00:38:41.996459 containerd[1740]: time="2025-04-30T00:38:41.995628183Z" level=info msg="StartContainer for \"b09e100784ead3d7fef16b435cacd8f5bd91c45bb5b0ff9bf462f7830ca8b26b\"" Apr 30 00:38:42.019318 systemd[1]: Started 
cri-containerd-b09e100784ead3d7fef16b435cacd8f5bd91c45bb5b0ff9bf462f7830ca8b26b.scope - libcontainer container b09e100784ead3d7fef16b435cacd8f5bd91c45bb5b0ff9bf462f7830ca8b26b. Apr 30 00:38:42.055619 containerd[1740]: time="2025-04-30T00:38:42.055581250Z" level=info msg="StartContainer for \"b09e100784ead3d7fef16b435cacd8f5bd91c45bb5b0ff9bf462f7830ca8b26b\" returns successfully" Apr 30 00:38:42.511545 kubelet[3217]: I0430 00:38:42.511482 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v2wb9" podStartSLOduration=33.511456256 podStartE2EDuration="33.511456256s" podCreationTimestamp="2025-04-30 00:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:38:42.510503575 +0000 UTC m=+48.311559042" watchObservedRunningTime="2025-04-30 00:38:42.511456256 +0000 UTC m=+48.312511723" Apr 30 00:38:42.932599 systemd-networkd[1501]: calif83e39b3c78: Gained IPv6LL Apr 30 00:38:43.308246 containerd[1740]: time="2025-04-30T00:38:43.308206495Z" level=info msg="StopPodSandbox for \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\"" Apr 30 00:38:43.316296 systemd-networkd[1501]: cali6be28499572: Gained IPv6LL Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.376 [INFO][5137] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.376 [INFO][5137] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" iface="eth0" netns="/var/run/netns/cni-730c226a-e6bd-86ad-78b8-4e823fd882b8" Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.378 [INFO][5137] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" iface="eth0" netns="/var/run/netns/cni-730c226a-e6bd-86ad-78b8-4e823fd882b8" Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.379 [INFO][5137] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" iface="eth0" netns="/var/run/netns/cni-730c226a-e6bd-86ad-78b8-4e823fd882b8" Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.379 [INFO][5137] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.379 [INFO][5137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.404 [INFO][5144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" HandleID="k8s-pod-network.0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.405 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.405 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.414 [WARNING][5144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" HandleID="k8s-pod-network.0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.414 [INFO][5144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" HandleID="k8s-pod-network.0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.416 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:43.420668 containerd[1740]: 2025-04-30 00:38:43.418 [INFO][5137] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:43.422140 containerd[1740]: time="2025-04-30T00:38:43.422104826Z" level=info msg="TearDown network for sandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\" successfully" Apr 30 00:38:43.422317 containerd[1740]: time="2025-04-30T00:38:43.422248186Z" level=info msg="StopPodSandbox for \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\" returns successfully" Apr 30 00:38:43.426503 systemd[1]: run-netns-cni\x2d730c226a\x2de6bd\x2d86ad\x2d78b8\x2d4e823fd882b8.mount: Deactivated successfully. 
Apr 30 00:38:43.429357 containerd[1740]: time="2025-04-30T00:38:43.429321509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xhxlm,Uid:39b31d14-adbd-40cc-aaae-630914635b7c,Namespace:kube-system,Attempt:1,}" Apr 30 00:38:44.085202 kubelet[3217]: I0430 00:38:44.085131 3217 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:38:44.204216 containerd[1740]: time="2025-04-30T00:38:44.200533118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:44.206805 containerd[1740]: time="2025-04-30T00:38:44.206761681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" Apr 30 00:38:44.226262 containerd[1740]: time="2025-04-30T00:38:44.226141489Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:44.237668 containerd[1740]: time="2025-04-30T00:38:44.237501174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:44.239134 containerd[1740]: time="2025-04-30T00:38:44.239081574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 2.515508113s" Apr 30 00:38:44.239335 containerd[1740]: time="2025-04-30T00:38:44.239188334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" 
returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" Apr 30 00:38:44.242875 containerd[1740]: time="2025-04-30T00:38:44.242776056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:38:44.264712 containerd[1740]: time="2025-04-30T00:38:44.264297465Z" level=info msg="CreateContainer within sandbox \"d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 00:38:44.310833 containerd[1740]: time="2025-04-30T00:38:44.310797645Z" level=info msg="StopPodSandbox for \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\"" Apr 30 00:38:44.316279 containerd[1740]: time="2025-04-30T00:38:44.315791247Z" level=info msg="CreateContainer within sandbox \"d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4c177804c4b72387cf6fc1cc629a351de626676583258d182b80ba27526e43ff\"" Apr 30 00:38:44.318910 containerd[1740]: time="2025-04-30T00:38:44.317084847Z" level=info msg="StartContainer for \"4c177804c4b72387cf6fc1cc629a351de626676583258d182b80ba27526e43ff\"" Apr 30 00:38:44.350317 systemd[1]: Started cri-containerd-4c177804c4b72387cf6fc1cc629a351de626676583258d182b80ba27526e43ff.scope - libcontainer container 4c177804c4b72387cf6fc1cc629a351de626676583258d182b80ba27526e43ff. 
Apr 30 00:38:44.406832 containerd[1740]: time="2025-04-30T00:38:44.406718325Z" level=info msg="StartContainer for \"4c177804c4b72387cf6fc1cc629a351de626676583258d182b80ba27526e43ff\" returns successfully" Apr 30 00:38:44.479936 systemd-networkd[1501]: cali6da6b2cf131: Link UP Apr 30 00:38:44.481201 systemd-networkd[1501]: cali6da6b2cf131: Gained carrier Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.430 [INFO][5221] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.430 [INFO][5221] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" iface="eth0" netns="/var/run/netns/cni-50be3959-176a-b37a-faa2-9510728e970d" Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.430 [INFO][5221] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" iface="eth0" netns="/var/run/netns/cni-50be3959-176a-b37a-faa2-9510728e970d" Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.431 [INFO][5221] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" iface="eth0" netns="/var/run/netns/cni-50be3959-176a-b37a-faa2-9510728e970d" Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.431 [INFO][5221] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.431 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.454 [INFO][5250] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" HandleID="k8s-pod-network.5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.454 [INFO][5250] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.462 [INFO][5250] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.481 [WARNING][5250] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" HandleID="k8s-pod-network.5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.481 [INFO][5250] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" HandleID="k8s-pod-network.5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.484 [INFO][5250] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:44.498525 containerd[1740]: 2025-04-30 00:38:44.492 [INFO][5221] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:44.500550 containerd[1740]: time="2025-04-30T00:38:44.499415364Z" level=info msg="TearDown network for sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\" successfully" Apr 30 00:38:44.500550 containerd[1740]: time="2025-04-30T00:38:44.499455684Z" level=info msg="StopPodSandbox for \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\" returns successfully" Apr 30 00:38:44.500550 containerd[1740]: time="2025-04-30T00:38:44.500400884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68866747c9-tfllp,Uid:ebe487b6-7b72-4107-a215-c47c7bf75a1e,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.235 [INFO][5172] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.267 [INFO][5172] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0 coredns-7db6d8ff4d- kube-system 39b31d14-adbd-40cc-aaae-630914635b7c 809 0 2025-04-30 00:38:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-8ba35441fd coredns-7db6d8ff4d-xhxlm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6da6b2cf131 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xhxlm" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.267 [INFO][5172] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xhxlm" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.299 [INFO][5189] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" HandleID="k8s-pod-network.083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.318 [INFO][5189] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" HandleID="k8s-pod-network.083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb20), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-8ba35441fd", "pod":"coredns-7db6d8ff4d-xhxlm", "timestamp":"2025-04-30 00:38:44.29976096 +0000 UTC"}, Hostname:"ci-4081.3.3-a-8ba35441fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.318 [INFO][5189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.318 [INFO][5189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.318 [INFO][5189] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-8ba35441fd' Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.326 [INFO][5189] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.332 [INFO][5189] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.340 [INFO][5189] ipam/ipam.go 489: Trying affinity for 192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.342 [INFO][5189] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.386 [INFO][5189] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.387 [INFO][5189] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.192/26 
handle="k8s-pod-network.083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.390 [INFO][5189] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7 Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.445 [INFO][5189] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.461 [INFO][5189] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.197/26] block=192.168.88.192/26 handle="k8s-pod-network.083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.461 [INFO][5189] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.197/26] handle="k8s-pod-network.083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.461 [INFO][5189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:38:44.520590 containerd[1740]: 2025-04-30 00:38:44.461 [INFO][5189] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.197/26] IPv6=[] ContainerID="083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" HandleID="k8s-pod-network.083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:44.521145 containerd[1740]: 2025-04-30 00:38:44.464 [INFO][5172] cni-plugin/k8s.go 386: Populated endpoint ContainerID="083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xhxlm" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"39b31d14-adbd-40cc-aaae-630914635b7c", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"", Pod:"coredns-7db6d8ff4d-xhxlm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali6da6b2cf131", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:44.521145 containerd[1740]: 2025-04-30 00:38:44.465 [INFO][5172] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.197/32] ContainerID="083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xhxlm" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:44.521145 containerd[1740]: 2025-04-30 00:38:44.466 [INFO][5172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6da6b2cf131 ContainerID="083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xhxlm" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:44.521145 containerd[1740]: 2025-04-30 00:38:44.481 [INFO][5172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xhxlm" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:44.521145 containerd[1740]: 2025-04-30 00:38:44.483 [INFO][5172] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xhxlm" 
WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"39b31d14-adbd-40cc-aaae-630914635b7c", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7", Pod:"coredns-7db6d8ff4d-xhxlm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6da6b2cf131", MAC:"2a:01:7a:0f:86:94", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:44.521145 containerd[1740]: 2025-04-30 00:38:44.507 [INFO][5172] 
cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xhxlm" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:44.594128 containerd[1740]: time="2025-04-30T00:38:44.593687363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:44.594128 containerd[1740]: time="2025-04-30T00:38:44.593850523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:44.594128 containerd[1740]: time="2025-04-30T00:38:44.593879603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:44.594128 containerd[1740]: time="2025-04-30T00:38:44.593977163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:44.617480 systemd[1]: Started cri-containerd-083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7.scope - libcontainer container 083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7. 
Apr 30 00:38:44.654691 kubelet[3217]: I0430 00:38:44.654408 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-854657b6f6-9ld68" podStartSLOduration=22.244519302 podStartE2EDuration="26.654389789s" podCreationTimestamp="2025-04-30 00:38:18 +0000 UTC" firstStartedPulling="2025-04-30 00:38:39.832447129 +0000 UTC m=+45.633502596" lastFinishedPulling="2025-04-30 00:38:44.242317616 +0000 UTC m=+50.043373083" observedRunningTime="2025-04-30 00:38:44.558600149 +0000 UTC m=+50.359655616" watchObservedRunningTime="2025-04-30 00:38:44.654389789 +0000 UTC m=+50.455445256" Apr 30 00:38:44.712895 containerd[1740]: time="2025-04-30T00:38:44.712784613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xhxlm,Uid:39b31d14-adbd-40cc-aaae-630914635b7c,Namespace:kube-system,Attempt:1,} returns sandbox id \"083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7\"" Apr 30 00:38:44.719056 containerd[1740]: time="2025-04-30T00:38:44.718904696Z" level=info msg="CreateContainer within sandbox \"083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:38:44.768973 containerd[1740]: time="2025-04-30T00:38:44.768923797Z" level=info msg="CreateContainer within sandbox \"083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cff640a3b236ca10ee09b01122f020aa8acd3ac54101c3b4bdbe375703f30c58\"" Apr 30 00:38:44.769877 containerd[1740]: time="2025-04-30T00:38:44.769500717Z" level=info msg="StartContainer for \"cff640a3b236ca10ee09b01122f020aa8acd3ac54101c3b4bdbe375703f30c58\"" Apr 30 00:38:44.798954 systemd-networkd[1501]: calica849921580: Link UP Apr 30 00:38:44.801897 systemd-networkd[1501]: calica849921580: Gained carrier Apr 30 00:38:44.827911 systemd[1]: Started 
cri-containerd-cff640a3b236ca10ee09b01122f020aa8acd3ac54101c3b4bdbe375703f30c58.scope - libcontainer container cff640a3b236ca10ee09b01122f020aa8acd3ac54101c3b4bdbe375703f30c58. Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.655 [INFO][5317] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.678 [INFO][5317] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0 calico-apiserver-68866747c9- calico-apiserver ebe487b6-7b72-4107-a215-c47c7bf75a1e 820 0 2025-04-30 00:38:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68866747c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-8ba35441fd calico-apiserver-68866747c9-tfllp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calica849921580 [] []}} ContainerID="077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-tfllp" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.678 [INFO][5317] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-tfllp" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.731 [INFO][5348] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" 
HandleID="k8s-pod-network.077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.744 [INFO][5348] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" HandleID="k8s-pod-network.077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027b800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-8ba35441fd", "pod":"calico-apiserver-68866747c9-tfllp", "timestamp":"2025-04-30 00:38:44.731699461 +0000 UTC"}, Hostname:"ci-4081.3.3-a-8ba35441fd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.744 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.744 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.744 [INFO][5348] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-8ba35441fd' Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.747 [INFO][5348] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.750 [INFO][5348] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.756 [INFO][5348] ipam/ipam.go 489: Trying affinity for 192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.757 [INFO][5348] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.760 [INFO][5348] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.760 [INFO][5348] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.764 [INFO][5348] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349 Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.770 [INFO][5348] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.786 [INFO][5348] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.88.198/26] block=192.168.88.192/26 handle="k8s-pod-network.077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.786 [INFO][5348] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.198/26] handle="k8s-pod-network.077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" host="ci-4081.3.3-a-8ba35441fd" Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.786 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:44.830637 containerd[1740]: 2025-04-30 00:38:44.786 [INFO][5348] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.198/26] IPv6=[] ContainerID="077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" HandleID="k8s-pod-network.077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:44.831628 containerd[1740]: 2025-04-30 00:38:44.792 [INFO][5317] cni-plugin/k8s.go 386: Populated endpoint ContainerID="077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-tfllp" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0", GenerateName:"calico-apiserver-68866747c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebe487b6-7b72-4107-a215-c47c7bf75a1e", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"68866747c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"", Pod:"calico-apiserver-68866747c9-tfllp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica849921580", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:44.831628 containerd[1740]: 2025-04-30 00:38:44.792 [INFO][5317] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.198/32] ContainerID="077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-tfllp" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:44.831628 containerd[1740]: 2025-04-30 00:38:44.792 [INFO][5317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica849921580 ContainerID="077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-tfllp" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:44.831628 containerd[1740]: 2025-04-30 00:38:44.802 [INFO][5317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-tfllp" 
WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:44.831628 containerd[1740]: 2025-04-30 00:38:44.802 [INFO][5317] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-tfllp" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0", GenerateName:"calico-apiserver-68866747c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebe487b6-7b72-4107-a215-c47c7bf75a1e", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68866747c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349", Pod:"calico-apiserver-68866747c9-tfllp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica849921580", MAC:"9e:97:a4:00:89:7a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:44.831628 containerd[1740]: 2025-04-30 00:38:44.820 [INFO][5317] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349" Namespace="calico-apiserver" Pod="calico-apiserver-68866747c9-tfllp" WorkloadEndpoint="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:44.881092 containerd[1740]: time="2025-04-30T00:38:44.880740644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:38:44.881092 containerd[1740]: time="2025-04-30T00:38:44.880797404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:38:44.881092 containerd[1740]: time="2025-04-30T00:38:44.880811804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:44.881092 containerd[1740]: time="2025-04-30T00:38:44.880885804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:38:44.915339 systemd[1]: Started cri-containerd-077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349.scope - libcontainer container 077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349. 
Apr 30 00:38:44.918850 containerd[1740]: time="2025-04-30T00:38:44.918523740Z" level=info msg="StartContainer for \"cff640a3b236ca10ee09b01122f020aa8acd3ac54101c3b4bdbe375703f30c58\" returns successfully" Apr 30 00:38:44.992943 containerd[1740]: time="2025-04-30T00:38:44.992892571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68866747c9-tfllp,Uid:ebe487b6-7b72-4107-a215-c47c7bf75a1e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349\"" Apr 30 00:38:45.123197 kernel: bpftool[5491]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 00:38:45.205093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751681124.mount: Deactivated successfully. Apr 30 00:38:45.205364 systemd[1]: run-netns-cni\x2d50be3959\x2d176a\x2db37a\x2dfaa2\x2d9510728e970d.mount: Deactivated successfully. Apr 30 00:38:45.373366 systemd-networkd[1501]: vxlan.calico: Link UP Apr 30 00:38:45.373373 systemd-networkd[1501]: vxlan.calico: Gained carrier Apr 30 00:38:45.549098 kubelet[3217]: I0430 00:38:45.548704 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xhxlm" podStartSLOduration=36.548684004 podStartE2EDuration="36.548684004s" podCreationTimestamp="2025-04-30 00:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:38:45.547921444 +0000 UTC m=+51.348976911" watchObservedRunningTime="2025-04-30 00:38:45.548684004 +0000 UTC m=+51.349739471" Apr 30 00:38:45.813334 systemd-networkd[1501]: cali6da6b2cf131: Gained IPv6LL Apr 30 00:38:46.260276 systemd-networkd[1501]: calica849921580: Gained IPv6LL Apr 30 00:38:47.092447 systemd-networkd[1501]: vxlan.calico: Gained IPv6LL Apr 30 00:38:48.329294 containerd[1740]: time="2025-04-30T00:38:48.329245172Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:48.331622 containerd[1740]: time="2025-04-30T00:38:48.331478293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" Apr 30 00:38:48.336549 containerd[1740]: time="2025-04-30T00:38:48.336200215Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:48.340235 containerd[1740]: time="2025-04-30T00:38:48.340197656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:48.341012 containerd[1740]: time="2025-04-30T00:38:48.340977617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 4.098169841s" Apr 30 00:38:48.341012 containerd[1740]: time="2025-04-30T00:38:48.341009737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:38:48.343548 containerd[1740]: time="2025-04-30T00:38:48.343527498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 00:38:48.344525 containerd[1740]: time="2025-04-30T00:38:48.344211698Z" level=info msg="CreateContainer within sandbox \"53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 
00:38:48.390859 containerd[1740]: time="2025-04-30T00:38:48.390656518Z" level=info msg="CreateContainer within sandbox \"53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"05ab772e3718106ec767705075c49a1b3d95987c6ebaf51db087e068a95318cc\"" Apr 30 00:38:48.391332 containerd[1740]: time="2025-04-30T00:38:48.391297918Z" level=info msg="StartContainer for \"05ab772e3718106ec767705075c49a1b3d95987c6ebaf51db087e068a95318cc\"" Apr 30 00:38:48.423363 systemd[1]: Started cri-containerd-05ab772e3718106ec767705075c49a1b3d95987c6ebaf51db087e068a95318cc.scope - libcontainer container 05ab772e3718106ec767705075c49a1b3d95987c6ebaf51db087e068a95318cc. Apr 30 00:38:48.457208 containerd[1740]: time="2025-04-30T00:38:48.457141345Z" level=info msg="StartContainer for \"05ab772e3718106ec767705075c49a1b3d95987c6ebaf51db087e068a95318cc\" returns successfully" Apr 30 00:38:48.557281 kubelet[3217]: I0430 00:38:48.556647 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68866747c9-jg8pz" podStartSLOduration=26.09179434 podStartE2EDuration="32.556630667s" podCreationTimestamp="2025-04-30 00:38:16 +0000 UTC" firstStartedPulling="2025-04-30 00:38:41.87738261 +0000 UTC m=+47.678438077" lastFinishedPulling="2025-04-30 00:38:48.342218937 +0000 UTC m=+54.143274404" observedRunningTime="2025-04-30 00:38:48.556089067 +0000 UTC m=+54.357144534" watchObservedRunningTime="2025-04-30 00:38:48.556630667 +0000 UTC m=+54.357686174" Apr 30 00:38:49.543482 kubelet[3217]: I0430 00:38:49.543446 3217 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:38:49.832730 containerd[1740]: time="2025-04-30T00:38:49.832259643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:49.835466 containerd[1740]: 
time="2025-04-30T00:38:49.835203924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" Apr 30 00:38:49.842794 containerd[1740]: time="2025-04-30T00:38:49.841962327Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:49.847520 containerd[1740]: time="2025-04-30T00:38:49.847464449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:49.848657 containerd[1740]: time="2025-04-30T00:38:49.848616850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.504347992s" Apr 30 00:38:49.848657 containerd[1740]: time="2025-04-30T00:38:49.848651890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" Apr 30 00:38:49.852642 containerd[1740]: time="2025-04-30T00:38:49.852617971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:38:49.854359 containerd[1740]: time="2025-04-30T00:38:49.854322172Z" level=info msg="CreateContainer within sandbox \"591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 00:38:49.919177 containerd[1740]: time="2025-04-30T00:38:49.919013359Z" level=info 
msg="CreateContainer within sandbox \"591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"20b8ef35e34ecdd1e187e71f73d7b39aa7562a2e453ce75c4d1b031d42d982fc\"" Apr 30 00:38:49.919650 containerd[1740]: time="2025-04-30T00:38:49.919591999Z" level=info msg="StartContainer for \"20b8ef35e34ecdd1e187e71f73d7b39aa7562a2e453ce75c4d1b031d42d982fc\"" Apr 30 00:38:49.957299 systemd[1]: Started cri-containerd-20b8ef35e34ecdd1e187e71f73d7b39aa7562a2e453ce75c4d1b031d42d982fc.scope - libcontainer container 20b8ef35e34ecdd1e187e71f73d7b39aa7562a2e453ce75c4d1b031d42d982fc. Apr 30 00:38:49.990398 containerd[1740]: time="2025-04-30T00:38:49.990178949Z" level=info msg="StartContainer for \"20b8ef35e34ecdd1e187e71f73d7b39aa7562a2e453ce75c4d1b031d42d982fc\" returns successfully" Apr 30 00:38:50.233047 containerd[1740]: time="2025-04-30T00:38:50.232332331Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:38:50.234461 containerd[1740]: time="2025-04-30T00:38:50.234435852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 00:38:50.236784 containerd[1740]: time="2025-04-30T00:38:50.236748253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 384.017082ms" Apr 30 00:38:50.236867 containerd[1740]: time="2025-04-30T00:38:50.236785413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 
00:38:50.240377 containerd[1740]: time="2025-04-30T00:38:50.240035694Z" level=info msg="CreateContainer within sandbox \"077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:38:50.286127 containerd[1740]: time="2025-04-30T00:38:50.286090953Z" level=info msg="CreateContainer within sandbox \"077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aed74b4bf3f22e7c4ce65321f8e4017713fbf9b8a30abd0960145a9130a07e73\"" Apr 30 00:38:50.289418 containerd[1740]: time="2025-04-30T00:38:50.287573194Z" level=info msg="StartContainer for \"aed74b4bf3f22e7c4ce65321f8e4017713fbf9b8a30abd0960145a9130a07e73\"" Apr 30 00:38:50.315324 systemd[1]: Started cri-containerd-aed74b4bf3f22e7c4ce65321f8e4017713fbf9b8a30abd0960145a9130a07e73.scope - libcontainer container aed74b4bf3f22e7c4ce65321f8e4017713fbf9b8a30abd0960145a9130a07e73. 
Apr 30 00:38:50.350756 containerd[1740]: time="2025-04-30T00:38:50.350704660Z" level=info msg="StartContainer for \"aed74b4bf3f22e7c4ce65321f8e4017713fbf9b8a30abd0960145a9130a07e73\" returns successfully" Apr 30 00:38:50.428369 kubelet[3217]: I0430 00:38:50.428331 3217 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 00:38:50.435474 kubelet[3217]: I0430 00:38:50.434883 3217 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 00:38:50.593484 kubelet[3217]: I0430 00:38:50.593353 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7ppvf" podStartSLOduration=22.558299153 podStartE2EDuration="32.593332602s" podCreationTimestamp="2025-04-30 00:38:18 +0000 UTC" firstStartedPulling="2025-04-30 00:38:39.815027841 +0000 UTC m=+45.616083308" lastFinishedPulling="2025-04-30 00:38:49.85006129 +0000 UTC m=+55.651116757" observedRunningTime="2025-04-30 00:38:50.569062952 +0000 UTC m=+56.370118419" watchObservedRunningTime="2025-04-30 00:38:50.593332602 +0000 UTC m=+56.394388069" Apr 30 00:38:51.307185 kubelet[3217]: I0430 00:38:51.306595 3217 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:38:51.338268 kubelet[3217]: I0430 00:38:51.338206 3217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68866747c9-tfllp" podStartSLOduration=30.095167474 podStartE2EDuration="35.338188355s" podCreationTimestamp="2025-04-30 00:38:16 +0000 UTC" firstStartedPulling="2025-04-30 00:38:44.994473532 +0000 UTC m=+50.795528999" lastFinishedPulling="2025-04-30 00:38:50.237494413 +0000 UTC m=+56.038549880" observedRunningTime="2025-04-30 00:38:50.595298243 +0000 UTC m=+56.396353710" watchObservedRunningTime="2025-04-30 
00:38:51.338188355 +0000 UTC m=+57.139243862" Apr 30 00:38:54.307846 containerd[1740]: time="2025-04-30T00:38:54.307781429Z" level=info msg="StopPodSandbox for \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\"" Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.349 [WARNING][5743] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc57709c-30bf-43ea-8c23-9eaa31163a6e", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f", Pod:"coredns-7db6d8ff4d-v2wb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6be28499572", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.349 [INFO][5743] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.349 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" iface="eth0" netns="" Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.349 [INFO][5743] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.349 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.368 [INFO][5751] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" HandleID="k8s-pod-network.b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.368 [INFO][5751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.368 [INFO][5751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.378 [WARNING][5751] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" HandleID="k8s-pod-network.b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.378 [INFO][5751] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" HandleID="k8s-pod-network.b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.380 [INFO][5751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:54.388346 containerd[1740]: 2025-04-30 00:38:54.384 [INFO][5743] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:54.388749 containerd[1740]: time="2025-04-30T00:38:54.388384469Z" level=info msg="TearDown network for sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\" successfully" Apr 30 00:38:54.388749 containerd[1740]: time="2025-04-30T00:38:54.388406189Z" level=info msg="StopPodSandbox for \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\" returns successfully" Apr 30 00:38:54.389273 containerd[1740]: time="2025-04-30T00:38:54.389144830Z" level=info msg="RemovePodSandbox for \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\"" Apr 30 00:38:54.389355 containerd[1740]: time="2025-04-30T00:38:54.389316390Z" level=info msg="Forcibly stopping sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\"" Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.428 [WARNING][5770] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc57709c-30bf-43ea-8c23-9eaa31163a6e", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"24919e1f81de4da20e19721d57791003e82609026eb320d2bfac43fb5769097f", Pod:"coredns-7db6d8ff4d-v2wb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6be28499572", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.429 [INFO][5770] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.429 [INFO][5770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" iface="eth0" netns="" Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.429 [INFO][5770] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.429 [INFO][5770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.447 [INFO][5777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" HandleID="k8s-pod-network.b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.447 [INFO][5777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.447 [INFO][5777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.457 [WARNING][5777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" HandleID="k8s-pod-network.b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.458 [INFO][5777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" HandleID="k8s-pod-network.b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--v2wb9-eth0" Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.459 [INFO][5777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:54.465033 containerd[1740]: 2025-04-30 00:38:54.462 [INFO][5770] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be" Apr 30 00:38:54.465591 containerd[1740]: time="2025-04-30T00:38:54.465051508Z" level=info msg="TearDown network for sandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\" successfully" Apr 30 00:38:54.473402 containerd[1740]: time="2025-04-30T00:38:54.473365072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:38:54.473507 containerd[1740]: time="2025-04-30T00:38:54.473437632Z" level=info msg="RemovePodSandbox \"b195b61dcb852a25406bea9e093d8fb04e1755d7b27a5c9c7fda6cb8cd2952be\" returns successfully" Apr 30 00:38:54.475043 containerd[1740]: time="2025-04-30T00:38:54.474491673Z" level=info msg="StopPodSandbox for \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\"" Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.512 [WARNING][5795] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0", GenerateName:"calico-kube-controllers-854657b6f6-", Namespace:"calico-system", SelfLink:"", UID:"538a6392-809d-4846-83ca-90e20dd564a7", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854657b6f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60", Pod:"calico-kube-controllers-854657b6f6-9ld68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.193/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc223f17b57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.512 [INFO][5795] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.512 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" iface="eth0" netns="" Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.512 [INFO][5795] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.512 [INFO][5795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.539 [INFO][5803] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" HandleID="k8s-pod-network.b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.539 [INFO][5803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.539 [INFO][5803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.547 [WARNING][5803] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" HandleID="k8s-pod-network.b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.547 [INFO][5803] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" HandleID="k8s-pod-network.b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.548 [INFO][5803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:54.551000 containerd[1740]: 2025-04-30 00:38:54.549 [INFO][5795] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:54.551612 containerd[1740]: time="2025-04-30T00:38:54.551041791Z" level=info msg="TearDown network for sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\" successfully" Apr 30 00:38:54.551612 containerd[1740]: time="2025-04-30T00:38:54.551065431Z" level=info msg="StopPodSandbox for \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\" returns successfully" Apr 30 00:38:54.553184 containerd[1740]: time="2025-04-30T00:38:54.551899592Z" level=info msg="RemovePodSandbox for \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\"" Apr 30 00:38:54.553184 containerd[1740]: time="2025-04-30T00:38:54.551944952Z" level=info msg="Forcibly stopping sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\"" Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.617 [WARNING][5821] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0", GenerateName:"calico-kube-controllers-854657b6f6-", Namespace:"calico-system", SelfLink:"", UID:"538a6392-809d-4846-83ca-90e20dd564a7", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854657b6f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"d1e58347129bc7c1264dde42239a9a310a98bccf4b240a21dd4186a9596aaa60", Pod:"calico-kube-controllers-854657b6f6-9ld68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc223f17b57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.618 [INFO][5821] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.618 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" iface="eth0" netns="" Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.618 [INFO][5821] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.618 [INFO][5821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.640 [INFO][5830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" HandleID="k8s-pod-network.b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.640 [INFO][5830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.640 [INFO][5830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.648 [WARNING][5830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" HandleID="k8s-pod-network.b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.648 [INFO][5830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" HandleID="k8s-pod-network.b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--kube--controllers--854657b6f6--9ld68-eth0" Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.650 [INFO][5830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:54.654338 containerd[1740]: 2025-04-30 00:38:54.651 [INFO][5821] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27" Apr 30 00:38:54.654706 containerd[1740]: time="2025-04-30T00:38:54.654351443Z" level=info msg="TearDown network for sandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\" successfully" Apr 30 00:38:54.662984 containerd[1740]: time="2025-04-30T00:38:54.662938928Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:38:54.663087 containerd[1740]: time="2025-04-30T00:38:54.663038928Z" level=info msg="RemovePodSandbox \"b72cae2b2dbbe01f8a65555394c6d98b69eba44ecc5a13a93733ab3410befb27\" returns successfully" Apr 30 00:38:54.663666 containerd[1740]: time="2025-04-30T00:38:54.663555008Z" level=info msg="StopPodSandbox for \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\"" Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.720 [WARNING][5849] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb2e93f5-34f8-40e2-8427-80d1c7db355a", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e", Pod:"csi-node-driver-7ppvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid60038a92c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.721 [INFO][5849] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.721 [INFO][5849] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" iface="eth0" netns="" Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.721 [INFO][5849] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.721 [INFO][5849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.745 [INFO][5856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" HandleID="k8s-pod-network.747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.745 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.745 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.758 [WARNING][5856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" HandleID="k8s-pod-network.747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.758 [INFO][5856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" HandleID="k8s-pod-network.747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.759 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:54.761941 containerd[1740]: 2025-04-30 00:38:54.760 [INFO][5849] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:54.762470 containerd[1740]: time="2025-04-30T00:38:54.761975537Z" level=info msg="TearDown network for sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\" successfully" Apr 30 00:38:54.762470 containerd[1740]: time="2025-04-30T00:38:54.761999417Z" level=info msg="StopPodSandbox for \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\" returns successfully" Apr 30 00:38:54.763270 containerd[1740]: time="2025-04-30T00:38:54.762952178Z" level=info msg="RemovePodSandbox for \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\"" Apr 30 00:38:54.763270 containerd[1740]: time="2025-04-30T00:38:54.762997858Z" level=info msg="Forcibly stopping sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\"" Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.804 [WARNING][5874] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb2e93f5-34f8-40e2-8427-80d1c7db355a", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"591cd9cab9380b540bb1feeee3b32f39a3ea0e09c19cbc5f07390279b5cbce2e", Pod:"csi-node-driver-7ppvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid60038a92c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.804 [INFO][5874] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.804 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" iface="eth0" netns="" Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.804 [INFO][5874] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.804 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.826 [INFO][5881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" HandleID="k8s-pod-network.747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.826 [INFO][5881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.826 [INFO][5881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.834 [WARNING][5881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" HandleID="k8s-pod-network.747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.834 [INFO][5881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" HandleID="k8s-pod-network.747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Workload="ci--4081.3.3--a--8ba35441fd-k8s-csi--node--driver--7ppvf-eth0" Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.835 [INFO][5881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:54.837919 containerd[1740]: 2025-04-30 00:38:54.836 [INFO][5874] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60" Apr 30 00:38:54.838325 containerd[1740]: time="2025-04-30T00:38:54.838199776Z" level=info msg="TearDown network for sandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\" successfully" Apr 30 00:38:54.844826 containerd[1740]: time="2025-04-30T00:38:54.844788179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:38:54.844887 containerd[1740]: time="2025-04-30T00:38:54.844857539Z" level=info msg="RemovePodSandbox \"747301b7e3489c06b1adfea9572cf7fc6b18637a0b98a34a66e182905feb9e60\" returns successfully" Apr 30 00:38:54.845554 containerd[1740]: time="2025-04-30T00:38:54.845513380Z" level=info msg="StopPodSandbox for \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\"" Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.887 [WARNING][5899] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0", GenerateName:"calico-apiserver-68866747c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d52508d6-cb87-4a3e-bc62-6a667d5c126a", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68866747c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a", Pod:"calico-apiserver-68866747c9-jg8pz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif83e39b3c78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.888 [INFO][5899] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.888 [INFO][5899] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" iface="eth0" netns="" Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.888 [INFO][5899] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.888 [INFO][5899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.906 [INFO][5907] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" HandleID="k8s-pod-network.fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.906 [INFO][5907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.906 [INFO][5907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.914 [WARNING][5907] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" HandleID="k8s-pod-network.fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.914 [INFO][5907] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" HandleID="k8s-pod-network.fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.916 [INFO][5907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:54.919570 containerd[1740]: 2025-04-30 00:38:54.917 [INFO][5899] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:54.919570 containerd[1740]: time="2025-04-30T00:38:54.918537456Z" level=info msg="TearDown network for sandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\" successfully" Apr 30 00:38:54.919570 containerd[1740]: time="2025-04-30T00:38:54.918560896Z" level=info msg="StopPodSandbox for \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\" returns successfully" Apr 30 00:38:54.921201 containerd[1740]: time="2025-04-30T00:38:54.920817178Z" level=info msg="RemovePodSandbox for \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\"" Apr 30 00:38:54.921201 containerd[1740]: time="2025-04-30T00:38:54.920860258Z" level=info msg="Forcibly stopping sandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\"" Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.963 [WARNING][5925] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0", GenerateName:"calico-apiserver-68866747c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d52508d6-cb87-4a3e-bc62-6a667d5c126a", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68866747c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"53f5afdec686e88559671ca536a584a3c54de6bf83012fbdbd26a209754e000a", Pod:"calico-apiserver-68866747c9-jg8pz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif83e39b3c78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.963 [INFO][5925] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.965 [INFO][5925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" iface="eth0" netns="" Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.965 [INFO][5925] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.965 [INFO][5925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.985 [INFO][5933] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" HandleID="k8s-pod-network.fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.985 [INFO][5933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.985 [INFO][5933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.994 [WARNING][5933] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" HandleID="k8s-pod-network.fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.994 [INFO][5933] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" HandleID="k8s-pod-network.fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--jg8pz-eth0" Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.995 [INFO][5933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:54.998082 containerd[1740]: 2025-04-30 00:38:54.996 [INFO][5925] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96" Apr 30 00:38:54.998523 containerd[1740]: time="2025-04-30T00:38:54.998129216Z" level=info msg="TearDown network for sandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\" successfully" Apr 30 00:38:55.005682 containerd[1740]: time="2025-04-30T00:38:55.005623860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:38:55.007081 containerd[1740]: time="2025-04-30T00:38:55.005699580Z" level=info msg="RemovePodSandbox \"fce3f7c583b4e0d158b3309dcf541388709c52b592127b7e12e354720ea30f96\" returns successfully" Apr 30 00:38:55.007081 containerd[1740]: time="2025-04-30T00:38:55.006206541Z" level=info msg="StopPodSandbox for \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\"" Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.046 [WARNING][5953] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"39b31d14-adbd-40cc-aaae-630914635b7c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7", Pod:"coredns-7db6d8ff4d-xhxlm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6da6b2cf131", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.047 [INFO][5953] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.047 [INFO][5953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" iface="eth0" netns="" Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.047 [INFO][5953] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.047 [INFO][5953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.073 [INFO][5960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" HandleID="k8s-pod-network.0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.074 [INFO][5960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.074 [INFO][5960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.085 [WARNING][5960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" HandleID="k8s-pod-network.0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.086 [INFO][5960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" HandleID="k8s-pod-network.0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.087 [INFO][5960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:55.091098 containerd[1740]: 2025-04-30 00:38:55.089 [INFO][5953] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:55.092185 containerd[1740]: time="2025-04-30T00:38:55.091647344Z" level=info msg="TearDown network for sandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\" successfully" Apr 30 00:38:55.092185 containerd[1740]: time="2025-04-30T00:38:55.091812264Z" level=info msg="StopPodSandbox for \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\" returns successfully" Apr 30 00:38:55.092774 containerd[1740]: time="2025-04-30T00:38:55.092650744Z" level=info msg="RemovePodSandbox for \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\"" Apr 30 00:38:55.092774 containerd[1740]: time="2025-04-30T00:38:55.092680704Z" level=info msg="Forcibly stopping sandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\"" Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.141 [WARNING][5978] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"39b31d14-adbd-40cc-aaae-630914635b7c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"083d89f15ae4bea05978cc8ac653138eaf649551e2cf3755648731fbab7525e7", Pod:"coredns-7db6d8ff4d-xhxlm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6da6b2cf131", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.141 [INFO][5978] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.141 [INFO][5978] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" iface="eth0" netns="" Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.141 [INFO][5978] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.142 [INFO][5978] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.159 [INFO][5985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" HandleID="k8s-pod-network.0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.159 [INFO][5985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.159 [INFO][5985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.169 [WARNING][5985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" HandleID="k8s-pod-network.0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.169 [INFO][5985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" HandleID="k8s-pod-network.0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Workload="ci--4081.3.3--a--8ba35441fd-k8s-coredns--7db6d8ff4d--xhxlm-eth0" Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.171 [INFO][5985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:55.175072 containerd[1740]: 2025-04-30 00:38:55.172 [INFO][5978] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c" Apr 30 00:38:55.175072 containerd[1740]: time="2025-04-30T00:38:55.173965745Z" level=info msg="TearDown network for sandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\" successfully" Apr 30 00:38:55.181914 containerd[1740]: time="2025-04-30T00:38:55.181859909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:38:55.182036 containerd[1740]: time="2025-04-30T00:38:55.181927709Z" level=info msg="RemovePodSandbox \"0c033033c8da262dd7a3fb5d71caac3eeef5162f3ad3efa6cdaffe4ab7a05c3c\" returns successfully" Apr 30 00:38:55.182560 containerd[1740]: time="2025-04-30T00:38:55.182531309Z" level=info msg="StopPodSandbox for \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\"" Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.232 [WARNING][6004] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0", GenerateName:"calico-apiserver-68866747c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebe487b6-7b72-4107-a215-c47c7bf75a1e", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68866747c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349", Pod:"calico-apiserver-68866747c9-tfllp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica849921580", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.232 [INFO][6004] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.232 [INFO][6004] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" iface="eth0" netns="" Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.232 [INFO][6004] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.232 [INFO][6004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.263 [INFO][6011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" HandleID="k8s-pod-network.5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.263 [INFO][6011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.263 [INFO][6011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.276 [WARNING][6011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" HandleID="k8s-pod-network.5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.276 [INFO][6011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" HandleID="k8s-pod-network.5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.277 [INFO][6011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:55.281525 containerd[1740]: 2025-04-30 00:38:55.280 [INFO][6004] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:55.281974 containerd[1740]: time="2025-04-30T00:38:55.281570519Z" level=info msg="TearDown network for sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\" successfully" Apr 30 00:38:55.281974 containerd[1740]: time="2025-04-30T00:38:55.281595719Z" level=info msg="StopPodSandbox for \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\" returns successfully" Apr 30 00:38:55.282530 containerd[1740]: time="2025-04-30T00:38:55.282084760Z" level=info msg="RemovePodSandbox for \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\"" Apr 30 00:38:55.282530 containerd[1740]: time="2025-04-30T00:38:55.282121640Z" level=info msg="Forcibly stopping sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\"" Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.317 [WARNING][6029] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0", GenerateName:"calico-apiserver-68866747c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebe487b6-7b72-4107-a215-c47c7bf75a1e", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68866747c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-8ba35441fd", ContainerID:"077e5d54852171c925d1a8b53c334ba7ba3d0bf3eb245b295d6ed5d5b400c349", Pod:"calico-apiserver-68866747c9-tfllp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica849921580", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.317 [INFO][6029] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.318 [INFO][6029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" iface="eth0" netns="" Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.318 [INFO][6029] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.318 [INFO][6029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.338 [INFO][6036] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" HandleID="k8s-pod-network.5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.338 [INFO][6036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.338 [INFO][6036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.346 [WARNING][6036] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" HandleID="k8s-pod-network.5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.346 [INFO][6036] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" HandleID="k8s-pod-network.5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Workload="ci--4081.3.3--a--8ba35441fd-k8s-calico--apiserver--68866747c9--tfllp-eth0" Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.347 [INFO][6036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:38:55.350241 containerd[1740]: 2025-04-30 00:38:55.349 [INFO][6029] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b" Apr 30 00:38:55.351835 containerd[1740]: time="2025-04-30T00:38:55.350613634Z" level=info msg="TearDown network for sandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\" successfully" Apr 30 00:38:55.361307 containerd[1740]: time="2025-04-30T00:38:55.361264479Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:38:55.361404 containerd[1740]: time="2025-04-30T00:38:55.361335479Z" level=info msg="RemovePodSandbox \"5478cfe9b6f691429b756a36c28eabb06aa06d80d842600c4c750c754d93ee8b\" returns successfully" Apr 30 00:39:27.784074 systemd[1]: run-containerd-runc-k8s.io-4c177804c4b72387cf6fc1cc629a351de626676583258d182b80ba27526e43ff-runc.OdYuwR.mount: Deactivated successfully. 
Apr 30 00:40:01.510893 systemd[1]: Started sshd@7-10.200.20.14:22-10.200.16.10:48994.service - OpenSSH per-connection server daemon (10.200.16.10:48994). Apr 30 00:40:01.958023 sshd[6199]: Accepted publickey for core from 10.200.16.10 port 48994 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:01.959472 sshd[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:01.963635 systemd-logind[1684]: New session 10 of user core. Apr 30 00:40:01.971301 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:40:02.345349 sshd[6199]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:02.349376 systemd[1]: sshd@7-10.200.20.14:22-10.200.16.10:48994.service: Deactivated successfully. Apr 30 00:40:02.351625 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:40:02.352677 systemd-logind[1684]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:40:02.354144 systemd-logind[1684]: Removed session 10. Apr 30 00:40:07.435474 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:49004.service - OpenSSH per-connection server daemon (10.200.16.10:49004). Apr 30 00:40:07.875696 sshd[6240]: Accepted publickey for core from 10.200.16.10 port 49004 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:07.877016 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:07.880856 systemd-logind[1684]: New session 11 of user core. Apr 30 00:40:07.887318 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:40:08.265133 sshd[6240]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:08.268832 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:49004.service: Deactivated successfully. Apr 30 00:40:08.271855 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:40:08.272887 systemd-logind[1684]: Session 11 logged out. Waiting for processes to exit. 
Apr 30 00:40:08.273895 systemd-logind[1684]: Removed session 11. Apr 30 00:40:13.351383 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:55582.service - OpenSSH per-connection server daemon (10.200.16.10:55582). Apr 30 00:40:13.793819 sshd[6256]: Accepted publickey for core from 10.200.16.10 port 55582 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:13.795271 sshd[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:13.800775 systemd-logind[1684]: New session 12 of user core. Apr 30 00:40:13.806297 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:40:14.180351 sshd[6256]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:14.184866 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:55582.service: Deactivated successfully. Apr 30 00:40:14.187176 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 00:40:14.189814 systemd-logind[1684]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:40:14.191271 systemd-logind[1684]: Removed session 12. Apr 30 00:40:19.263043 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:43090.service - OpenSSH per-connection server daemon (10.200.16.10:43090). Apr 30 00:40:19.707748 sshd[6275]: Accepted publickey for core from 10.200.16.10 port 43090 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:19.709086 sshd[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:19.713212 systemd-logind[1684]: New session 13 of user core. Apr 30 00:40:19.720296 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:40:20.090510 sshd[6275]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:20.094089 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:43090.service: Deactivated successfully. Apr 30 00:40:20.096067 systemd[1]: session-13.scope: Deactivated successfully. 
Apr 30 00:40:20.098007 systemd-logind[1684]: Session 13 logged out. Waiting for processes to exit. Apr 30 00:40:20.099876 systemd-logind[1684]: Removed session 13. Apr 30 00:40:25.172802 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:43104.service - OpenSSH per-connection server daemon (10.200.16.10:43104). Apr 30 00:40:25.629597 sshd[6323]: Accepted publickey for core from 10.200.16.10 port 43104 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:25.631094 sshd[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:25.636244 systemd-logind[1684]: New session 14 of user core. Apr 30 00:40:25.641292 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 00:40:26.015614 sshd[6323]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:26.019307 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:43104.service: Deactivated successfully. Apr 30 00:40:26.020961 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:40:26.022732 systemd-logind[1684]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:40:26.024025 systemd-logind[1684]: Removed session 14. Apr 30 00:40:31.091644 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:59768.service - OpenSSH per-connection server daemon (10.200.16.10:59768). Apr 30 00:40:31.501715 sshd[6355]: Accepted publickey for core from 10.200.16.10 port 59768 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:31.503079 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:31.506856 systemd-logind[1684]: New session 15 of user core. Apr 30 00:40:31.516309 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 00:40:31.867357 sshd[6355]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:31.871304 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:59768.service: Deactivated successfully. 
Apr 30 00:40:31.873895 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:40:31.874788 systemd-logind[1684]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:40:31.875888 systemd-logind[1684]: Removed session 15. Apr 30 00:40:31.952484 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:59772.service - OpenSSH per-connection server daemon (10.200.16.10:59772). Apr 30 00:40:32.399087 sshd[6369]: Accepted publickey for core from 10.200.16.10 port 59772 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:32.400507 sshd[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:32.404367 systemd-logind[1684]: New session 16 of user core. Apr 30 00:40:32.412337 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:40:32.814678 sshd[6369]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:32.818358 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:59772.service: Deactivated successfully. Apr 30 00:40:32.820796 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:40:32.822566 systemd-logind[1684]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:40:32.823863 systemd-logind[1684]: Removed session 16. Apr 30 00:40:32.904465 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:59776.service - OpenSSH per-connection server daemon (10.200.16.10:59776). Apr 30 00:40:33.345602 sshd[6379]: Accepted publickey for core from 10.200.16.10 port 59776 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:33.346960 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:33.350830 systemd-logind[1684]: New session 17 of user core. Apr 30 00:40:33.356298 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 30 00:40:33.728377 sshd[6379]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:33.730653 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:59776.service: Deactivated successfully. Apr 30 00:40:33.732809 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:40:33.734306 systemd-logind[1684]: Session 17 logged out. Waiting for processes to exit. Apr 30 00:40:33.735225 systemd-logind[1684]: Removed session 17. Apr 30 00:40:38.811418 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:59790.service - OpenSSH per-connection server daemon (10.200.16.10:59790). Apr 30 00:40:39.250510 sshd[6392]: Accepted publickey for core from 10.200.16.10 port 59790 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:39.251999 sshd[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:39.256236 systemd-logind[1684]: New session 18 of user core. Apr 30 00:40:39.260385 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 00:40:39.637531 sshd[6392]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:39.641292 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:59790.service: Deactivated successfully. Apr 30 00:40:39.643035 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:40:39.643808 systemd-logind[1684]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:40:39.645014 systemd-logind[1684]: Removed session 18. Apr 30 00:40:44.719035 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:58502.service - OpenSSH per-connection server daemon (10.200.16.10:58502). Apr 30 00:40:45.162061 sshd[6407]: Accepted publickey for core from 10.200.16.10 port 58502 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:45.163189 sshd[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:45.168078 systemd-logind[1684]: New session 19 of user core. 
Apr 30 00:40:45.173297 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 00:40:45.542634 sshd[6407]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:45.546229 systemd-logind[1684]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:40:45.546787 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:58502.service: Deactivated successfully. Apr 30 00:40:45.549572 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:40:45.550596 systemd-logind[1684]: Removed session 19. Apr 30 00:40:50.617956 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:47154.service - OpenSSH per-connection server daemon (10.200.16.10:47154). Apr 30 00:40:51.027050 sshd[6426]: Accepted publickey for core from 10.200.16.10 port 47154 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:51.028449 sshd[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:51.033218 systemd-logind[1684]: New session 20 of user core. Apr 30 00:40:51.037307 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 00:40:51.396369 sshd[6426]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:51.400240 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:47154.service: Deactivated successfully. Apr 30 00:40:51.402822 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:40:51.403984 systemd-logind[1684]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:40:51.405129 systemd-logind[1684]: Removed session 20. Apr 30 00:40:56.479485 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:47156.service - OpenSSH per-connection server daemon (10.200.16.10:47156). 
Apr 30 00:40:56.929782 sshd[6462]: Accepted publickey for core from 10.200.16.10 port 47156 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:40:56.930932 sshd[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:40:56.936755 systemd-logind[1684]: New session 21 of user core. Apr 30 00:40:56.943318 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 00:40:57.313809 sshd[6462]: pam_unix(sshd:session): session closed for user core Apr 30 00:40:57.317405 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:47156.service: Deactivated successfully. Apr 30 00:40:57.319749 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 00:40:57.321438 systemd-logind[1684]: Session 21 logged out. Waiting for processes to exit. Apr 30 00:40:57.322568 systemd-logind[1684]: Removed session 21. Apr 30 00:41:02.404425 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:51174.service - OpenSSH per-connection server daemon (10.200.16.10:51174). Apr 30 00:41:02.850769 sshd[6494]: Accepted publickey for core from 10.200.16.10 port 51174 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:41:02.851599 sshd[6494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:41:02.855755 systemd-logind[1684]: New session 22 of user core. Apr 30 00:41:02.862320 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 00:41:03.232884 sshd[6494]: pam_unix(sshd:session): session closed for user core Apr 30 00:41:03.236437 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:51174.service: Deactivated successfully. Apr 30 00:41:03.238510 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 00:41:03.239278 systemd-logind[1684]: Session 22 logged out. Waiting for processes to exit. Apr 30 00:41:03.240077 systemd-logind[1684]: Removed session 22. 
Apr 30 00:41:08.315407 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:51180.service - OpenSSH per-connection server daemon (10.200.16.10:51180). Apr 30 00:41:08.721689 sshd[6526]: Accepted publickey for core from 10.200.16.10 port 51180 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:41:08.723027 sshd[6526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:41:08.727342 systemd-logind[1684]: New session 23 of user core. Apr 30 00:41:08.735324 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 00:41:09.082713 sshd[6526]: pam_unix(sshd:session): session closed for user core Apr 30 00:41:09.085409 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:51180.service: Deactivated successfully. Apr 30 00:41:09.087866 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 00:41:09.089779 systemd-logind[1684]: Session 23 logged out. Waiting for processes to exit. Apr 30 00:41:09.091286 systemd-logind[1684]: Removed session 23. Apr 30 00:41:14.159684 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:54636.service - OpenSSH per-connection server daemon (10.200.16.10:54636). Apr 30 00:41:14.569647 sshd[6542]: Accepted publickey for core from 10.200.16.10 port 54636 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw Apr 30 00:41:14.570982 sshd[6542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:41:14.574733 systemd-logind[1684]: New session 24 of user core. Apr 30 00:41:14.579300 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 00:41:14.931946 sshd[6542]: pam_unix(sshd:session): session closed for user core Apr 30 00:41:14.936708 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:54636.service: Deactivated successfully. Apr 30 00:41:14.938998 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 00:41:14.940096 systemd-logind[1684]: Session 24 logged out. Waiting for processes to exit. 
Apr 30 00:41:14.941186 systemd-logind[1684]: Removed session 24.
Apr 30 00:41:15.018394 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:54650.service - OpenSSH per-connection server daemon (10.200.16.10:54650).
Apr 30 00:41:15.466395 sshd[6554]: Accepted publickey for core from 10.200.16.10 port 54650 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:15.467707 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:15.472241 systemd-logind[1684]: New session 25 of user core.
Apr 30 00:41:15.477374 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 00:41:15.961331 sshd[6554]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:15.965093 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:54650.service: Deactivated successfully.
Apr 30 00:41:15.966780 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 00:41:15.967473 systemd-logind[1684]: Session 25 logged out. Waiting for processes to exit.
Apr 30 00:41:15.968336 systemd-logind[1684]: Removed session 25.
Apr 30 00:41:16.039515 systemd[1]: Started sshd@23-10.200.20.14:22-10.200.16.10:54658.service - OpenSSH per-connection server daemon (10.200.16.10:54658).
Apr 30 00:41:16.449621 sshd[6565]: Accepted publickey for core from 10.200.16.10 port 54658 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:16.451011 sshd[6565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:16.455354 systemd-logind[1684]: New session 26 of user core.
Apr 30 00:41:16.462290 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 00:41:18.398659 sshd[6565]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:18.402104 systemd[1]: sshd@23-10.200.20.14:22-10.200.16.10:54658.service: Deactivated successfully.
Apr 30 00:41:18.404071 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 00:41:18.405248 systemd-logind[1684]: Session 26 logged out. Waiting for processes to exit.
Apr 30 00:41:18.406125 systemd-logind[1684]: Removed session 26.
Apr 30 00:41:18.480972 systemd[1]: Started sshd@24-10.200.20.14:22-10.200.16.10:54664.service - OpenSSH per-connection server daemon (10.200.16.10:54664).
Apr 30 00:41:18.926003 sshd[6583]: Accepted publickey for core from 10.200.16.10 port 54664 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:18.927406 sshd[6583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:18.932066 systemd-logind[1684]: New session 27 of user core.
Apr 30 00:41:18.937307 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 00:41:19.409657 sshd[6583]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:19.413016 systemd[1]: sshd@24-10.200.20.14:22-10.200.16.10:54664.service: Deactivated successfully.
Apr 30 00:41:19.414832 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 00:41:19.415542 systemd-logind[1684]: Session 27 logged out. Waiting for processes to exit.
Apr 30 00:41:19.416730 systemd-logind[1684]: Removed session 27.
Apr 30 00:41:19.490141 systemd[1]: Started sshd@25-10.200.20.14:22-10.200.16.10:49604.service - OpenSSH per-connection server daemon (10.200.16.10:49604).
Apr 30 00:41:19.931426 sshd[6594]: Accepted publickey for core from 10.200.16.10 port 49604 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:19.932798 sshd[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:19.938029 systemd-logind[1684]: New session 28 of user core.
Apr 30 00:41:19.944299 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 30 00:41:20.310875 sshd[6594]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:20.313548 systemd[1]: sshd@25-10.200.20.14:22-10.200.16.10:49604.service: Deactivated successfully.
Apr 30 00:41:20.315824 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 00:41:20.318564 systemd-logind[1684]: Session 28 logged out. Waiting for processes to exit.
Apr 30 00:41:20.319788 systemd-logind[1684]: Removed session 28.
Apr 30 00:41:25.397452 systemd[1]: Started sshd@26-10.200.20.14:22-10.200.16.10:49620.service - OpenSSH per-connection server daemon (10.200.16.10:49620).
Apr 30 00:41:25.845001 sshd[6627]: Accepted publickey for core from 10.200.16.10 port 49620 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:25.846376 sshd[6627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:25.850719 systemd-logind[1684]: New session 29 of user core.
Apr 30 00:41:25.856331 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 30 00:41:26.223784 sshd[6627]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:26.227007 systemd[1]: sshd@26-10.200.20.14:22-10.200.16.10:49620.service: Deactivated successfully.
Apr 30 00:41:26.230842 systemd[1]: session-29.scope: Deactivated successfully.
Apr 30 00:41:26.231795 systemd-logind[1684]: Session 29 logged out. Waiting for processes to exit.
Apr 30 00:41:26.233359 systemd-logind[1684]: Removed session 29.
Apr 30 00:41:31.302403 systemd[1]: Started sshd@27-10.200.20.14:22-10.200.16.10:55220.service - OpenSSH per-connection server daemon (10.200.16.10:55220).
Apr 30 00:41:31.719354 sshd[6669]: Accepted publickey for core from 10.200.16.10 port 55220 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:31.720739 sshd[6669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:31.725334 systemd-logind[1684]: New session 30 of user core.
Apr 30 00:41:31.733320 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 30 00:41:32.081914 sshd[6669]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:32.085711 systemd[1]: sshd@27-10.200.20.14:22-10.200.16.10:55220.service: Deactivated successfully.
Apr 30 00:41:32.087907 systemd[1]: session-30.scope: Deactivated successfully.
Apr 30 00:41:32.088793 systemd-logind[1684]: Session 30 logged out. Waiting for processes to exit.
Apr 30 00:41:32.089890 systemd-logind[1684]: Removed session 30.
Apr 30 00:41:37.161669 systemd[1]: Started sshd@28-10.200.20.14:22-10.200.16.10:55228.service - OpenSSH per-connection server daemon (10.200.16.10:55228).
Apr 30 00:41:37.613457 sshd[6684]: Accepted publickey for core from 10.200.16.10 port 55228 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:37.614902 sshd[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:37.620227 systemd-logind[1684]: New session 31 of user core.
Apr 30 00:41:37.626311 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 30 00:41:37.997985 sshd[6684]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:38.001364 systemd[1]: sshd@28-10.200.20.14:22-10.200.16.10:55228.service: Deactivated successfully.
Apr 30 00:41:38.002984 systemd[1]: session-31.scope: Deactivated successfully.
Apr 30 00:41:38.003645 systemd-logind[1684]: Session 31 logged out. Waiting for processes to exit.
Apr 30 00:41:38.005330 systemd-logind[1684]: Removed session 31.
Apr 30 00:41:43.074340 systemd[1]: Started sshd@29-10.200.20.14:22-10.200.16.10:44366.service - OpenSSH per-connection server daemon (10.200.16.10:44366).
Apr 30 00:41:43.491437 sshd[6703]: Accepted publickey for core from 10.200.16.10 port 44366 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:43.492802 sshd[6703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:43.497470 systemd-logind[1684]: New session 32 of user core.
Apr 30 00:41:43.502293 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 30 00:41:43.854399 sshd[6703]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:43.858126 systemd[1]: sshd@29-10.200.20.14:22-10.200.16.10:44366.service: Deactivated successfully.
Apr 30 00:41:43.860097 systemd[1]: session-32.scope: Deactivated successfully.
Apr 30 00:41:43.860976 systemd-logind[1684]: Session 32 logged out. Waiting for processes to exit.
Apr 30 00:41:43.862111 systemd-logind[1684]: Removed session 32.
Apr 30 00:41:48.932436 systemd[1]: Started sshd@30-10.200.20.14:22-10.200.16.10:44168.service - OpenSSH per-connection server daemon (10.200.16.10:44168).
Apr 30 00:41:49.352627 sshd[6720]: Accepted publickey for core from 10.200.16.10 port 44168 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:49.354240 sshd[6720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:49.358348 systemd-logind[1684]: New session 33 of user core.
Apr 30 00:41:49.363301 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 30 00:41:49.713412 sshd[6720]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:49.717085 systemd[1]: sshd@30-10.200.20.14:22-10.200.16.10:44168.service: Deactivated successfully.
Apr 30 00:41:49.719835 systemd[1]: session-33.scope: Deactivated successfully.
Apr 30 00:41:49.720964 systemd-logind[1684]: Session 33 logged out. Waiting for processes to exit.
Apr 30 00:41:49.721896 systemd-logind[1684]: Removed session 33.
Apr 30 00:41:54.797329 systemd[1]: Started sshd@31-10.200.20.14:22-10.200.16.10:44176.service - OpenSSH per-connection server daemon (10.200.16.10:44176).
Apr 30 00:41:55.245731 sshd[6771]: Accepted publickey for core from 10.200.16.10 port 44176 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:41:55.247118 sshd[6771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:41:55.251145 systemd-logind[1684]: New session 34 of user core.
Apr 30 00:41:55.255296 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 30 00:41:55.632197 sshd[6771]: pam_unix(sshd:session): session closed for user core
Apr 30 00:41:55.635785 systemd[1]: sshd@31-10.200.20.14:22-10.200.16.10:44176.service: Deactivated successfully.
Apr 30 00:41:55.637768 systemd[1]: session-34.scope: Deactivated successfully.
Apr 30 00:41:55.638627 systemd-logind[1684]: Session 34 logged out. Waiting for processes to exit.
Apr 30 00:41:55.639859 systemd-logind[1684]: Removed session 34.
Apr 30 00:42:00.706863 systemd[1]: Started sshd@32-10.200.20.14:22-10.200.16.10:38728.service - OpenSSH per-connection server daemon (10.200.16.10:38728).
Apr 30 00:42:01.117120 sshd[6807]: Accepted publickey for core from 10.200.16.10 port 38728 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:42:01.118254 sshd[6807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:42:01.122212 systemd-logind[1684]: New session 35 of user core.
Apr 30 00:42:01.130375 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 30 00:42:01.471459 sshd[6807]: pam_unix(sshd:session): session closed for user core
Apr 30 00:42:01.475206 systemd[1]: sshd@32-10.200.20.14:22-10.200.16.10:38728.service: Deactivated successfully.
Apr 30 00:42:01.477590 systemd[1]: session-35.scope: Deactivated successfully.
Apr 30 00:42:01.478476 systemd-logind[1684]: Session 35 logged out. Waiting for processes to exit.
Apr 30 00:42:01.479377 systemd-logind[1684]: Removed session 35.
Apr 30 00:42:06.553772 systemd[1]: Started sshd@33-10.200.20.14:22-10.200.16.10:38742.service - OpenSSH per-connection server daemon (10.200.16.10:38742).
Apr 30 00:42:06.962833 sshd[6820]: Accepted publickey for core from 10.200.16.10 port 38742 ssh2: RSA SHA256:ztpvO7lq7UFkG/gUNSQtdxecuZ/3hQtQILcGfuKW7pw
Apr 30 00:42:06.964585 sshd[6820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:42:06.968743 systemd-logind[1684]: New session 36 of user core.
Apr 30 00:42:06.972410 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 30 00:42:07.320611 sshd[6820]: pam_unix(sshd:session): session closed for user core
Apr 30 00:42:07.324654 systemd[1]: sshd@33-10.200.20.14:22-10.200.16.10:38742.service: Deactivated successfully.
Apr 30 00:42:07.326930 systemd[1]: session-36.scope: Deactivated successfully.
Apr 30 00:42:07.327847 systemd-logind[1684]: Session 36 logged out. Waiting for processes to exit.
Apr 30 00:42:07.328994 systemd-logind[1684]: Removed session 36.