Mar 17 17:50:58.406074 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:50:58.406096 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025
Mar 17 17:50:58.406104 kernel: KASLR enabled
Mar 17 17:50:58.406109 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 17 17:50:58.406117 kernel: printk: bootconsole [pl11] enabled
Mar 17 17:50:58.406122 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:50:58.406129 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Mar 17 17:50:58.406135 kernel: random: crng init done
Mar 17 17:50:58.406140 kernel: secureboot: Secure boot disabled
Mar 17 17:50:58.406146 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:50:58.406152 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 17 17:50:58.406158 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.406163 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.406171 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 17 17:50:58.406178 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.406184 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.406190 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.406197 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.406204 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.406210 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.406216 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 17 17:50:58.406222 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.406228 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 17 17:50:58.406234 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 17 17:50:58.406240 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 17 17:50:58.406246 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 17 17:50:58.406252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 17 17:50:58.406258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 17 17:50:58.406265 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 17 17:50:58.406272 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 17 17:50:58.406278 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 17 17:50:58.406284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 17 17:50:58.406290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 17 17:50:58.406296 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 17 17:50:58.406302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 17 17:50:58.406307 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 17 17:50:58.406313 kernel: Zone ranges:
Mar 17 17:50:58.406342 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 17 17:50:58.406348 kernel: DMA32 empty
Mar 17 17:50:58.406355 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 17:50:58.406365 kernel: Movable zone start for each node
Mar 17 17:50:58.406371 kernel: Early memory node ranges
Mar 17 17:50:58.406378 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 17 17:50:58.406390 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Mar 17 17:50:58.406413 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Mar 17 17:50:58.406422 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Mar 17 17:50:58.406428 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 17 17:50:58.406437 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 17 17:50:58.406443 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 17 17:50:58.406453 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 17 17:50:58.406459 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 17:50:58.406465 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 17 17:50:58.406472 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 17 17:50:58.406478 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:50:58.406484 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:50:58.406491 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:50:58.406497 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 17 17:50:58.406504 kernel: psci: SMC Calling Convention v1.4
Mar 17 17:50:58.406511 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 17 17:50:58.406517 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 17 17:50:58.406523 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:50:58.406530 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:50:58.406536 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:50:58.406543 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:50:58.406549 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:50:58.406556 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:50:58.406562 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:50:58.406568 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:50:58.406577 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:50:58.406583 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:50:58.406589 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 17 17:50:58.406596 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:50:58.406602 kernel: alternatives: applying boot alternatives
Mar 17 17:50:58.406613 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:50:58.406622 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
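The kernel command line above is plain space-separated key=value pairs plus bare flags; dracut echoes the same string again further down. A minimal Python sketch of that split (ignoring the quoting rules the kernel itself supports), with the cmdline shortened here for readability:

    # Illustrative sketch only: split a kernel cmdline like the one logged above
    # into key=value options and bare boolean flags.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")  # split at the first '=' only
            params[key] = value if sep else True    # bare tokens become flags
        return params

    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "root=LABEL=ROOT flatcar.oem.id=azure flatcar.autologin "
        "verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a"
    )
    params = parse_cmdline(cmdline)
    assert params["flatcar.oem.id"] == "azure"
    assert params["root"] == "LABEL=ROOT"
    assert params["flatcar.autologin"] is True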
Mar 17 17:50:58.406633 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:50:58.406642 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:50:58.406651 kernel: Fallback order for Node 0: 0
Mar 17 17:50:58.406659 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 17 17:50:58.406668 kernel: Policy zone: Normal
Mar 17 17:50:58.406676 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:50:58.406683 kernel: software IO TLB: area num 2.
Mar 17 17:50:58.406690 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB)
Mar 17 17:50:58.406705 kernel: Memory: 3983656K/4194160K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 210504K reserved, 0K cma-reserved)
Mar 17 17:50:58.406715 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:50:58.406722 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:50:58.406730 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:50:58.406738 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:50:58.406745 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:50:58.406753 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:50:58.406762 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:50:58.406770 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:50:58.406777 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:50:58.406784 kernel: GICv3: 960 SPIs implemented
Mar 17 17:50:58.406790 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:50:58.406796 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:50:58.406803 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:50:58.406810 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 17 17:50:58.406818 kernel: ITS: No ITS available, not enabling LPIs
Mar 17 17:50:58.406826 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:50:58.406834 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:50:58.406842 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:50:58.406851 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:50:58.406858 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:50:58.406866 kernel: Console: colour dummy device 80x25
Mar 17 17:50:58.406874 kernel: printk: console [tty1] enabled
Mar 17 17:50:58.406882 kernel: ACPI: Core revision 20230628
Mar 17 17:50:58.406890 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:50:58.406897 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:50:58.406904 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:50:58.406910 kernel: landlock: Up and running.
Mar 17 17:50:58.406919 kernel: SELinux: Initializing.
Mar 17 17:50:58.406926 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:50:58.406932 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:50:58.406939 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:50:58.406946 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:50:58.406952 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Mar 17 17:50:58.406959 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Mar 17 17:50:58.406972 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 17 17:50:58.406979 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:50:58.406986 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:50:58.406993 kernel: Remapping and enabling EFI services.
Mar 17 17:50:58.407000 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:50:58.407008 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:50:58.407015 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 17 17:50:58.407022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:50:58.407029 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:50:58.407036 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:50:58.407045 kernel: SMP: Total of 2 processors activated.
Mar 17 17:50:58.407052 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:50:58.407059 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 17 17:50:58.407066 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:50:58.407073 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:50:58.407080 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:50:58.407087 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:50:58.407093 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:50:58.407100 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:50:58.407109 kernel: alternatives: applying system-wide alternatives
Mar 17 17:50:58.407115 kernel: devtmpfs: initialized
Mar 17 17:50:58.407123 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:50:58.407130 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:50:58.407136 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:50:58.407143 kernel: SMBIOS 3.1.0 present.
Mar 17 17:50:58.407150 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 17 17:50:58.407157 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:50:58.407164 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:50:58.407173 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:50:58.407180 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:50:58.407187 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:50:58.407194 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 17 17:50:58.407201 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:50:58.407207 kernel: cpuidle: using governor menu
Mar 17 17:50:58.407214 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:50:58.407221 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:50:58.407228 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:50:58.407236 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:50:58.407243 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:50:58.407250 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:50:58.407257 kernel: Modules: 509280 pages in range for PLT usage
Mar 17 17:50:58.407264 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:50:58.407271 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:50:58.407278 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:50:58.407285 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:50:58.407292 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:50:58.407300 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:50:58.407307 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:50:58.407314 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:50:58.407330 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:50:58.407337 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:50:58.407344 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:50:58.407351 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:50:58.407358 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:50:58.407365 kernel: ACPI: Interpreter enabled
Mar 17 17:50:58.407373 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:50:58.407380 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:50:58.407387 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:50:58.407394 kernel: printk: bootconsole [pl11] disabled
Mar 17 17:50:58.407401 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 17 17:50:58.407408 kernel: iommu: Default domain type: Translated
Mar 17 17:50:58.407415 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:50:58.407422 kernel: efivars: Registered efivars operations
Mar 17 17:50:58.407428 kernel: vgaarb: loaded
Mar 17 17:50:58.407438 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:50:58.407445 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:50:58.407452 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:50:58.407459 kernel: pnp: PnP ACPI init
Mar 17 17:50:58.407466 kernel: pnp: PnP ACPI: found 0 devices
Mar 17 17:50:58.407473 kernel: NET: Registered PF_INET protocol family
Mar 17 17:50:58.407480 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:50:58.407487 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:50:58.407494 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:50:58.407502 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:50:58.407510 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:50:58.407517 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:50:58.407524 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:50:58.407531 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:50:58.407538 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:50:58.407544 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:50:58.407551 kernel: kvm [1]: HYP mode not available
Mar 17 17:50:58.407558 kernel: Initialise system trusted keyrings
Mar 17 17:50:58.407566 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:50:58.407573 kernel: Key type asymmetric registered
Mar 17 17:50:58.407580 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:50:58.407587 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:50:58.407594 kernel: io scheduler mq-deadline registered
Mar 17 17:50:58.407601 kernel: io scheduler kyber registered
Mar 17 17:50:58.407607 kernel: io scheduler bfq registered
Mar 17 17:50:58.407614 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:50:58.407621 kernel: thunder_xcv, ver 1.0
Mar 17 17:50:58.407630 kernel: thunder_bgx, ver 1.0
Mar 17 17:50:58.407636 kernel: nicpf, ver 1.0
Mar 17 17:50:58.407652 kernel: nicvf, ver 1.0
Mar 17 17:50:58.407803 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:50:58.407873 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:50:57 UTC (1742233857)
Mar 17 17:50:58.407882 kernel: efifb: probing for efifb
Mar 17 17:50:58.407890 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 17 17:50:58.407897 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 17 17:50:58.407907 kernel: efifb: scrolling: redraw
Mar 17 17:50:58.407914 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 17:50:58.407921 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 17:50:58.407928 kernel: fb0: EFI VGA frame buffer device
Mar 17 17:50:58.407935 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 17 17:50:58.407941 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:50:58.407948 kernel: No ACPI PMU IRQ for CPU0
Mar 17 17:50:58.407955 kernel: No ACPI PMU IRQ for CPU1
Mar 17 17:50:58.407962 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Mar 17 17:50:58.407971 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:50:58.407978 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:50:58.407985 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:50:58.407992 kernel: Segment Routing with IPv6
Mar 17 17:50:58.407999 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:50:58.408006 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:50:58.408012 kernel: Key type dns_resolver registered
Mar 17 17:50:58.408019 kernel: registered taskstats version 1
Mar 17 17:50:58.408026 kernel: Loading compiled-in X.509 certificates
Mar 17 17:50:58.408035 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51'
Mar 17 17:50:58.408042 kernel: Key type .fscrypt registered
Mar 17 17:50:58.408049 kernel: Key type fscrypt-provisioning registered
Mar 17 17:50:58.408056 kernel: ima: No TPM chip found, activating TPM-bypass!
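The rtc-efi line above pairs the wall-clock time 2025-03-17T17:50:57 UTC with Unix epoch 1742233857, and the conversion checks out; a quick Python verification (illustrative only, not part of the boot):

    # Sanity check of the rtc-efi timestamp logged above.
    from datetime import datetime, timezone

    ts = datetime.fromtimestamp(1742233857, tz=timezone.utc)
    assert ts.isoformat() == "2025-03-17T17:50:57+00:00"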
Mar 17 17:50:58.408063 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:50:58.408070 kernel: ima: No architecture policies found
Mar 17 17:50:58.408077 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:50:58.408084 kernel: clk: Disabling unused clocks
Mar 17 17:50:58.408090 kernel: Freeing unused kernel memory: 38336K
Mar 17 17:50:58.408099 kernel: Run /init as init process
Mar 17 17:50:58.408106 kernel: with arguments:
Mar 17 17:50:58.408113 kernel: /init
Mar 17 17:50:58.408119 kernel: with environment:
Mar 17 17:50:58.408126 kernel: HOME=/
Mar 17 17:50:58.408133 kernel: TERM=linux
Mar 17 17:50:58.408140 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:50:58.408148 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:50:58.408159 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:50:58.408167 systemd[1]: Detected virtualization microsoft.
Mar 17 17:50:58.408174 systemd[1]: Detected architecture arm64.
Mar 17 17:50:58.408181 systemd[1]: Running in initrd.
Mar 17 17:50:58.408189 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:50:58.408196 systemd[1]: Hostname set to .
Mar 17 17:50:58.408203 systemd[1]: Initializing machine ID from random generator.
Mar 17 17:50:58.408211 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:50:58.408220 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:50:58.408227 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:50:58.408235 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:50:58.408243 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:50:58.408251 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:50:58.408259 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:50:58.408268 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:50:58.408277 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:50:58.408285 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:50:58.408292 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:50:58.408300 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:50:58.408307 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:50:58.408326 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:50:58.408348 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:50:58.408356 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:50:58.408366 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:50:58.408374 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:50:58.408381 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:50:58.408389 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:50:58.408397 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:50:58.408404 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:50:58.408412 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:50:58.408419 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:50:58.408427 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:50:58.408436 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:50:58.408443 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:50:58.408451 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:50:58.408494 systemd-journald[217]: Collecting audit messages is disabled.
Mar 17 17:50:58.408515 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:50:58.408524 systemd-journald[217]: Journal started
Mar 17 17:50:58.408542 systemd-journald[217]: Runtime Journal (/run/log/journal/b30ca8da4d1147e9bedd08f3d2f404df) is 8M, max 78.5M, 70.5M free.
Mar 17 17:50:58.406679 systemd-modules-load[219]: Inserted module 'overlay'
Mar 17 17:50:58.432924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:50:58.453774 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:50:58.453827 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:50:58.453841 kernel: Bridge firewalling registered
Mar 17 17:50:58.464280 systemd-modules-load[219]: Inserted module 'br_netfilter'
Mar 17 17:50:58.465238 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:50:58.473194 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:50:58.481947 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:50:58.493189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:50:58.507523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:50:58.545562 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:50:58.562760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:50:58.584620 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:50:58.596495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:50:58.623456 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:50:58.635596 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:50:58.650733 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:50:58.665781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:50:58.693604 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:50:58.710688 dracut-cmdline[251]: dracut-dracut-053
Mar 17 17:50:58.710487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:50:58.742943 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:50:58.723824 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:50:58.793497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:50:58.819660 kernel: SCSI subsystem initialized
Mar 17 17:50:58.828409 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:50:58.831000 systemd-resolved[259]: Positive Trust Anchors:
Mar 17 17:50:58.850732 kernel: iscsi: registered transport (tcp)
Mar 17 17:50:58.836247 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:50:58.836281 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:50:58.923767 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:50:58.923797 kernel: QLogic iSCSI HBA Driver
Mar 17 17:50:58.838652 systemd-resolved[259]: Defaulting to hostname 'linux'.
Mar 17 17:50:58.839578 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:50:58.860375 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:50:58.922476 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:50:58.984373 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:50:58.984405 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:50:58.941556 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:50:58.998996 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:50:59.045338 kernel: raid6: neonx8 gen() 15786 MB/s
Mar 17 17:50:59.065329 kernel: raid6: neonx4 gen() 15821 MB/s
Mar 17 17:50:59.085331 kernel: raid6: neonx2 gen() 13255 MB/s
Mar 17 17:50:59.107328 kernel: raid6: neonx1 gen() 10497 MB/s
Mar 17 17:50:59.127328 kernel: raid6: int64x8 gen() 6796 MB/s
Mar 17 17:50:59.147328 kernel: raid6: int64x4 gen() 7335 MB/s
Mar 17 17:50:59.169329 kernel: raid6: int64x2 gen() 6112 MB/s
Mar 17 17:50:59.193828 kernel: raid6: int64x1 gen() 5061 MB/s
Mar 17 17:50:59.193844 kernel: raid6: using algorithm neonx4 gen() 15821 MB/s
Mar 17 17:50:59.219950 kernel: raid6: .... xor() 12473 MB/s, rmw enabled
Mar 17 17:50:59.219965 kernel: raid6: using neon recovery algorithm
Mar 17 17:50:59.232439 kernel: xor: measuring software checksum speed
Mar 17 17:50:59.232454 kernel: 8regs : 21658 MB/sec
Mar 17 17:50:59.236640 kernel: 32regs : 21658 MB/sec
Mar 17 17:50:59.240676 kernel: arm64_neon : 27841 MB/sec
Mar 17 17:50:59.245814 kernel: xor: using function: arm64_neon (27841 MB/sec)
Mar 17 17:50:59.296356 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:50:59.306468 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:50:59.325502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:50:59.353207 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Mar 17 17:50:59.360338 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:50:59.383492 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:50:59.405483 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Mar 17 17:50:59.440550 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:50:59.461572 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:50:59.496314 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:50:59.519490 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:50:59.556151 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:50:59.569173 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:50:59.595051 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:50:59.635271 kernel: hv_vmbus: Vmbus version:5.3
Mar 17 17:50:59.619708 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:50:59.647136 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:50:59.690308 kernel: hv_vmbus: registering driver hid_hyperv
Mar 17 17:50:59.690343 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 17:50:59.690354 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 17 17:50:59.690362 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 17:50:59.690372 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 17 17:50:59.695053 kernel: hv_vmbus: registering driver hv_storvsc
Mar 17 17:50:59.693271 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:50:59.762111 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 17 17:50:59.762140 kernel: hv_vmbus: registering driver hv_netvsc
Mar 17 17:50:59.762150 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 17 17:50:59.762306 kernel: scsi host0: storvsc_host_t
Mar 17 17:50:59.762521 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 17 17:50:59.762548 kernel: scsi host1: storvsc_host_t
Mar 17 17:50:59.693443 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:50:59.737216 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:50:59.798528 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Mar 17 17:50:59.751967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:50:59.752357 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:50:59.790456 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:50:59.829713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:50:59.841841 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:50:59.516913 kernel: PTP clock support registered
Mar 17 17:50:59.519178 kernel: hv_utils: Registering HyperV Utility Driver
Mar 17 17:50:59.519194 kernel: hv_vmbus: registering driver hv_utils
Mar 17 17:50:59.519202 kernel: hv_utils: Heartbeat IC version 3.0
Mar 17 17:50:59.519211 kernel: hv_utils: Shutdown IC version 3.2
Mar 17 17:50:59.519219 kernel: hv_utils: TimeSync IC version 4.0
Mar 17 17:50:59.519227 systemd-journald[217]: Time jumped backwards, rotating.
Mar 17 17:50:59.528918 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 17 17:50:59.546954 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:50:59.546978 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 17 17:50:59.515015 systemd-resolved[259]: Clock change detected. Flushing caches.
Mar 17 17:50:59.560678 kernel: hv_netvsc 000d3a6c-dc99-000d-3a6c-dc99000d3a6c eth0: VF slot 1 added
Mar 17 17:50:59.561918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:50:59.562044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:50:59.595639 kernel: hv_vmbus: registering driver hv_pci
Mar 17 17:50:59.589407 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:50:59.619869 kernel: hv_pci c240fcaa-f5fa-4938-8a50-b718ff870284: PCI VMBus probing: Using version 0x10004
Mar 17 17:50:59.769946 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 17 17:50:59.770166 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 17 17:50:59.770295 kernel: hv_pci c240fcaa-f5fa-4938-8a50-b718ff870284: PCI host bridge to bus f5fa:00
Mar 17 17:50:59.770403 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 17 17:50:59.770496 kernel: pci_bus f5fa:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 17 17:50:59.770608 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 17 17:50:59.770699 kernel: pci_bus f5fa:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 17 17:50:59.770798 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 17 17:50:59.770899 kernel: pci f5fa:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 17 17:50:59.771065 kernel: pci f5fa:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 17:50:59.772385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:50:59.772400 kernel: pci f5fa:00:02.0: enabling Extended Tags
Mar 17 17:50:59.772532 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 17 17:50:59.772628 kernel: pci f5fa:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f5fa:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 17 17:50:59.772727 kernel: pci_bus f5fa:00: busn_res: [bus 00-ff] end is updated to 00
Mar 17 17:50:59.772812 kernel: pci f5fa:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 17:50:59.613550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:50:59.677749 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:50:59.711500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:50:59.749548 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:50:59.816911 kernel: mlx5_core f5fa:00:02.0: enabling device (0000 -> 0002)
Mar 17 17:51:00.123311 kernel: mlx5_core f5fa:00:02.0: firmware version: 16.31.2424
Mar 17 17:51:00.123462 kernel: hv_netvsc 000d3a6c-dc99-000d-3a6c-dc99000d3a6c eth0: VF registering: eth1
Mar 17 17:51:00.123705 kernel: mlx5_core f5fa:00:02.0 eth1: joined to eth0
Mar 17 17:51:00.123825 kernel: mlx5_core f5fa:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 17 17:51:00.132305 kernel: mlx5_core f5fa:00:02.0 enP62970s1: renamed from eth1
Mar 17 17:51:00.368744 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 17 17:51:00.480797 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 17 17:51:00.502319 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (502)
Mar 17 17:51:00.517551 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 17 17:51:00.546138 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (490)
Mar 17 17:51:00.564183 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 17 17:51:00.584838 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 17 17:51:00.609525 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:51:00.639271 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:51:01.655429 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:51:01.655705 disk-uuid[605]: The operation has completed successfully.
Mar 17 17:51:01.710414 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:51:01.710506 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:51:01.761399 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:51:01.775904 sh[691]: Success
Mar 17 17:51:01.805273 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:51:02.047565 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:51:02.066411 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:51:02.072619 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:51:02.124273 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13
Mar 17 17:51:02.124326 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:51:02.124336 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:51:02.132933 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:51:02.137799 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:51:02.439783 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:51:02.446231 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:51:02.467503 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:51:02.475509 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:51:02.517449 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:02.517505 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:51:02.522352 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:51:02.549868 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:51:02.566539 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:51:02.573288 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:02.580347 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:51:02.590216 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:51:02.619435 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:51:02.630455 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:51:02.671726 systemd-networkd[876]: lo: Link UP
Mar 17 17:51:02.671734 systemd-networkd[876]: lo: Gained carrier
Mar 17 17:51:02.676588 systemd-networkd[876]: Enumeration completed
Mar 17 17:51:02.676796 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:51:02.680701 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:02.680705 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:51:02.692322 systemd[1]: Reached target network.target - Network.
Mar 17 17:51:02.755286 kernel: mlx5_core f5fa:00:02.0 enP62970s1: Link up
Mar 17 17:51:02.838282 kernel: hv_netvsc 000d3a6c-dc99-000d-3a6c-dc99000d3a6c eth0: Data path switched to VF: enP62970s1
Mar 17 17:51:02.837996 systemd-networkd[876]: enP62970s1: Link UP
Mar 17 17:51:02.838079 systemd-networkd[876]: eth0: Link UP
Mar 17 17:51:02.838166 systemd-networkd[876]: eth0: Gained carrier
Mar 17 17:51:02.838174 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:02.863516 systemd-networkd[876]: enP62970s1: Gained carrier
Mar 17 17:51:02.877291 systemd-networkd[876]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 17 17:51:03.541460 ignition[875]: Ignition 2.20.0
Mar 17 17:51:03.541471 ignition[875]: Stage: fetch-offline
Mar 17 17:51:03.545723 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:51:03.541506 ignition[875]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:03.541513 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:03.541606 ignition[875]: parsed url from cmdline: ""
Mar 17 17:51:03.541609 ignition[875]: no config URL provided
Mar 17 17:51:03.541614 ignition[875]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:51:03.541621 ignition[875]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:51:03.579536 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:51:03.541626 ignition[875]: failed to fetch config: resource requires networking
Mar 17 17:51:03.541794 ignition[875]: Ignition finished successfully
Mar 17 17:51:03.600217 ignition[887]: Ignition 2.20.0
Mar 17 17:51:03.600224 ignition[887]: Stage: fetch
Mar 17 17:51:03.601066 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:03.601077 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:03.601178 ignition[887]: parsed url from cmdline: ""
Mar 17 17:51:03.601184 ignition[887]: no config URL provided
Mar 17 17:51:03.601194 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:51:03.601202 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:51:03.601230 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 17 17:51:03.712992 ignition[887]: GET result: OK
Mar 17 17:51:03.713160 ignition[887]: config has been read from IMDS userdata
Mar 17 17:51:03.713214 ignition[887]: parsing config with SHA512: b0ea304847c77844060331966254ceac6b6a8e6d062136ff7d1309c587e31687064d7d05c64787a266d74629acc34b8d7b952c08929b7f5bdb9ebc4b713279b9
Mar 17 17:51:03.718352 unknown[887]: fetched base config from "system"
Mar 17 17:51:03.718782 ignition[887]: fetch: fetch complete
Mar 17 17:51:03.718360 unknown[887]: fetched base config from "system"
Mar 17 17:51:03.718786 ignition[887]: fetch: fetch passed
Mar 17 17:51:03.718365 unknown[887]: fetched user config from "azure"
Mar 17 17:51:03.718830 ignition[887]: Ignition finished successfully
Mar 17 17:51:03.722632 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:51:03.741962 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
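In the fetch stage above, Ignition reads its config from the Azure IMDS userData endpoint and logs a SHA512 of the config it is about to parse. A rough Python sketch of an equivalent request; the Metadata: true header and the base64 decoding step are assumptions about the IMDS contract, not shown in the log:

    # Minimal sketch of the IMDS userData fetch logged above (assumptions noted).
    import base64
    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    # Assumption: Azure IMDS requires the "Metadata: true" header.
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        # Assumption: userData is returned base64-encoded.
        config = base64.b64decode(resp.read())

    # Mirror the "parsing config with SHA512: ..." log line.
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())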
Mar 17 17:51:03.772737 ignition[893]: Ignition 2.20.0
Mar 17 17:51:03.772753 ignition[893]: Stage: kargs
Mar 17 17:51:03.777699 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:51:03.772941 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:03.772956 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:03.774234 ignition[893]: kargs: kargs passed
Mar 17 17:51:03.774434 ignition[893]: Ignition finished successfully
Mar 17 17:51:03.806488 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:51:03.829704 ignition[899]: Ignition 2.20.0
Mar 17 17:51:03.829718 ignition[899]: Stage: disks
Mar 17 17:51:03.829882 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:03.836658 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:51:03.829892 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:03.847398 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:51:03.830831 ignition[899]: disks: disks passed
Mar 17 17:51:03.859657 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:51:03.830876 ignition[899]: Ignition finished successfully
Mar 17 17:51:03.873368 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:51:03.887794 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:51:03.897601 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:51:03.926449 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:51:04.013432 systemd-networkd[876]: eth0: Gained IPv6LL
Mar 17 17:51:04.034314 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 17 17:51:04.043936 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:51:04.061459 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:51:04.124184 kernel: EXT4-fs (sda9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none.
Mar 17 17:51:04.124577 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:51:04.134035 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:51:04.183356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:51:04.191401 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:51:04.205808 systemd-networkd[876]: enP62970s1: Gained IPv6LL
Mar 17 17:51:04.206522 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:51:04.252336 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (918)
Mar 17 17:51:04.252358 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:04.217410 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:51:04.284463 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:51:04.284487 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:51:04.217451 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:51:04.237574 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:51:04.309559 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:51:04.285518 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:51:04.309557 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:51:04.790160 coreos-metadata[920]: Mar 17 17:51:04.790 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 17 17:51:04.799184 coreos-metadata[920]: Mar 17 17:51:04.798 INFO Fetch successful
Mar 17 17:51:04.799184 coreos-metadata[920]: Mar 17 17:51:04.799 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 17 17:51:04.816840 coreos-metadata[920]: Mar 17 17:51:04.816 INFO Fetch successful
Mar 17 17:51:04.832068 coreos-metadata[920]: Mar 17 17:51:04.830 INFO wrote hostname ci-4230.1.0-a-3f2b416a0a to /sysroot/etc/hostname
Mar 17 17:51:04.841570 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:51:05.061613 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:51:05.087189 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:51:05.097383 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:51:05.124299 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:51:06.006986 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:51:06.025421 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:51:06.036442 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:51:06.058342 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:06.053711 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:51:06.081220 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:51:06.093707 ignition[1037]: INFO : Ignition 2.20.0
Mar 17 17:51:06.093707 ignition[1037]: INFO : Stage: mount
Mar 17 17:51:06.093707 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:06.093707 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:06.093707 ignition[1037]: INFO : mount: mount passed
Mar 17 17:51:06.126375 ignition[1037]: INFO : Ignition finished successfully
Mar 17 17:51:06.095038 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:51:06.126418 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:51:06.147479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:51:06.177295 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1049)
Mar 17 17:51:06.192426 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:06.192482 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:51:06.197142 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:51:06.204273 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:51:06.206008 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:51:06.233212 ignition[1066]: INFO : Ignition 2.20.0
Mar 17 17:51:06.233212 ignition[1066]: INFO : Stage: files
Mar 17 17:51:06.242941 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:06.242941 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:06.242941 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:51:06.267180 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:51:06.267180 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:51:06.350722 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:51:06.359030 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:51:06.359030 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:51:06.351794 unknown[1066]: wrote ssh authorized keys file for user: core
Mar 17 17:51:06.381995 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:51:06.381995 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:51:06.451346 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:51:06.794324 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:51:06.805901 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:51:06.805901 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:51:07.232944 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:51:07.322619 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:51:07.333195 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 17:51:07.700584 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:51:07.933675 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:51:07.933675 ignition[1066]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:51:07.958919 ignition[1066]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:51:07.958919 ignition[1066]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:51:07.958919 ignition[1066]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:51:07.958919 ignition[1066]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:51:07.958919 ignition[1066]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:51:07.958919 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:51:07.958919 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:51:07.958919 ignition[1066]: INFO : files: files passed
Mar 17 17:51:07.958919 ignition[1066]: INFO : Ignition finished successfully
Mar 17 17:51:07.958571 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:51:08.017706 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:51:08.028478 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:51:08.055714 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:51:08.112305 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:51:08.112305 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:51:08.055809 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:51:08.143773 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:51:08.090617 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:51:08.107211 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:51:08.153498 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:51:08.191621 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:51:08.191739 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:51:08.204903 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:51:08.218173 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:51:08.230399 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:51:08.246484 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:51:08.271585 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:51:08.289507 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:51:08.310506 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:51:08.318197 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:51:08.332183 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:51:08.344748 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:51:08.344823 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:51:08.362429 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:51:08.369025 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:51:08.381793 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:51:08.394264 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:51:08.406217 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:51:08.419457 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:51:08.432823 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:51:08.446664 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:51:08.458913 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:51:08.472346 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:51:08.482846 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:51:08.482932 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:51:08.498913 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:51:08.505807 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:51:08.518437 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:51:08.518481 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:51:08.531684 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:51:08.531763 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 17 17:51:08.550893 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:51:08.550943 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:51:08.558700 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:51:08.558741 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:51:08.640004 ignition[1119]: INFO : Ignition 2.20.0 Mar 17 17:51:08.640004 ignition[1119]: INFO : Stage: umount Mar 17 17:51:08.640004 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:51:08.640004 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 17:51:08.640004 ignition[1119]: INFO : umount: umount passed Mar 17 17:51:08.640004 ignition[1119]: INFO : Ignition finished successfully Mar 17 17:51:08.570518 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 17:51:08.570561 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:51:08.604433 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:51:08.624776 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:51:08.624854 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:51:08.635408 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:51:08.645463 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:51:08.645538 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:51:08.656696 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:51:08.656749 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:51:08.670598 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:51:08.672860 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:51:08.694966 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:51:08.695135 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:51:08.705705 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:51:08.705764 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:51:08.717434 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:51:08.717485 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:51:08.730664 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 17:51:08.730719 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 17:51:08.737645 systemd[1]: Stopped target network.target - Network. Mar 17 17:51:08.752061 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:51:08.752129 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:51:08.773039 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:51:08.779198 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:51:08.785814 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:51:08.793381 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:51:08.804461 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:51:08.815844 systemd[1]: iscsid.socket: Deactivated successfully. 
Mar 17 17:51:08.815895 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:51:08.834875 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:51:08.834911 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:51:08.847354 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:51:08.847406 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:51:08.862892 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:51:08.862944 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:51:08.877509 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:51:08.889221 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:51:08.908585 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:51:08.909876 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:51:08.910048 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:51:09.178413 kernel: hv_netvsc 000d3a6c-dc99-000d-3a6c-dc99000d3a6c eth0: Data path switched from VF: enP62970s1 Mar 17 17:51:08.928339 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 17:51:08.928618 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:51:08.928729 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:51:08.946142 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 17:51:08.947284 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:51:08.947342 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:51:08.977449 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:51:08.983551 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:51:08.983616 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:51:08.990998 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:51:08.991044 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:51:09.009476 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:51:09.009522 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:51:09.016156 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:51:09.016200 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:51:09.039028 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:51:09.050746 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 17:51:09.050815 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:51:09.083345 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:51:09.083491 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:51:09.101801 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:51:09.101842 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:51:09.113746 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:51:09.113787 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 17 17:51:09.126129 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:51:09.126190 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:51:09.144883 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:51:09.144946 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:51:09.172302 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:51:09.172370 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:51:09.215480 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:51:09.229381 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:51:09.229454 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:51:09.248916 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:51:09.248968 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:51:09.263209 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 17:51:09.263314 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:51:09.508536 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Mar 17 17:51:09.263581 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:51:09.263678 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:51:09.320191 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:51:09.320548 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:51:09.333408 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:51:09.333494 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:51:09.339863 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:51:09.352802 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:51:09.352890 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:51:09.387427 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:51:09.407234 systemd[1]: Switching root. Mar 17 17:51:09.571880 systemd-journald[217]: Journal stopped Mar 17 17:51:13.632997 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:51:13.633020 kernel: SELinux: policy capability open_perms=1 Mar 17 17:51:13.633032 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:51:13.633040 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:51:13.633049 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:51:13.633057 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:51:13.633066 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:51:13.633073 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:51:13.633081 kernel: audit: type=1403 audit(1742233870.430:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:51:13.633091 systemd[1]: Successfully loaded SELinux policy in 144.244ms. Mar 17 17:51:13.633102 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.738ms. 
Mar 17 17:51:13.633112 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:51:13.633120 systemd[1]: Detected virtualization microsoft. Mar 17 17:51:13.633129 systemd[1]: Detected architecture arm64. Mar 17 17:51:13.633137 systemd[1]: Detected first boot. Mar 17 17:51:13.633148 systemd[1]: Hostname set to <ci-4230.1.0-a-3f2b416a0a>. Mar 17 17:51:13.633157 systemd[1]: Initializing machine ID from random generator. Mar 17 17:51:13.633166 zram_generator::config[1162]: No configuration found. Mar 17 17:51:13.633175 kernel: NET: Registered PF_VSOCK protocol family Mar 17 17:51:13.633183 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:51:13.633192 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 17:51:13.633201 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:51:13.633211 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:51:13.633221 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:51:13.633230 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:51:13.633239 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:51:13.633248 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:51:13.633373 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:51:13.633383 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:51:13.633394 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:51:13.633404 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:51:13.633413 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:51:13.633422 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:51:13.633431 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:51:13.633440 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:51:13.633449 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:51:13.633458 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:51:13.633469 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:51:13.633478 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 17:51:13.633487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:51:13.633498 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:51:13.633507 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:51:13.633516 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:51:13.633525 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:51:13.633534 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:51:13.633545 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:51:13.633554 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:51:13.633563 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:51:13.633572 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:51:13.633581 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:51:13.633590 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 17:51:13.633601 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:51:13.633611 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:51:13.633620 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:51:13.633629 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:51:13.633638 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:51:13.633649 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:51:13.633658 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:51:13.633669 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:51:13.633678 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:51:13.633687 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:51:13.633697 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:51:13.633706 systemd[1]: Reached target machines.target - Containers. Mar 17 17:51:13.633716 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:51:13.633725 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:51:13.633735 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:51:13.633745 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:51:13.633754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:51:13.633764 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:51:13.633773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:51:13.633782 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:51:13.633791 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:51:13.633801 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:51:13.633810 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:51:13.633821 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:51:13.633830 kernel: fuse: init (API version 7.39) Mar 17 17:51:13.633839 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:51:13.633849 systemd[1]: Stopped systemd-fsck-usr.service. 
Mar 17 17:51:13.633858 kernel: ACPI: bus type drm_connector registered Mar 17 17:51:13.633866 kernel: loop: module loaded Mar 17 17:51:13.633875 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:51:13.633884 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:51:13.633893 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:51:13.633904 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:51:13.633913 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:51:13.633943 systemd-journald[1266]: Collecting audit messages is disabled. Mar 17 17:51:13.633964 systemd-journald[1266]: Journal started Mar 17 17:51:13.633991 systemd-journald[1266]: Runtime Journal (/run/log/journal/bd5e96aa73d1483e94ce9b0a46e234b7) is 8M, max 78.5M, 70.5M free. Mar 17 17:51:13.639300 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 17:51:12.635852 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:51:12.643186 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 17 17:51:12.643558 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:51:12.643870 systemd[1]: systemd-journald.service: Consumed 3.621s CPU time. Mar 17 17:51:13.673871 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:51:13.684392 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:51:13.684466 systemd[1]: Stopped verity-setup.service. Mar 17 17:51:13.706926 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:51:13.707772 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:51:13.714473 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:51:13.721933 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:51:13.729735 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:51:13.736567 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:51:13.743503 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:51:13.749118 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:51:13.756437 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:51:13.764383 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:51:13.764555 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:51:13.772302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:51:13.772455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:51:13.779716 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:51:13.779867 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:51:13.786445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:51:13.786593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:51:13.793948 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Mar 17 17:51:13.794095 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:51:13.800940 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:51:13.801096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:51:13.808175 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:51:13.815460 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:51:13.823940 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:51:13.832118 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 17:51:13.840955 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:51:13.857665 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:51:13.869453 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:51:13.877358 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:51:13.884384 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:51:13.884423 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:51:13.891683 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 17 17:51:13.902399 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:51:13.910413 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:51:13.919710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:51:13.940399 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:51:13.948021 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:51:13.954835 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:51:13.956411 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:51:13.963135 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:51:13.966481 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:51:13.977483 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:51:13.993755 systemd-journald[1266]: Time spent on flushing to /var/log/journal/bd5e96aa73d1483e94ce9b0a46e234b7 is 73.805ms for 914 entries. Mar 17 17:51:13.993755 systemd-journald[1266]: System Journal (/var/log/journal/bd5e96aa73d1483e94ce9b0a46e234b7) is 11.8M, max 2.6G, 2.6G free. Mar 17 17:51:14.146051 systemd-journald[1266]: Received client request to flush runtime journal. Mar 17 17:51:14.146106 systemd-journald[1266]: /var/log/journal/bd5e96aa73d1483e94ce9b0a46e234b7/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Mar 17 17:51:14.146135 systemd-journald[1266]: Rotating system journal. Mar 17 17:51:14.146156 kernel: loop0: detected capacity change from 0 to 28720 Mar 17 17:51:13.995461 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Mar 17 17:51:14.012514 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:51:14.022469 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:51:14.049308 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:51:14.058505 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:51:14.075047 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:51:14.082808 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:51:14.098825 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:51:14.113544 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 17 17:51:14.122024 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 17:51:14.147121 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:51:14.180298 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:51:14.181625 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 17:51:14.267372 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:51:14.283448 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:51:14.357144 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Mar 17 17:51:14.357164 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Mar 17 17:51:14.361271 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:51:14.463279 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:51:14.530427 kernel: loop1: detected capacity change from 0 to 113512 Mar 17 17:51:14.843283 kernel: loop2: detected capacity change from 0 to 123192 Mar 17 17:51:15.139827 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:51:15.153417 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:51:15.176721 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Mar 17 17:51:15.222287 kernel: loop3: detected capacity change from 0 to 194096 Mar 17 17:51:15.252277 kernel: loop4: detected capacity change from 0 to 28720 Mar 17 17:51:15.263299 kernel: loop5: detected capacity change from 0 to 113512 Mar 17 17:51:15.275274 kernel: loop6: detected capacity change from 0 to 123192 Mar 17 17:51:15.285270 kernel: loop7: detected capacity change from 0 to 194096 Mar 17 17:51:15.290180 (sd-merge)[1330]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 17 17:51:15.290635 (sd-merge)[1330]: Merged extensions into '/usr'. Mar 17 17:51:15.294027 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:51:15.294042 systemd[1]: Reloading... Mar 17 17:51:15.426277 zram_generator::config[1364]: No configuration found. 
Mar 17 17:51:15.484900 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:51:15.538348 kernel: hv_vmbus: registering driver hv_balloon Mar 17 17:51:15.546351 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 17 17:51:15.546433 kernel: hv_vmbus: registering driver hyperv_fb Mar 17 17:51:15.546467 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 17 17:51:15.563377 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 17 17:51:15.571486 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 17 17:51:15.578346 kernel: Console: switching to colour dummy device 80x25 Mar 17 17:51:15.584281 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 17:51:15.629638 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:51:15.676480 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1371) Mar 17 17:51:15.755600 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 17 17:51:15.755792 systemd[1]: Reloading finished in 461 ms. Mar 17 17:51:15.773234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:51:15.782280 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:51:15.813636 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:51:15.836569 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 17 17:51:15.854412 systemd[1]: Starting ensure-sysext.service... Mar 17 17:51:15.861458 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:51:15.869906 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:51:15.880448 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:51:15.898386 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:51:15.909516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:51:15.919741 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:51:15.919946 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:51:15.922413 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:51:15.922765 systemd-tmpfiles[1516]: ACLs are not supported, ignoring. Mar 17 17:51:15.922884 systemd-tmpfiles[1516]: ACLs are not supported, ignoring. Mar 17 17:51:15.924867 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:51:15.926552 systemd-tmpfiles[1516]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:51:15.926670 systemd-tmpfiles[1516]: Skipping /boot Mar 17 17:51:15.935136 systemd-tmpfiles[1516]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:51:15.935635 systemd-tmpfiles[1516]: Skipping /boot Mar 17 17:51:15.938553 lvm[1513]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 17 17:51:15.940482 systemd[1]: Reload requested from client PID 1512 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:51:15.940502 systemd[1]: Reloading... Mar 17 17:51:16.016348 zram_generator::config[1552]: No configuration found. Mar 17 17:51:16.145841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:51:16.260511 systemd[1]: Reloading finished in 319 ms. Mar 17 17:51:16.287377 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:51:16.295630 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:51:16.303914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:51:16.317994 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:51:16.329479 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:51:16.336217 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:51:16.345532 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:51:16.356540 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:51:16.357693 lvm[1615]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:51:16.369530 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:51:16.378934 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:51:16.390151 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:51:16.409165 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:51:16.434821 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:51:16.444234 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:51:16.457914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:51:16.463649 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:51:16.472859 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:51:16.484406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:51:16.494013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:51:16.494166 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:51:16.498949 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:51:16.511826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:51:16.513367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:51:16.522162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:51:16.522349 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Mar 17 17:51:16.528772 augenrules[1652]: No rules Mar 17 17:51:16.530680 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:51:16.530879 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:51:16.537265 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:51:16.537420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:51:16.558587 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:51:16.566703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:51:16.572601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:51:16.589102 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:51:16.599632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:51:16.610531 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:51:16.619702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:51:16.619842 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:51:16.619991 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:51:16.627364 augenrules[1661]: /sbin/augenrules: No change Mar 17 17:51:16.628653 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:51:16.628867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:51:16.638690 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:51:16.638928 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:51:16.647543 augenrules[1682]: No rules Mar 17 17:51:16.648247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:51:16.649174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:51:16.657536 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:51:16.657716 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:51:16.665163 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:51:16.665341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:51:16.676209 systemd[1]: Finished ensure-sysext.service. Mar 17 17:51:16.685599 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:51:16.685670 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:51:16.718068 systemd-resolved[1618]: Positive Trust Anchors:
Mar 17 17:51:16.718091 systemd-resolved[1618]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:51:16.718123 systemd-resolved[1618]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:51:16.737863 systemd-resolved[1618]: Using system hostname 'ci-4230.1.0-a-3f2b416a0a'. Mar 17 17:51:16.739417 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:51:16.746096 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:51:16.795323 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:51:16.803498 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:51:16.841742 systemd-networkd[1515]: lo: Link UP Mar 17 17:51:16.841754 systemd-networkd[1515]: lo: Gained carrier Mar 17 17:51:16.843768 systemd-networkd[1515]: Enumeration completed Mar 17 17:51:16.843871 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:51:16.849617 systemd-networkd[1515]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:51:16.849623 systemd-networkd[1515]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:51:16.850474 systemd[1]: Reached target network.target - Network. Mar 17 17:51:16.863410 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 17:51:16.871303 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:51:16.914273 kernel: mlx5_core f5fa:00:02.0 enP62970s1: Link up Mar 17 17:51:16.974323 kernel: hv_netvsc 000d3a6c-dc99-000d-3a6c-dc99000d3a6c eth0: Data path switched to VF: enP62970s1 Mar 17 17:51:16.976180 systemd-networkd[1515]: enP62970s1: Link UP Mar 17 17:51:16.976299 systemd-networkd[1515]: eth0: Link UP Mar 17 17:51:16.976302 systemd-networkd[1515]: eth0: Gained carrier Mar 17 17:51:16.976318 systemd-networkd[1515]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:51:16.978169 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:51:16.988687 systemd-networkd[1515]: enP62970s1: Gained carrier Mar 17 17:51:16.992288 systemd-networkd[1515]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 17:51:18.477381 systemd-networkd[1515]: enP62970s1: Gained IPv6LL Mar 17 17:51:18.605409 systemd-networkd[1515]: eth0: Gained IPv6LL Mar 17 17:51:18.608286 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:51:18.616288 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:51:19.289360 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:51:19.306388 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:51:19.319396 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:51:19.326653 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:51:19.333493 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:51:19.339574 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:51:19.346718 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:51:19.354474 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:51:19.360758 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:51:19.368497 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:51:19.376147 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:51:19.376186 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:51:19.381511 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:51:19.387713 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:51:19.395886 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:51:19.403461 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:51:19.411084 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:51:19.418202 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:51:19.432857 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:51:19.439032 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:51:19.446657 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:51:19.452882 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:51:19.458768 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:51:19.464286 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:51:19.464312 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:51:19.475348 systemd[1]: Starting chronyd.service - NTP client/server... Mar 17 17:51:19.484376 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:51:19.496418 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:51:19.510914 (chronyd)[1704]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 17 17:51:19.517304 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:51:19.525439 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:51:19.535022 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 17 17:51:19.548213 chronyd[1714]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 17 17:51:19.548482 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:51:19.548521 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 17 17:51:19.550463 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 17 17:51:19.556841 jq[1711]: false Mar 17 17:51:19.557586 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 17 17:51:19.559093 KVP[1716]: KVP starting; pid is:1716 Mar 17 17:51:19.559450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:51:19.574285 kernel: hv_utils: KVP IC version 4.0 Mar 17 17:51:19.574374 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:51:19.580146 KVP[1716]: KVP LIC Version: 3.1 Mar 17 17:51:19.586552 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:51:19.598380 extend-filesystems[1712]: Found loop4 Mar 17 17:51:19.598380 extend-filesystems[1712]: Found loop5 Mar 17 17:51:19.598380 extend-filesystems[1712]: Found loop6 Mar 17 17:51:19.598380 extend-filesystems[1712]: Found loop7 Mar 17 17:51:19.598380 extend-filesystems[1712]: Found sda Mar 17 17:51:19.598380 extend-filesystems[1712]: Found sda1 Mar 17 17:51:19.598380 extend-filesystems[1712]: Found sda2 Mar 17 17:51:19.598380 extend-filesystems[1712]: Found sda3 Mar 17 17:51:19.598380 extend-filesystems[1712]: Found usr Mar 17 17:51:19.598380 extend-filesystems[1712]: Found sda4 Mar 17 17:51:19.598380 extend-filesystems[1712]: Found sda6 Mar 17 17:51:19.598380 extend-filesystems[1712]: Found sda7 Mar 17 17:51:19.598380 extend-filesystems[1712]: Found sda9 Mar 17 17:51:19.598380 extend-filesystems[1712]: Checking size of /dev/sda9 Mar 17 17:51:19.808784 coreos-metadata[1706]: Mar 17 17:51:19.723 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 17:51:19.808784 coreos-metadata[1706]: Mar 17 17:51:19.723 INFO Fetch successful Mar 17 17:51:19.808784 coreos-metadata[1706]: Mar 17 17:51:19.723 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 17 17:51:19.808784 coreos-metadata[1706]: Mar 17 17:51:19.724 INFO Fetch successful Mar 17 17:51:19.808784 coreos-metadata[1706]: Mar 17 17:51:19.730 INFO Fetching http://168.63.129.16/machine/6b40fa52-bfe7-4969-a64b-8c53127f42bf/71eb08ba%2De4d0%2D4e5e%2Db24a%2D2b786274666b.%5Fci%2D4230.1.0%2Da%2D3f2b416a0a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 17 17:51:19.808784 coreos-metadata[1706]: Mar 17 17:51:19.730 INFO Fetch successful Mar 17 17:51:19.808784 coreos-metadata[1706]: Mar 17 17:51:19.730 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 17 17:51:19.808784 coreos-metadata[1706]: Mar 17 17:51:19.746 INFO Fetch successful Mar 17 17:51:19.809032 extend-filesystems[1712]: Old size kept for /dev/sda9 Mar 17 17:51:19.809032 extend-filesystems[1712]: Found sr0 Mar 17 17:51:19.599414 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 17 17:51:19.832663 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1758) Mar 17 17:51:19.605731 chronyd[1714]: Timezone right/UTC failed leap second check, ignoring Mar 17 17:51:19.621790 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:51:19.605917 chronyd[1714]: Loaded seccomp filter (level 2) Mar 17 17:51:19.633447 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:51:19.609103 dbus-daemon[1707]: [system] SELinux support is enabled Mar 17 17:51:19.657448 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:51:19.667206 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:51:19.838142 update_engine[1742]: I20250317 17:51:19.797401 1742 main.cc:92] Flatcar Update Engine starting Mar 17 17:51:19.838142 update_engine[1742]: I20250317 17:51:19.817874 1742 update_check_scheduler.cc:74] Next update check in 9m9s Mar 17 17:51:19.667756 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:51:19.839613 jq[1745]: true Mar 17 17:51:19.678430 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:51:19.695381 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:51:19.707127 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:51:19.720984 systemd[1]: Started chronyd.service - NTP client/server. Mar 17 17:51:19.738620 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:51:19.738842 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:51:19.739120 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:51:19.739290 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:51:19.762755 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:51:19.762975 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:51:19.773841 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:51:19.824721 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:51:19.824930 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:51:19.861750 systemd-logind[1736]: New seat seat0. Mar 17 17:51:19.863550 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:51:19.863590 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:51:19.878803 (ntainerd)[1770]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:51:19.880616 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:51:19.880889 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 17 17:51:19.889641 jq[1769]: true Mar 17 17:51:19.881041 systemd-logind[1736]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Mar 17 17:51:19.894398 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:51:19.916715 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:51:19.930893 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:51:19.938358 tar[1765]: linux-arm64/helm Mar 17 17:51:19.945325 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:51:19.953667 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:51:20.115307 bash[1845]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:51:20.116020 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:51:20.141212 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:51:20.219471 locksmithd[1818]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:51:20.485999 tar[1765]: linux-arm64/LICENSE Mar 17 17:51:20.486370 tar[1765]: linux-arm64/README.md Mar 17 17:51:20.497336 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:51:20.586959 containerd[1770]: time="2025-03-17T17:51:20.586873520Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:51:20.645282 containerd[1770]: time="2025-03-17T17:51:20.644495280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:20.648896 containerd[1770]: time="2025-03-17T17:51:20.648856080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:51:20.649112 containerd[1770]: time="2025-03-17T17:51:20.649093360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:51:20.649181 containerd[1770]: time="2025-03-17T17:51:20.649168360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:51:20.650007 containerd[1770]: time="2025-03-17T17:51:20.649985920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:51:20.650111 containerd[1770]: time="2025-03-17T17:51:20.650094960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:20.650287 containerd[1770]: time="2025-03-17T17:51:20.650268120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:51:20.650378 containerd[1770]: time="2025-03-17T17:51:20.650363360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:20.651118 containerd[1770]: time="2025-03-17T17:51:20.651053960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:51:20.651216 containerd[1770]: time="2025-03-17T17:51:20.651201040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:20.652231 containerd[1770]: time="2025-03-17T17:51:20.651925360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:51:20.652231 containerd[1770]: time="2025-03-17T17:51:20.651964200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:20.652231 containerd[1770]: time="2025-03-17T17:51:20.652096720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:20.652502 containerd[1770]: time="2025-03-17T17:51:20.652481680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:20.653309 containerd[1770]: time="2025-03-17T17:51:20.652721040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:51:20.653402 containerd[1770]: time="2025-03-17T17:51:20.653383880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:51:20.653586 containerd[1770]: time="2025-03-17T17:51:20.653550720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:51:20.653724 containerd[1770]: time="2025-03-17T17:51:20.653699320Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:51:20.668633 containerd[1770]: time="2025-03-17T17:51:20.668591600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:51:20.669480 containerd[1770]: time="2025-03-17T17:51:20.669456680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:51:20.669575 containerd[1770]: time="2025-03-17T17:51:20.669560840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:51:20.669636 containerd[1770]: time="2025-03-17T17:51:20.669623440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:51:20.669719 containerd[1770]: time="2025-03-17T17:51:20.669706120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.670551640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.670830440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.670940880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.670962400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.670988760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.671003640Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.671015960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.671028200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.671041320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.671056080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.671068920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.671080520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.671092320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:51:20.672961 containerd[1770]: time="2025-03-17T17:51:20.671112720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671127200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671140400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671153200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671166040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671178080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671189360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671201840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671217040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..."
type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671231400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671242440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671282760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671298800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671315520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671337560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673331 containerd[1770]: time="2025-03-17T17:51:20.671351240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673576 containerd[1770]: time="2025-03-17T17:51:20.671362560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:51:20.673576 containerd[1770]: time="2025-03-17T17:51:20.671430560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:51:20.673576 containerd[1770]: time="2025-03-17T17:51:20.671450920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:51:20.673576 containerd[1770]: time="2025-03-17T17:51:20.671460960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:51:20.673576 containerd[1770]: time="2025-03-17T17:51:20.671473360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:51:20.673576 containerd[1770]: time="2025-03-17T17:51:20.671483200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:51:20.673576 containerd[1770]: time="2025-03-17T17:51:20.671496240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:51:20.673576 containerd[1770]: time="2025-03-17T17:51:20.671506560Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:51:20.673576 containerd[1770]: time="2025-03-17T17:51:20.671518200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:51:20.673722 containerd[1770]: time="2025-03-17T17:51:20.671798520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:51:20.673722 containerd[1770]: time="2025-03-17T17:51:20.671844000Z" level=info msg="Connect containerd service" Mar 17 17:51:20.673722 containerd[1770]: time="2025-03-17T17:51:20.671873200Z" level=info msg="using legacy CRI server" Mar 17 17:51:20.673722 containerd[1770]: time="2025-03-17T17:51:20.671879480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:51:20.673722 containerd[1770]: time="2025-03-17T17:51:20.671990160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:51:20.676412 containerd[1770]: time="2025-03-17T17:51:20.676384080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:51:20.676639 
containerd[1770]: time="2025-03-17T17:51:20.676592600Z" level=info msg="Start subscribing containerd event" Mar 17 17:51:20.676674 containerd[1770]: time="2025-03-17T17:51:20.676652040Z" level=info msg="Start recovering state" Mar 17 17:51:20.676743 containerd[1770]: time="2025-03-17T17:51:20.676724880Z" level=info msg="Start event monitor" Mar 17 17:51:20.676743 containerd[1770]: time="2025-03-17T17:51:20.676740520Z" level=info msg="Start snapshots syncer" Mar 17 17:51:20.676797 containerd[1770]: time="2025-03-17T17:51:20.676751360Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:51:20.676797 containerd[1770]: time="2025-03-17T17:51:20.676758840Z" level=info msg="Start streaming server" Mar 17 17:51:20.677653 containerd[1770]: time="2025-03-17T17:51:20.677629920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:51:20.678562 containerd[1770]: time="2025-03-17T17:51:20.678541040Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:51:20.678771 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:51:20.687467 containerd[1770]: time="2025-03-17T17:51:20.687429960Z" level=info msg="containerd successfully booted in 0.104207s" Mar 17 17:51:20.693018 sshd_keygen[1739]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:51:20.713620 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:51:20.731489 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:51:20.739393 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 17 17:51:20.748490 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:51:20.749042 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:51:20.762835 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:51:20.772570 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 17 17:51:20.799931 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:51:20.812037 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:51:20.818951 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:51:20.826428 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:51:20.844243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:51:20.851381 (kubelet)[1900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:51:20.851405 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:51:20.861741 systemd[1]: Startup finished in 723ms (kernel) + 12.877s (initrd) + 10.573s (userspace) = 24.175s. Mar 17 17:51:21.294736 login[1893]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:21.295831 login[1894]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:21.303575 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:51:21.308500 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
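The "Start cri plugin with config {...}" blob above is containerd's effective CRI configuration flattened onto one line. The load-bearing settings for this host are the overlayfs snapshotter, the runc runtime with SystemdCgroup:true, and the CNI directories (/opt/cni/bin, /etc/cni/net.d) that the "no network config found in /etc/cni/net.d" error refers to; that error is expected this early, since nothing has installed a CNI conf file yet. As a minimal sketch, the same settings in containerd's config.toml form would look roughly like this (the TOML key names are an assumed mapping, not taken from this log):

    # Sketch: the CRI settings visible in the log, written as the equivalent
    # containerd v2 config keys (assumed mapping) and parsed back with the
    # standard library to check the TOML is well formed.
    import tomllib  # Python 3.11+

    ASSUMED_TOML = """
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"                # Snapshotter:overlayfs in the log
    default_runtime_name = "runc"            # DefaultRuntimeName:runc
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"   # Type:io.containerd.runc.v2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true                     # Options:map[SystemdCgroup:true]
    [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"                 # NetworkPluginBinDir
    conf_dir = "/etc/cni/net.d"              # NetworkPluginConfDir
    """
    print(tomllib.loads(ASSUMED_TOML))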
Mar 17 17:51:21.322640 kubelet[1900]: E0317 17:51:21.320362 1900 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:51:21.321918 systemd-logind[1736]: New session 1 of user core. Mar 17 17:51:21.325347 systemd-logind[1736]: New session 2 of user core. Mar 17 17:51:21.328948 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:51:21.329097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:51:21.329778 systemd[1]: kubelet.service: Consumed 706ms CPU time, 242.3M memory peak. Mar 17 17:51:21.335414 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:51:21.341504 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:51:21.344743 (systemd)[1914]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:51:21.346980 systemd-logind[1736]: New session c1 of user core. Mar 17 17:51:21.490718 systemd[1914]: Queued start job for default target default.target. Mar 17 17:51:21.501134 systemd[1914]: Created slice app.slice - User Application Slice. Mar 17 17:51:21.501165 systemd[1914]: Reached target paths.target - Paths. Mar 17 17:51:21.501201 systemd[1914]: Reached target timers.target - Timers. Mar 17 17:51:21.502388 systemd[1914]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:51:21.511658 systemd[1914]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:51:21.511719 systemd[1914]: Reached target sockets.target - Sockets. Mar 17 17:51:21.511758 systemd[1914]: Reached target basic.target - Basic System. Mar 17 17:51:21.511788 systemd[1914]: Reached target default.target - Main User Target. Mar 17 17:51:21.511812 systemd[1914]: Startup finished in 158ms. Mar 17 17:51:21.512179 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:51:21.514408 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:51:21.515732 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 17 17:51:22.542540 waagent[1890]: 2025-03-17T17:51:22.542442Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 17 17:51:22.548937 waagent[1890]: 2025-03-17T17:51:22.548869Z INFO Daemon Daemon OS: flatcar 4230.1.0 Mar 17 17:51:22.553696 waagent[1890]: 2025-03-17T17:51:22.553644Z INFO Daemon Daemon Python: 3.11.11 Mar 17 17:51:22.560291 waagent[1890]: 2025-03-17T17:51:22.559332Z INFO Daemon Daemon Run daemon Mar 17 17:51:22.563680 waagent[1890]: 2025-03-17T17:51:22.563632Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.0' Mar 17 17:51:22.573013 waagent[1890]: 2025-03-17T17:51:22.572950Z INFO Daemon Daemon Using waagent for provisioning Mar 17 17:51:22.578835 waagent[1890]: 2025-03-17T17:51:22.578789Z INFO Daemon Daemon Activate resource disk Mar 17 17:51:22.583699 waagent[1890]: 2025-03-17T17:51:22.583657Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 17 17:51:22.596814 waagent[1890]: 2025-03-17T17:51:22.596758Z INFO Daemon Daemon Found device: None Mar 17 17:51:22.601369 waagent[1890]: 2025-03-17T17:51:22.601326Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 17 17:51:22.610629 waagent[1890]: 2025-03-17T17:51:22.610585Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 17 17:51:22.623010 waagent[1890]: 2025-03-17T17:51:22.622963Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 17:51:22.629040 waagent[1890]: 2025-03-17T17:51:22.628996Z INFO Daemon Daemon Running default provisioning handler Mar 17 17:51:22.640370 waagent[1890]: 2025-03-17T17:51:22.640301Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 17 17:51:22.655403 waagent[1890]: 2025-03-17T17:51:22.655335Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 17:51:22.665719 waagent[1890]: 2025-03-17T17:51:22.665660Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 17:51:22.671324 waagent[1890]: 2025-03-17T17:51:22.671268Z INFO Daemon Daemon Copying ovf-env.xml Mar 17 17:51:22.755059 waagent[1890]: 2025-03-17T17:51:22.754195Z INFO Daemon Daemon Successfully mounted dvd Mar 17 17:51:22.769134 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 17 17:51:22.770952 waagent[1890]: 2025-03-17T17:51:22.770884Z INFO Daemon Daemon Detect protocol endpoint Mar 17 17:51:22.776274 waagent[1890]: 2025-03-17T17:51:22.776211Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 17:51:22.782505 waagent[1890]: 2025-03-17T17:51:22.782452Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 17 17:51:22.789447 waagent[1890]: 2025-03-17T17:51:22.789400Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 17 17:51:22.798319 waagent[1890]: 2025-03-17T17:51:22.795138Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 17 17:51:22.800657 waagent[1890]: 2025-03-17T17:51:22.800611Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 17 17:51:22.835618 waagent[1890]: 2025-03-17T17:51:22.835571Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 17 17:51:22.842775 waagent[1890]: 2025-03-17T17:51:22.842748Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 17 17:51:22.848338 waagent[1890]: 2025-03-17T17:51:22.848299Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 17 17:51:23.082608 waagent[1890]: 2025-03-17T17:51:23.082449Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 17 17:51:23.089654 waagent[1890]: 2025-03-17T17:51:23.089588Z INFO Daemon Daemon Forcing an update of the goal state. Mar 17 17:51:23.099322 waagent[1890]: 2025-03-17T17:51:23.099274Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 17:51:23.143243 waagent[1890]: 2025-03-17T17:51:23.143194Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Mar 17 17:51:23.149509 waagent[1890]: 2025-03-17T17:51:23.149465Z INFO Daemon Mar 17 17:51:23.152573 waagent[1890]: 2025-03-17T17:51:23.152531Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: fa8a7f5b-195e-4da3-b3c5-91ec30024eb6 eTag: 12765720943738268044 source: Fabric] Mar 17 17:51:23.165442 waagent[1890]: 2025-03-17T17:51:23.165394Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 17 17:51:23.173060 waagent[1890]: 2025-03-17T17:51:23.173010Z INFO Daemon Mar 17 17:51:23.176308 waagent[1890]: 2025-03-17T17:51:23.176262Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 17 17:51:23.188008 waagent[1890]: 2025-03-17T17:51:23.187972Z INFO Daemon Daemon Downloading artifacts profile blob Mar 17 17:51:23.278306 waagent[1890]: 2025-03-17T17:51:23.278204Z INFO Daemon Downloaded certificate {'thumbprint': 'E3CDC505828B4E7854870D074B05B9B45D07777A', 'hasPrivateKey': True} Mar 17 17:51:23.289074 waagent[1890]: 2025-03-17T17:51:23.289022Z INFO Daemon Downloaded certificate {'thumbprint': 'E034D2A4A3C4844296EFF3F33786176FE5DCF845', 'hasPrivateKey': False} Mar 17 17:51:23.299542 waagent[1890]: 2025-03-17T17:51:23.299491Z INFO Daemon Fetch goal state completed Mar 17 17:51:23.311987 waagent[1890]: 2025-03-17T17:51:23.311922Z INFO Daemon Daemon Starting provisioning Mar 17 17:51:23.317316 waagent[1890]: 2025-03-17T17:51:23.317271Z INFO Daemon Daemon Handle ovf-env.xml. Mar 17 17:51:23.322219 waagent[1890]: 2025-03-17T17:51:23.322180Z INFO Daemon Daemon Set hostname [ci-4230.1.0-a-3f2b416a0a] Mar 17 17:51:23.345807 waagent[1890]: 2025-03-17T17:51:23.345733Z INFO Daemon Daemon Publish hostname [ci-4230.1.0-a-3f2b416a0a] Mar 17 17:51:23.353040 waagent[1890]: 2025-03-17T17:51:23.352972Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 17 17:51:23.360028 waagent[1890]: 2025-03-17T17:51:23.359968Z INFO Daemon Daemon Primary interface is [eth0] Mar 17 17:51:23.372794 systemd-networkd[1515]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:51:23.373407 systemd-networkd[1515]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
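The sequence above (route test to 168.63.129.16, "Wire protocol version:2012-11-30", then the goal-state and certificate fetches) is the agent talking to the Azure WireServer over plain HTTP. A minimal sketch of that goal-state request, runnable only from inside an Azure VM; the endpoint address and x-ms-version header are taken from the log, while the URL path is an assumption:

    # Sketch of the WireServer goal-state fetch logged above.
    import urllib.request

    req = urllib.request.Request(
        "http://168.63.129.16/machine/?comp=goalstate",  # path assumed
        headers={"x-ms-version": "2012-11-30"},          # version from the log
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        # XML carrying the incarnation number ("incarnation 1" above),
        # the container id, and links to certificates and extension config.
        print(resp.read(400).decode())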
Mar 17 17:51:23.373466 systemd-networkd[1515]: eth0: DHCP lease lost Mar 17 17:51:23.374382 waagent[1890]: 2025-03-17T17:51:23.373987Z INFO Daemon Daemon Create user account if not exists Mar 17 17:51:23.380363 waagent[1890]: 2025-03-17T17:51:23.380305Z INFO Daemon Daemon User core already exists, skip useradd Mar 17 17:51:23.386435 waagent[1890]: 2025-03-17T17:51:23.386354Z INFO Daemon Daemon Configure sudoer Mar 17 17:51:23.391526 waagent[1890]: 2025-03-17T17:51:23.391464Z INFO Daemon Daemon Configure sshd Mar 17 17:51:23.396574 waagent[1890]: 2025-03-17T17:51:23.396518Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 17 17:51:23.410921 waagent[1890]: 2025-03-17T17:51:23.410524Z INFO Daemon Daemon Deploy ssh public key. Mar 17 17:51:23.419343 systemd-networkd[1515]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 17:51:24.513073 waagent[1890]: 2025-03-17T17:51:24.513001Z INFO Daemon Daemon Provisioning complete Mar 17 17:51:24.531906 waagent[1890]: 2025-03-17T17:51:24.531858Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 17 17:51:24.538736 waagent[1890]: 2025-03-17T17:51:24.538687Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 17 17:51:24.548993 waagent[1890]: 2025-03-17T17:51:24.548944Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 17 17:51:24.679911 waagent[1970]: 2025-03-17T17:51:24.679832Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 17 17:51:24.680759 waagent[1970]: 2025-03-17T17:51:24.680356Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.0 Mar 17 17:51:24.680759 waagent[1970]: 2025-03-17T17:51:24.680433Z INFO ExtHandler ExtHandler Python: 3.11.11 Mar 17 17:51:24.732954 waagent[1970]: 2025-03-17T17:51:24.732862Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 17 17:51:24.733152 waagent[1970]: 2025-03-17T17:51:24.733110Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:51:24.733218 waagent[1970]: 2025-03-17T17:51:24.733187Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:51:24.741163 waagent[1970]: 2025-03-17T17:51:24.741099Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 17:51:24.746861 waagent[1970]: 2025-03-17T17:51:24.746819Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 17 17:51:24.747361 waagent[1970]: 2025-03-17T17:51:24.747316Z INFO ExtHandler Mar 17 17:51:24.747435 waagent[1970]: 2025-03-17T17:51:24.747404Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8abb8367-2728-4b52-960e-89327116a36b eTag: 12765720943738268044 source: Fabric] Mar 17 17:51:24.747724 waagent[1970]: 2025-03-17T17:51:24.747684Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 17 17:51:24.748286 waagent[1970]: 2025-03-17T17:51:24.748224Z INFO ExtHandler Mar 17 17:51:24.748358 waagent[1970]: 2025-03-17T17:51:24.748328Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 17 17:51:24.752224 waagent[1970]: 2025-03-17T17:51:24.752190Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 17:51:24.840435 waagent[1970]: 2025-03-17T17:51:24.840292Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E3CDC505828B4E7854870D074B05B9B45D07777A', 'hasPrivateKey': True} Mar 17 17:51:24.840826 waagent[1970]: 2025-03-17T17:51:24.840780Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E034D2A4A3C4844296EFF3F33786176FE5DCF845', 'hasPrivateKey': False} Mar 17 17:51:24.841248 waagent[1970]: 2025-03-17T17:51:24.841205Z INFO ExtHandler Fetch goal state completed Mar 17 17:51:24.857782 waagent[1970]: 2025-03-17T17:51:24.857716Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1970 Mar 17 17:51:24.857948 waagent[1970]: 2025-03-17T17:51:24.857912Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 17 17:51:24.859837 waagent[1970]: 2025-03-17T17:51:24.859768Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.0', '', 'Flatcar Container Linux by Kinvolk'] Mar 17 17:51:24.860316 waagent[1970]: 2025-03-17T17:51:24.860271Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 17 17:51:25.268569 waagent[1970]: 2025-03-17T17:51:25.268520Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 17 17:51:25.268767 waagent[1970]: 2025-03-17T17:51:25.268725Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 17 17:51:25.274765 waagent[1970]: 2025-03-17T17:51:25.274297Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 17 17:51:25.280312 systemd[1]: Reload requested from client PID 1985 ('systemctl') (unit waagent.service)... Mar 17 17:51:25.280328 systemd[1]: Reloading... Mar 17 17:51:25.371522 zram_generator::config[2030]: No configuration found. Mar 17 17:51:25.473112 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:51:25.587048 systemd[1]: Reloading finished in 306 ms. Mar 17 17:51:25.606286 waagent[1970]: 2025-03-17T17:51:25.604503Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 17 17:51:25.609456 systemd[1]: Reload requested from client PID 2078 ('systemctl') (unit waagent.service)... Mar 17 17:51:25.609473 systemd[1]: Reloading... Mar 17 17:51:25.707283 zram_generator::config[2132]: No configuration found. Mar 17 17:51:25.797689 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:51:25.910061 systemd[1]: Reloading finished in 300 ms. 
Mar 17 17:51:25.929322 waagent[1970]: 2025-03-17T17:51:25.928534Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 17 17:51:25.929322 waagent[1970]: 2025-03-17T17:51:25.928699Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 17 17:51:26.209344 waagent[1970]: 2025-03-17T17:51:26.209002Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 17 17:51:26.209724 waagent[1970]: 2025-03-17T17:51:26.209652Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 17 17:51:26.210500 waagent[1970]: 2025-03-17T17:51:26.210413Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 17 17:51:26.210987 waagent[1970]: 2025-03-17T17:51:26.210869Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 17 17:51:26.212049 waagent[1970]: 2025-03-17T17:51:26.211227Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:51:26.212049 waagent[1970]: 2025-03-17T17:51:26.211348Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:51:26.212049 waagent[1970]: 2025-03-17T17:51:26.211482Z INFO EnvHandler ExtHandler Configure routes Mar 17 17:51:26.212049 waagent[1970]: 2025-03-17T17:51:26.211543Z INFO EnvHandler ExtHandler Gateway:None Mar 17 17:51:26.212049 waagent[1970]: 2025-03-17T17:51:26.211585Z INFO EnvHandler ExtHandler Routes:None Mar 17 17:51:26.212349 waagent[1970]: 2025-03-17T17:51:26.212297Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:51:26.212532 waagent[1970]: 2025-03-17T17:51:26.212484Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 17:51:26.212762 waagent[1970]: 2025-03-17T17:51:26.212724Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:51:26.213052 waagent[1970]: 2025-03-17T17:51:26.213008Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 17 17:51:26.213344 waagent[1970]: 2025-03-17T17:51:26.213297Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 17:51:26.213344 waagent[1970]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 17:51:26.213344 waagent[1970]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 17:51:26.213344 waagent[1970]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 17:51:26.213344 waagent[1970]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:51:26.213344 waagent[1970]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:51:26.213344 waagent[1970]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:51:26.213956 waagent[1970]: 2025-03-17T17:51:26.213889Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 17 17:51:26.214276 waagent[1970]: 2025-03-17T17:51:26.214215Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 17 17:51:26.214703 waagent[1970]: 2025-03-17T17:51:26.214167Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 17:51:26.215172 waagent[1970]: 2025-03-17T17:51:26.214791Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 17:51:26.225287 waagent[1970]: 2025-03-17T17:51:26.225226Z INFO ExtHandler ExtHandler Mar 17 17:51:26.225388 waagent[1970]: 2025-03-17T17:51:26.225350Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ff7aa6be-3063-4f19-b355-cce82fc90b96 correlation 67c987f7-bbfe-4e5b-9960-55a34bb768d2 created: 2025-03-17T17:50:10.236710Z] Mar 17 17:51:26.225783 waagent[1970]: 2025-03-17T17:51:26.225737Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 17 17:51:26.226366 waagent[1970]: 2025-03-17T17:51:26.226326Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 17 17:51:26.267818 waagent[1970]: 2025-03-17T17:51:26.267237Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CDD77A01-9666-424F-B63F-977917607427;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 17 17:51:26.273589 waagent[1970]: 2025-03-17T17:51:26.273102Z INFO MonitorHandler ExtHandler Network interfaces: Mar 17 17:51:26.273589 waagent[1970]: Executing ['ip', '-a', '-o', 'link']: Mar 17 17:51:26.273589 waagent[1970]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 17 17:51:26.273589 waagent[1970]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6c:dc:99 brd ff:ff:ff:ff:ff:ff Mar 17 17:51:26.273589 waagent[1970]: 3: enP62970s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6c:dc:99 brd ff:ff:ff:ff:ff:ff\ altname enP62970p0s2 Mar 17 17:51:26.273589 waagent[1970]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 17 17:51:26.273589 waagent[1970]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 17 17:51:26.273589 waagent[1970]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 17 17:51:26.273589 waagent[1970]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 17 17:51:26.273589 waagent[1970]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 17 17:51:26.273589 waagent[1970]: 2: eth0 inet6 fe80::20d:3aff:fe6c:dc99/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 17 17:51:26.273589 waagent[1970]: 3: enP62970s1 inet6 fe80::20d:3aff:fe6c:dc99/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 17 17:51:26.329635 waagent[1970]: 2025-03-17T17:51:26.329562Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Mar 17 17:51:26.329635 waagent[1970]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:26.329635 waagent[1970]: pkts bytes target prot opt in out source destination Mar 17 17:51:26.329635 waagent[1970]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:26.329635 waagent[1970]: pkts bytes target prot opt in out source destination Mar 17 17:51:26.329635 waagent[1970]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:26.329635 waagent[1970]: pkts bytes target prot opt in out source destination Mar 17 17:51:26.329635 waagent[1970]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 17:51:26.329635 waagent[1970]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 17:51:26.329635 waagent[1970]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 17:51:26.332381 waagent[1970]: 2025-03-17T17:51:26.332323Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 17 17:51:26.332381 waagent[1970]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:26.332381 waagent[1970]: pkts bytes target prot opt in out source destination Mar 17 17:51:26.332381 waagent[1970]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:26.332381 waagent[1970]: pkts bytes target prot opt in out source destination Mar 17 17:51:26.332381 waagent[1970]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:26.332381 waagent[1970]: pkts bytes target prot opt in out source destination Mar 17 17:51:26.332381 waagent[1970]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 17:51:26.332381 waagent[1970]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 17:51:26.332381 waagent[1970]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 17:51:26.332616 waagent[1970]: 2025-03-17T17:51:26.332579Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 17 17:51:31.396495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:51:31.404436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:51:31.497904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:51:31.501482 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:51:31.589584 kubelet[2210]: E0317 17:51:31.589541 2210 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:51:31.592373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:51:31.592500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:51:31.593137 systemd[1]: kubelet.service: Consumed 123ms CPU time, 94.8M memory peak. Mar 17 17:51:41.646768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:51:41.652445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:51:41.899461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
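The three OUTPUT rules printed twice above (once when added, once when listed back) are waagent's WireServer firewall: TCP/53 to 168.63.129.16 is allowed, traffic from UID 0 is allowed, and any other new or invalid connection to that address is dropped, so unprivileged workloads cannot reach the host agent endpoint. A sketch of equivalent iptables invocations; the agent installs these itself, and the command form here is an assumption reconstructed from the listed matches:

    # Sketch: iptables commands equivalent to the OUTPUT chain listed above.
    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in RULES:
        subprocess.run(["iptables", "-w", "-A", "OUTPUT", *rule], check=True)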
Mar 17 17:51:41.902496 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:51:41.937430 kubelet[2226]: E0317 17:51:41.937375 2226 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:51:41.939270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:51:41.939396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:51:41.940008 systemd[1]: kubelet.service: Consumed 118ms CPU time, 95.1M memory peak. Mar 17 17:51:43.402492 chronyd[1714]: Selected source PHC0 Mar 17 17:51:47.011375 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:51:47.021728 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:59336.service - OpenSSH per-connection server daemon (10.200.16.10:59336). Mar 17 17:51:47.603467 sshd[2235]: Accepted publickey for core from 10.200.16.10 port 59336 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:51:47.604726 sshd-session[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:47.609223 systemd-logind[1736]: New session 3 of user core. Mar 17 17:51:47.615383 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:51:48.050503 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:59350.service - OpenSSH per-connection server daemon (10.200.16.10:59350). Mar 17 17:51:48.538093 sshd[2240]: Accepted publickey for core from 10.200.16.10 port 59350 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:51:48.539386 sshd-session[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:48.543504 systemd-logind[1736]: New session 4 of user core. Mar 17 17:51:48.551396 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:51:48.893989 sshd[2242]: Connection closed by 10.200.16.10 port 59350 Mar 17 17:51:48.893013 sshd-session[2240]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:48.895597 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:59350.service: Deactivated successfully. Mar 17 17:51:48.897230 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:51:48.898666 systemd-logind[1736]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:51:48.899587 systemd-logind[1736]: Removed session 4. Mar 17 17:51:48.979508 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:38078.service - OpenSSH per-connection server daemon (10.200.16.10:38078). Mar 17 17:51:49.425089 sshd[2248]: Accepted publickey for core from 10.200.16.10 port 38078 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:51:49.426401 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:49.431740 systemd-logind[1736]: New session 5 of user core. Mar 17 17:51:49.438409 systemd[1]: Started session-5.scope - Session 5 of User core. 
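The kubelet failures at 17:51:21, 17:51:31 and 17:51:41 (and again at 17:51:52 and 17:52:03 further down) are all the same startup race: /var/lib/kubelet/config.yaml is only written by kubeadm init/join or equivalent provisioning, so until that happens the kubelet exits with status 1 and systemd reschedules it. The spacing of the "Scheduled restart job" entries shows the pacing (a small sketch with timestamps copied from this log; the ~10 s RestartSec is an inference, since the unit file is not shown here):

    # Spacing of kubelet.service "Scheduled restart job" entries.
    from datetime import datetime

    stamps = ["17:51:31.396495", "17:51:41.646768", "17:51:52.146551", "17:52:02.646530"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    print([round((b - a).total_seconds(), 2) for a, b in zip(times, times[1:])])
    # -> [10.25, 10.5, 10.5], consistent with RestartSec=10 (an inference)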
Mar 17 17:51:49.753490 sshd[2250]: Connection closed by 10.200.16.10 port 38078 Mar 17 17:51:49.754158 sshd-session[2248]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:49.757825 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:38078.service: Deactivated successfully. Mar 17 17:51:49.759506 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:51:49.760217 systemd-logind[1736]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:51:49.761093 systemd-logind[1736]: Removed session 5. Mar 17 17:51:49.847490 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:38088.service - OpenSSH per-connection server daemon (10.200.16.10:38088). Mar 17 17:51:50.329658 sshd[2256]: Accepted publickey for core from 10.200.16.10 port 38088 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:51:50.330909 sshd-session[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:50.336412 systemd-logind[1736]: New session 6 of user core. Mar 17 17:51:50.341425 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:51:50.682047 sshd[2258]: Connection closed by 10.200.16.10 port 38088 Mar 17 17:51:50.682633 sshd-session[2256]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:50.685958 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:38088.service: Deactivated successfully. Mar 17 17:51:50.687613 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:51:50.688348 systemd-logind[1736]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:51:50.689288 systemd-logind[1736]: Removed session 6. Mar 17 17:51:50.773525 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:38096.service - OpenSSH per-connection server daemon (10.200.16.10:38096). Mar 17 17:51:51.218050 sshd[2264]: Accepted publickey for core from 10.200.16.10 port 38096 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:51:51.219324 sshd-session[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:51.224452 systemd-logind[1736]: New session 7 of user core. Mar 17 17:51:51.232430 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:51:51.617571 sudo[2267]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:51:51.617837 sudo[2267]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:51:51.654154 sudo[2267]: pam_unix(sudo:session): session closed for user root Mar 17 17:51:51.734949 sshd[2266]: Connection closed by 10.200.16.10 port 38096 Mar 17 17:51:51.734195 sshd-session[2264]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:51.737752 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:38096.service: Deactivated successfully. Mar 17 17:51:51.739340 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:51:51.740044 systemd-logind[1736]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:51:51.741060 systemd-logind[1736]: Removed session 7. Mar 17 17:51:51.837501 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:38104.service - OpenSSH per-connection server daemon (10.200.16.10:38104). Mar 17 17:51:52.146551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 17:51:52.152429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 17 17:51:52.283435 sshd[2273]: Accepted publickey for core from 10.200.16.10 port 38104 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:51:52.284395 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:52.288773 systemd-logind[1736]: New session 8 of user core. Mar 17 17:51:52.298410 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:51:52.385821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:51:52.389424 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:51:52.427401 kubelet[2284]: E0317 17:51:52.427283 2284 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:51:52.429948 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:51:52.430096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:51:52.430585 systemd[1]: kubelet.service: Consumed 120ms CPU time, 96.6M memory peak. Mar 17 17:51:52.536664 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:51:52.536927 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:51:52.540075 sudo[2293]: pam_unix(sudo:session): session closed for user root Mar 17 17:51:52.544356 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:51:52.544601 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:51:52.561811 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:51:52.584192 augenrules[2315]: No rules Mar 17 17:51:52.584710 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:51:52.584931 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:51:52.586168 sudo[2292]: pam_unix(sudo:session): session closed for user root Mar 17 17:51:52.657003 sshd[2278]: Connection closed by 10.200.16.10 port 38104 Mar 17 17:51:52.657748 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:52.660609 systemd-logind[1736]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:51:52.662240 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:38104.service: Deactivated successfully. Mar 17 17:51:52.664358 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:51:52.665338 systemd-logind[1736]: Removed session 8. Mar 17 17:51:52.769144 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:38106.service - OpenSSH per-connection server daemon (10.200.16.10:38106). Mar 17 17:51:53.259863 sshd[2324]: Accepted publickey for core from 10.200.16.10 port 38106 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:51:53.261124 sshd-session[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:53.265188 systemd-logind[1736]: New session 9 of user core. Mar 17 17:51:53.272401 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 17 17:51:53.534240 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:51:53.534524 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:51:54.790574 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:51:54.790641 (dockerd)[2345]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:51:55.478932 dockerd[2345]: time="2025-03-17T17:51:55.478641111Z" level=info msg="Starting up" Mar 17 17:51:55.849567 dockerd[2345]: time="2025-03-17T17:51:55.849181083Z" level=info msg="Loading containers: start." Mar 17 17:51:56.065294 kernel: Initializing XFRM netlink socket Mar 17 17:51:56.174427 systemd-networkd[1515]: docker0: Link UP Mar 17 17:51:56.219119 dockerd[2345]: time="2025-03-17T17:51:56.218512214Z" level=info msg="Loading containers: done." Mar 17 17:51:56.240786 dockerd[2345]: time="2025-03-17T17:51:56.240742571Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:51:56.241130 dockerd[2345]: time="2025-03-17T17:51:56.241098891Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 17:51:56.241369 dockerd[2345]: time="2025-03-17T17:51:56.241342492Z" level=info msg="Daemon has completed initialization" Mar 17 17:51:56.295801 dockerd[2345]: time="2025-03-17T17:51:56.295747862Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:51:56.295940 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:51:57.944139 containerd[1770]: time="2025-03-17T17:51:57.944094987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 17:51:58.779031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685559384.mount: Deactivated successfully. 
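dockerd's warning above ("Not using native diff for overlay2 ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") means the daemon falls back to the slower naive diff, because with redirect_dir enabled overlayfs can turn directory copies into renames that the native differ would miss. Whether the option is compiled in can be checked from the kernel config when one is exposed (a sketch assuming /proc/config.gz exists; many kernels ship /boot/config-<release> instead):

    # Check whether CONFIG_OVERLAY_FS_REDIRECT_DIR is set in the running kernel.
    # Assumes the kernel exposes its config at /proc/config.gz.
    import gzip

    with gzip.open("/proc/config.gz", "rt") as cfg:
        for line in cfg:
            if line.startswith("CONFIG_OVERLAY_FS_REDIRECT_DIR"):
                print(line.strip())  # "=y" is what triggers dockerd's warning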
Mar 17 17:52:00.724276 containerd[1770]: time="2025-03-17T17:52:00.722501408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:00.728240 containerd[1770]: time="2025-03-17T17:52:00.728201219Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793524" Mar 17 17:52:00.732916 containerd[1770]: time="2025-03-17T17:52:00.732880508Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:00.737841 containerd[1770]: time="2025-03-17T17:52:00.737790638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:00.739137 containerd[1770]: time="2025-03-17T17:52:00.739099800Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 2.794960933s" Mar 17 17:52:00.739137 containerd[1770]: time="2025-03-17T17:52:00.739136680Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 17 17:52:00.763055 containerd[1770]: time="2025-03-17T17:52:00.763009686Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 17:52:02.646530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 17:52:02.653728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:03.042121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:03.045790 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:52:03.090355 kubelet[2603]: E0317 17:52:03.090291 2603 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:52:03.092952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:52:03.093109 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:52:03.094368 systemd[1]: kubelet.service: Consumed 125ms CPU time, 96.8M memory peak. Mar 17 17:52:03.641280 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Mar 17 17:52:03.674324 containerd[1770]: time="2025-03-17T17:52:03.673870199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:03.677580 containerd[1770]: time="2025-03-17T17:52:03.677521126Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861167" Mar 17 17:52:03.682881 containerd[1770]: time="2025-03-17T17:52:03.682826936Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:03.690466 containerd[1770]: time="2025-03-17T17:52:03.690414031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:03.692111 containerd[1770]: time="2025-03-17T17:52:03.691926554Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 2.928871108s" Mar 17 17:52:03.692111 containerd[1770]: time="2025-03-17T17:52:03.692003794Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 17 17:52:03.713444 containerd[1770]: time="2025-03-17T17:52:03.713186995Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 17:52:04.695843 update_engine[1742]: I20250317 17:52:04.695235 1742 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:52:04.750325 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2635) Mar 17 17:52:04.884770 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2635) Mar 17 17:52:05.254750 containerd[1770]: time="2025-03-17T17:52:05.254690637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:05.257618 containerd[1770]: time="2025-03-17T17:52:05.257570402Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264636" Mar 17 17:52:05.262774 containerd[1770]: time="2025-03-17T17:52:05.262713972Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:05.271279 containerd[1770]: time="2025-03-17T17:52:05.271210308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:05.272349 containerd[1770]: time="2025-03-17T17:52:05.272197550Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.558970555s" Mar 17 17:52:05.272349 containerd[1770]: time="2025-03-17T17:52:05.272234150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 17 17:52:05.291004 containerd[1770]: time="2025-03-17T17:52:05.290892266Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:52:06.402277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513251205.mount: Deactivated successfully. 
Mar 17 17:52:06.703909 containerd[1770]: time="2025-03-17T17:52:06.703789714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:06.707632 containerd[1770]: time="2025-03-17T17:52:06.707435200Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771848" Mar 17 17:52:06.713196 containerd[1770]: time="2025-03-17T17:52:06.713137170Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:06.720181 containerd[1770]: time="2025-03-17T17:52:06.720117662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:06.720839 containerd[1770]: time="2025-03-17T17:52:06.720696863Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.429766917s" Mar 17 17:52:06.720839 containerd[1770]: time="2025-03-17T17:52:06.720737383Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 17:52:06.740562 containerd[1770]: time="2025-03-17T17:52:06.740521096Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:52:07.408741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844468401.mount: Deactivated successfully. 
Mar 17 17:52:09.089296 containerd[1770]: time="2025-03-17T17:52:09.088845766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:09.092849 containerd[1770]: time="2025-03-17T17:52:09.092613573Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Mar 17 17:52:09.095797 containerd[1770]: time="2025-03-17T17:52:09.095748299Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:09.102393 containerd[1770]: time="2025-03-17T17:52:09.102326991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:09.103806 containerd[1770]: time="2025-03-17T17:52:09.103485113Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.362923257s" Mar 17 17:52:09.103806 containerd[1770]: time="2025-03-17T17:52:09.103520353Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 17:52:09.123406 containerd[1770]: time="2025-03-17T17:52:09.123362270Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 17:52:09.833198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343765539.mount: Deactivated successfully. 
Mar 17 17:52:09.866501 containerd[1770]: time="2025-03-17T17:52:09.866442040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:09.872051 containerd[1770]: time="2025-03-17T17:52:09.871989170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Mar 17 17:52:09.876861 containerd[1770]: time="2025-03-17T17:52:09.876808659Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:09.883745 containerd[1770]: time="2025-03-17T17:52:09.883691712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:09.884809 containerd[1770]: time="2025-03-17T17:52:09.884398273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 760.995843ms" Mar 17 17:52:09.884809 containerd[1770]: time="2025-03-17T17:52:09.884430233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 17:52:09.904179 containerd[1770]: time="2025-03-17T17:52:09.904125149Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:52:10.647514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1767324808.mount: Deactivated successfully. Mar 17 17:52:13.146507 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 17 17:52:13.156298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:13.266061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:13.270842 (kubelet)[2863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:52:13.313865 kubelet[2863]: E0317 17:52:13.313709 2863 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:52:13.316613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:52:13.316754 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:52:13.317033 systemd[1]: kubelet.service: Consumed 121ms CPU time, 95.7M memory peak. 
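The kubelet crash loop recorded above (systemd restart counters 4 and 5, each run exiting with "open /var/lib/kubelet/config.yaml: no such file or directory") is the expected state of a kubeadm-provisioned node before init/join has written the kubelet's config file. A minimal sketch of a KubeletConfiguration that would satisfy this path follows; it is hypothetical, and only the commented fields are corroborated by entries elsewhere in this log — the rest is illustrative assumption, not the file that later appears on this node.

# Sketch of /var/lib/kubelet/config.yaml (hypothetical)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                        # matches "CgroupDriver":"systemd" in the NodeConfig dump below
staticPodPath: /etc/kubernetes/manifests     # matches "Adding static pod path" logged at kubelet startup
rotateCertificates: true                     # matches "Client rotation is on, will bootstrap in background"
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt # matches the client-ca-bundle controller entry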
Mar 17 17:52:15.052916 containerd[1770]: time="2025-03-17T17:52:15.052589883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:15.056660 containerd[1770]: time="2025-03-17T17:52:15.056595251Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Mar 17 17:52:15.063461 containerd[1770]: time="2025-03-17T17:52:15.063416183Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:15.072174 containerd[1770]: time="2025-03-17T17:52:15.072132719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:15.073321 containerd[1770]: time="2025-03-17T17:52:15.073281801Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 5.169120972s" Mar 17 17:52:15.073321 containerd[1770]: time="2025-03-17T17:52:15.073319281Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 17:52:22.185917 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:22.186601 systemd[1]: kubelet.service: Consumed 121ms CPU time, 95.7M memory peak. Mar 17 17:52:22.197490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:22.223345 systemd[1]: Reload requested from client PID 2937 ('systemctl') (unit session-9.scope)... Mar 17 17:52:22.223360 systemd[1]: Reloading... Mar 17 17:52:22.336299 zram_generator::config[2984]: No configuration found. Mar 17 17:52:22.452276 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:52:22.570801 systemd[1]: Reloading finished in 347 ms. Mar 17 17:52:22.616373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:22.620494 (kubelet)[3041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:52:22.623529 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:22.624635 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:52:22.624893 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:22.624950 systemd[1]: kubelet.service: Consumed 79ms CPU time, 83.5M memory peak. Mar 17 17:52:22.626655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:22.730370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:52:22.740502 (kubelet)[3054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:52:22.783438 kubelet[3054]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:52:22.783438 kubelet[3054]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:52:22.783438 kubelet[3054]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:52:22.783786 kubelet[3054]: I0317 17:52:22.783489 3054 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:52:23.165664 kubelet[3054]: I0317 17:52:23.165625 3054 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:52:23.165664 kubelet[3054]: I0317 17:52:23.165656 3054 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:52:23.165893 kubelet[3054]: I0317 17:52:23.165874 3054 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:52:23.177442 kubelet[3054]: E0317 17:52:23.177411 3054 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:23.178216 kubelet[3054]: I0317 17:52:23.178084 3054 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:52:23.188938 kubelet[3054]: I0317 17:52:23.188165 3054 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:52:23.188938 kubelet[3054]: I0317 17:52:23.188396 3054 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:52:23.188938 kubelet[3054]: I0317 17:52:23.188423 3054 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-a-3f2b416a0a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:52:23.188938 kubelet[3054]: I0317 17:52:23.188681 3054 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:52:23.189171 kubelet[3054]: I0317 17:52:23.188690 3054 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:52:23.189171 kubelet[3054]: I0317 17:52:23.188814 3054 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:52:23.191632 kubelet[3054]: I0317 17:52:23.191598 3054 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:52:23.191632 kubelet[3054]: I0317 17:52:23.191632 3054 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:52:23.191741 kubelet[3054]: I0317 17:52:23.191672 3054 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:52:23.191741 kubelet[3054]: I0317 17:52:23.191699 3054 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:52:23.194108 kubelet[3054]: I0317 17:52:23.194075 3054 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:52:23.194291 kubelet[3054]: I0317 17:52:23.194274 3054 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:52:23.194342 kubelet[3054]: W0317 17:52:23.194325 3054 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
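The HardEvictionThresholds array embedded in the NodeConfig dump above is easier to read as the equivalent KubeletConfiguration stanza. The mapping below restates only values already present in the dump (evictionHard is the upstream config key; LessThan percentages are shown in the usual percent form):

# Equivalent evictionHard stanza for the thresholds logged above
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"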
Mar 17 17:52:23.194898 kubelet[3054]: I0317 17:52:23.194868 3054 server.go:1264] "Started kubelet" Mar 17 17:52:23.199011 kubelet[3054]: I0317 17:52:23.198955 3054 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:52:23.199905 kubelet[3054]: I0317 17:52:23.199880 3054 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:52:23.200763 kubelet[3054]: W0317 17:52:23.200718 3054 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-3f2b416a0a&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:23.200883 kubelet[3054]: E0317 17:52:23.200869 3054 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-3f2b416a0a&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:23.201077 kubelet[3054]: E0317 17:52:23.200983 3054 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.0-a-3f2b416a0a.182da88a63c1c0e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-3f2b416a0a,UID:ci-4230.1.0-a-3f2b416a0a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-3f2b416a0a,},FirstTimestamp:2025-03-17 17:52:23.194845417 +0000 UTC m=+0.451452328,LastTimestamp:2025-03-17 17:52:23.194845417 +0000 UTC m=+0.451452328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-3f2b416a0a,}" Mar 17 17:52:23.201545 kubelet[3054]: W0317 17:52:23.201311 3054 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:23.201545 kubelet[3054]: E0317 17:52:23.201352 3054 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:23.201545 kubelet[3054]: I0317 17:52:23.201500 3054 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:52:23.201813 kubelet[3054]: I0317 17:52:23.201792 3054 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:52:23.203344 kubelet[3054]: I0317 17:52:23.203321 3054 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:52:23.205375 kubelet[3054]: E0317 17:52:23.204618 3054 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-3f2b416a0a\" not found" Mar 17 17:52:23.205375 kubelet[3054]: I0317 17:52:23.204664 3054 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:52:23.205375 kubelet[3054]: I0317 17:52:23.204741 3054 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 
17:52:23.206027 kubelet[3054]: I0317 17:52:23.206013 3054 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:52:23.206571 kubelet[3054]: E0317 17:52:23.206551 3054 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:52:23.207101 kubelet[3054]: W0317 17:52:23.207063 3054 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:23.207199 kubelet[3054]: E0317 17:52:23.207186 3054 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:23.208322 kubelet[3054]: I0317 17:52:23.208303 3054 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:52:23.208516 kubelet[3054]: I0317 17:52:23.208498 3054 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:52:23.208870 kubelet[3054]: E0317 17:52:23.208846 3054 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-3f2b416a0a?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms" Mar 17 17:52:23.209836 kubelet[3054]: I0317 17:52:23.209819 3054 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:52:23.217177 kubelet[3054]: I0317 17:52:23.217125 3054 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:52:23.218134 kubelet[3054]: I0317 17:52:23.218105 3054 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:52:23.218134 kubelet[3054]: I0317 17:52:23.218139 3054 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:52:23.218239 kubelet[3054]: I0317 17:52:23.218162 3054 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:52:23.218239 kubelet[3054]: E0317 17:52:23.218200 3054 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:52:23.224396 kubelet[3054]: W0317 17:52:23.224353 3054 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:23.224516 kubelet[3054]: E0317 17:52:23.224504 3054 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:23.232839 kubelet[3054]: I0317 17:52:23.232800 3054 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:52:23.232839 kubelet[3054]: I0317 17:52:23.232833 3054 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:52:23.232954 kubelet[3054]: I0317 17:52:23.232852 3054 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:52:23.240523 kubelet[3054]: I0317 17:52:23.240499 3054 policy_none.go:49] "None policy: Start" Mar 17 17:52:23.241281 kubelet[3054]: I0317 17:52:23.241207 3054 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:52:23.241281 kubelet[3054]: I0317 17:52:23.241232 3054 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:52:23.254313 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:52:23.267528 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:52:23.271052 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 17 17:52:23.276455 kubelet[3054]: I0317 17:52:23.276001 3054 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:52:23.276455 kubelet[3054]: I0317 17:52:23.276198 3054 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:52:23.276455 kubelet[3054]: I0317 17:52:23.276326 3054 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:52:23.280386 kubelet[3054]: E0317 17:52:23.279572 3054 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.0-a-3f2b416a0a\" not found" Mar 17 17:52:23.306566 kubelet[3054]: I0317 17:52:23.306516 3054 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.307089 kubelet[3054]: E0317 17:52:23.307065 3054 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.319210 kubelet[3054]: I0317 17:52:23.319173 3054 topology_manager.go:215] "Topology Admit Handler" podUID="41caf54de34271fb8fa175613d36b8bc" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.320789 kubelet[3054]: I0317 17:52:23.320761 3054 topology_manager.go:215] "Topology Admit Handler" podUID="cba1f02edbb4cd45a42236db74a55704" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.322236 kubelet[3054]: I0317 17:52:23.322213 3054 topology_manager.go:215] "Topology Admit Handler" podUID="092ee511e5996c8ed84b8b4363a03847" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.329231 systemd[1]: Created slice kubepods-burstable-pod41caf54de34271fb8fa175613d36b8bc.slice - libcontainer container kubepods-burstable-pod41caf54de34271fb8fa175613d36b8bc.slice. Mar 17 17:52:23.352974 systemd[1]: Created slice kubepods-burstable-podcba1f02edbb4cd45a42236db74a55704.slice - libcontainer container kubepods-burstable-podcba1f02edbb4cd45a42236db74a55704.slice. Mar 17 17:52:23.368912 systemd[1]: Created slice kubepods-burstable-pod092ee511e5996c8ed84b8b4363a03847.slice - libcontainer container kubepods-burstable-pod092ee511e5996c8ed84b8b4363a03847.slice. 
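The three "Topology Admit Handler" entries and the kubepods-burstable-pod<UID>.slice units created above correspond to control-plane static pods read from /etc/kubernetes/manifests (the staticPodPath the kubelet logged at startup); each slice name embeds the pod UID, e.g. 41caf54de34271fb8fa175613d36b8bc for kube-apiserver. A minimal sketch of the shape of such a manifest: the image tag matches the pull logged at 17:52:00, the k8s-certs volume name matches the reconciler entries just below, and everything else is an illustrative assumption, not the file actually on this node. (The kubelet appends the node name to static pod names, which is why the pods appear as kube-apiserver-ci-4230.1.0-a-3f2b416a0a and trigger the dots-in-hostname warnings later in the log.)

# Hypothetical sketch of /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.30.11  # pulled earlier in this log
    command: ["kube-apiserver"]                     # flags omitted; node-specific
    volumeMounts:
    - name: k8s-certs                               # matches the reconciler's k8s-certs volume
      mountPath: /etc/kubernetes/pki
      readOnly: true
  volumes:
  - name: k8s-certs
    hostPath:
      path: /etc/kubernetes/pki                     # standard kubeadm layout; assumed here
      type: DirectoryOrCreate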
Mar 17 17:52:23.406809 kubelet[3054]: I0317 17:52:23.406777 3054 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41caf54de34271fb8fa175613d36b8bc-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-a-3f2b416a0a\" (UID: \"41caf54de34271fb8fa175613d36b8bc\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.407152 kubelet[3054]: I0317 17:52:23.406966 3054 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cba1f02edbb4cd45a42236db74a55704-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-a-3f2b416a0a\" (UID: \"cba1f02edbb4cd45a42236db74a55704\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.407152 kubelet[3054]: I0317 17:52:23.406994 3054 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cba1f02edbb4cd45a42236db74a55704-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-3f2b416a0a\" (UID: \"cba1f02edbb4cd45a42236db74a55704\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.407152 kubelet[3054]: I0317 17:52:23.407012 3054 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cba1f02edbb4cd45a42236db74a55704-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-a-3f2b416a0a\" (UID: \"cba1f02edbb4cd45a42236db74a55704\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.407152 kubelet[3054]: I0317 17:52:23.407037 3054 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cba1f02edbb4cd45a42236db74a55704-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-a-3f2b416a0a\" (UID: \"cba1f02edbb4cd45a42236db74a55704\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.407152 kubelet[3054]: I0317 17:52:23.407054 3054 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41caf54de34271fb8fa175613d36b8bc-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-a-3f2b416a0a\" (UID: \"41caf54de34271fb8fa175613d36b8bc\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.407322 kubelet[3054]: I0317 17:52:23.407070 3054 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41caf54de34271fb8fa175613d36b8bc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-a-3f2b416a0a\" (UID: \"41caf54de34271fb8fa175613d36b8bc\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.407322 kubelet[3054]: I0317 17:52:23.407088 3054 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cba1f02edbb4cd45a42236db74a55704-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-3f2b416a0a\" (UID: \"cba1f02edbb4cd45a42236db74a55704\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.407322 kubelet[3054]: I0317 17:52:23.407106 3054 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/092ee511e5996c8ed84b8b4363a03847-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-a-3f2b416a0a\" (UID: \"092ee511e5996c8ed84b8b4363a03847\") " pod="kube-system/kube-scheduler-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.410118 kubelet[3054]: E0317 17:52:23.410081 3054 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-3f2b416a0a?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms" Mar 17 17:52:23.509831 kubelet[3054]: I0317 17:52:23.508964 3054 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.509831 kubelet[3054]: E0317 17:52:23.509279 3054 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.651955 containerd[1770]: time="2025-03-17T17:52:23.651679709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-a-3f2b416a0a,Uid:41caf54de34271fb8fa175613d36b8bc,Namespace:kube-system,Attempt:0,}" Mar 17 17:52:23.667197 containerd[1770]: time="2025-03-17T17:52:23.667151419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-a-3f2b416a0a,Uid:cba1f02edbb4cd45a42236db74a55704,Namespace:kube-system,Attempt:0,}" Mar 17 17:52:23.672147 containerd[1770]: time="2025-03-17T17:52:23.671883108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-a-3f2b416a0a,Uid:092ee511e5996c8ed84b8b4363a03847,Namespace:kube-system,Attempt:0,}" Mar 17 17:52:23.811045 kubelet[3054]: E0317 17:52:23.810757 3054 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-3f2b416a0a?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms" Mar 17 17:52:23.911972 kubelet[3054]: I0317 17:52:23.911644 3054 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:23.911972 kubelet[3054]: E0317 17:52:23.911940 3054 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:24.129656 kubelet[3054]: W0317 17:52:24.129351 3054 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:24.129796 kubelet[3054]: E0317 17:52:24.129771 3054 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:24.185033 kubelet[3054]: W0317 17:52:24.184967 3054 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: 
connect: connection refused Mar 17 17:52:24.185033 kubelet[3054]: E0317 17:52:24.185037 3054 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:24.374161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192107311.mount: Deactivated successfully. Mar 17 17:52:24.409951 containerd[1770]: time="2025-03-17T17:52:24.409617100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:52:24.424227 containerd[1770]: time="2025-03-17T17:52:24.424182768Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 17 17:52:24.429947 containerd[1770]: time="2025-03-17T17:52:24.429911379Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:52:24.437292 containerd[1770]: time="2025-03-17T17:52:24.437087192Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:52:24.447208 containerd[1770]: time="2025-03-17T17:52:24.446975731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:52:24.452565 containerd[1770]: time="2025-03-17T17:52:24.451803700Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:52:24.455216 containerd[1770]: time="2025-03-17T17:52:24.455175306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:52:24.456117 containerd[1770]: time="2025-03-17T17:52:24.456085588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 804.317038ms" Mar 17 17:52:24.457941 containerd[1770]: time="2025-03-17T17:52:24.457880831Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:52:24.463092 containerd[1770]: time="2025-03-17T17:52:24.462923281Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 795.694102ms" Mar 17 17:52:24.485184 containerd[1770]: time="2025-03-17T17:52:24.485124643Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 813.147935ms" Mar 17 17:52:24.554723 kubelet[3054]: W0317 17:52:24.554658 3054 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:24.554723 kubelet[3054]: E0317 17:52:24.554727 3054 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:24.611332 kubelet[3054]: E0317 17:52:24.611285 3054 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-3f2b416a0a?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="1.6s" Mar 17 17:52:24.714854 kubelet[3054]: I0317 17:52:24.714320 3054 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:24.714854 kubelet[3054]: E0317 17:52:24.714699 3054 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:24.741623 kubelet[3054]: W0317 17:52:24.741536 3054 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-3f2b416a0a&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:24.741623 kubelet[3054]: E0317 17:52:24.741602 3054 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-3f2b416a0a&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:25.194928 kubelet[3054]: E0317 17:52:25.194887 3054 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:25.838027 containerd[1770]: time="2025-03-17T17:52:25.837408115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:52:25.838027 containerd[1770]: time="2025-03-17T17:52:25.837470755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:52:25.838027 containerd[1770]: time="2025-03-17T17:52:25.837483315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:25.838027 containerd[1770]: time="2025-03-17T17:52:25.837557676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:25.850627 containerd[1770]: time="2025-03-17T17:52:25.849596898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:52:25.850734 containerd[1770]: time="2025-03-17T17:52:25.850432180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:52:25.850767 containerd[1770]: time="2025-03-17T17:52:25.850711340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:25.851615 containerd[1770]: time="2025-03-17T17:52:25.851561702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:25.852010 containerd[1770]: time="2025-03-17T17:52:25.851792702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:52:25.852010 containerd[1770]: time="2025-03-17T17:52:25.851843703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:52:25.852010 containerd[1770]: time="2025-03-17T17:52:25.851858383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:25.852010 containerd[1770]: time="2025-03-17T17:52:25.851919263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:25.886425 systemd[1]: Started cri-containerd-8ddd556129877ac8c48a208679e3a32c12b217a82da8c7658a6960ea122d54e5.scope - libcontainer container 8ddd556129877ac8c48a208679e3a32c12b217a82da8c7658a6960ea122d54e5. Mar 17 17:52:25.888706 systemd[1]: Started cri-containerd-f67f73f000bba48496541bd22f3daab69330b46efb6c3ead5a845aaada342d00.scope - libcontainer container f67f73f000bba48496541bd22f3daab69330b46efb6c3ead5a845aaada342d00. Mar 17 17:52:25.892531 systemd[1]: Started cri-containerd-bbfcfe2592d13810e5fe177a43cf923f250c6c96c1ea1a83848fa08799d362ba.scope - libcontainer container bbfcfe2592d13810e5fe177a43cf923f250c6c96c1ea1a83848fa08799d362ba. 
Mar 17 17:52:25.906861 kubelet[3054]: W0317 17:52:25.906709 3054 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:25.906861 kubelet[3054]: E0317 17:52:25.906865 3054 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:25.941022 containerd[1770]: time="2025-03-17T17:52:25.940628430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-a-3f2b416a0a,Uid:cba1f02edbb4cd45a42236db74a55704,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ddd556129877ac8c48a208679e3a32c12b217a82da8c7658a6960ea122d54e5\"" Mar 17 17:52:25.943670 containerd[1770]: time="2025-03-17T17:52:25.943630356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-a-3f2b416a0a,Uid:41caf54de34271fb8fa175613d36b8bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f67f73f000bba48496541bd22f3daab69330b46efb6c3ead5a845aaada342d00\"" Mar 17 17:52:25.943898 containerd[1770]: time="2025-03-17T17:52:25.943872036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-a-3f2b416a0a,Uid:092ee511e5996c8ed84b8b4363a03847,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbfcfe2592d13810e5fe177a43cf923f250c6c96c1ea1a83848fa08799d362ba\"" Mar 17 17:52:25.950037 containerd[1770]: time="2025-03-17T17:52:25.949522287Z" level=info msg="CreateContainer within sandbox \"8ddd556129877ac8c48a208679e3a32c12b217a82da8c7658a6960ea122d54e5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:52:25.950510 containerd[1770]: time="2025-03-17T17:52:25.950466009Z" level=info msg="CreateContainer within sandbox \"bbfcfe2592d13810e5fe177a43cf923f250c6c96c1ea1a83848fa08799d362ba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:52:25.951030 containerd[1770]: time="2025-03-17T17:52:25.950995130Z" level=info msg="CreateContainer within sandbox \"f67f73f000bba48496541bd22f3daab69330b46efb6c3ead5a845aaada342d00\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:52:26.025294 kubelet[3054]: W0317 17:52:26.025234 3054 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:26.025294 kubelet[3054]: E0317 17:52:26.025299 3054 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Mar 17 17:52:26.047509 containerd[1770]: time="2025-03-17T17:52:26.047447312Z" level=info msg="CreateContainer within sandbox \"8ddd556129877ac8c48a208679e3a32c12b217a82da8c7658a6960ea122d54e5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0ce80fffda4075ce1746856e94eed208262865f749a4d9220ac92ff6a70a69af\"" Mar 17 17:52:26.048182 containerd[1770]: time="2025-03-17T17:52:26.048152433Z" 
level=info msg="StartContainer for \"0ce80fffda4075ce1746856e94eed208262865f749a4d9220ac92ff6a70a69af\"" Mar 17 17:52:26.071503 systemd[1]: Started cri-containerd-0ce80fffda4075ce1746856e94eed208262865f749a4d9220ac92ff6a70a69af.scope - libcontainer container 0ce80fffda4075ce1746856e94eed208262865f749a4d9220ac92ff6a70a69af. Mar 17 17:52:26.073297 containerd[1770]: time="2025-03-17T17:52:26.073234800Z" level=info msg="CreateContainer within sandbox \"f67f73f000bba48496541bd22f3daab69330b46efb6c3ead5a845aaada342d00\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"00dde573d0abd8efd4041ec492709cbc5b9535a7c2f35ded15ac89df0478e12a\"" Mar 17 17:52:26.073944 containerd[1770]: time="2025-03-17T17:52:26.073901042Z" level=info msg="StartContainer for \"00dde573d0abd8efd4041ec492709cbc5b9535a7c2f35ded15ac89df0478e12a\"" Mar 17 17:52:26.079282 containerd[1770]: time="2025-03-17T17:52:26.077866529Z" level=info msg="CreateContainer within sandbox \"bbfcfe2592d13810e5fe177a43cf923f250c6c96c1ea1a83848fa08799d362ba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"62c378c00364687d800d86e86d6f92019030ca8784389d57043a9d83c8dbf4fe\"" Mar 17 17:52:26.079874 containerd[1770]: time="2025-03-17T17:52:26.079845253Z" level=info msg="StartContainer for \"62c378c00364687d800d86e86d6f92019030ca8784389d57043a9d83c8dbf4fe\"" Mar 17 17:52:26.111420 systemd[1]: Started cri-containerd-00dde573d0abd8efd4041ec492709cbc5b9535a7c2f35ded15ac89df0478e12a.scope - libcontainer container 00dde573d0abd8efd4041ec492709cbc5b9535a7c2f35ded15ac89df0478e12a. Mar 17 17:52:26.123026 systemd[1]: Started cri-containerd-62c378c00364687d800d86e86d6f92019030ca8784389d57043a9d83c8dbf4fe.scope - libcontainer container 62c378c00364687d800d86e86d6f92019030ca8784389d57043a9d83c8dbf4fe. Mar 17 17:52:26.135356 containerd[1770]: time="2025-03-17T17:52:26.134060115Z" level=info msg="StartContainer for \"0ce80fffda4075ce1746856e94eed208262865f749a4d9220ac92ff6a70a69af\" returns successfully" Mar 17 17:52:26.174323 containerd[1770]: time="2025-03-17T17:52:26.173676630Z" level=info msg="StartContainer for \"00dde573d0abd8efd4041ec492709cbc5b9535a7c2f35ded15ac89df0478e12a\" returns successfully" Mar 17 17:52:26.193321 containerd[1770]: time="2025-03-17T17:52:26.193271467Z" level=info msg="StartContainer for \"62c378c00364687d800d86e86d6f92019030ca8784389d57043a9d83c8dbf4fe\" returns successfully" Mar 17 17:52:26.212192 kubelet[3054]: E0317 17:52:26.212128 3054 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-3f2b416a0a?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="3.2s" Mar 17 17:52:26.317005 kubelet[3054]: I0317 17:52:26.316968 3054 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:26.854292 systemd[1]: run-containerd-runc-k8s.io-8ddd556129877ac8c48a208679e3a32c12b217a82da8c7658a6960ea122d54e5-runc.d4BQdM.mount: Deactivated successfully. 
Mar 17 17:52:28.889798 kubelet[3054]: I0317 17:52:28.889731 3054 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:29.203576 kubelet[3054]: I0317 17:52:29.203371 3054 apiserver.go:52] "Watching apiserver" Mar 17 17:52:29.305464 kubelet[3054]: I0317 17:52:29.305427 3054 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:52:31.201122 systemd[1]: Reload requested from client PID 3329 ('systemctl') (unit session-9.scope)... Mar 17 17:52:31.201455 systemd[1]: Reloading... Mar 17 17:52:31.312323 zram_generator::config[3376]: No configuration found. Mar 17 17:52:31.426737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:52:31.553717 systemd[1]: Reloading finished in 351 ms. Mar 17 17:52:31.580876 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:31.599492 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:52:31.599911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:31.600048 systemd[1]: kubelet.service: Consumed 800ms CPU time, 112.3M memory peak. Mar 17 17:52:31.606546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:31.722178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:31.725915 (kubelet)[3440]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:52:31.783927 kubelet[3440]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:52:31.783927 kubelet[3440]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:52:31.783927 kubelet[3440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:52:31.784292 kubelet[3440]: I0317 17:52:31.783956 3440 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:52:31.790205 kubelet[3440]: I0317 17:52:31.790098 3440 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:52:31.790205 kubelet[3440]: I0317 17:52:31.790122 3440 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:52:31.790448 kubelet[3440]: I0317 17:52:31.790324 3440 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:52:31.791898 kubelet[3440]: I0317 17:52:31.791634 3440 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:52:31.795366 kubelet[3440]: I0317 17:52:31.795176 3440 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:52:31.804940 kubelet[3440]: I0317 17:52:31.804772 3440 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:52:31.805062 kubelet[3440]: I0317 17:52:31.804976 3440 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:52:31.805963 kubelet[3440]: I0317 17:52:31.805002 3440 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-a-3f2b416a0a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:52:31.805963 kubelet[3440]: I0317 17:52:31.805177 3440 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:52:31.805963 kubelet[3440]: I0317 17:52:31.805186 3440 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:52:31.805963 kubelet[3440]: I0317 17:52:31.805300 3440 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:52:31.808569 kubelet[3440]: I0317 17:52:31.806571 3440 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:52:31.808569 kubelet[3440]: I0317 17:52:31.806598 3440 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:52:31.808569 kubelet[3440]: I0317 17:52:31.806636 3440 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:52:31.808569 kubelet[3440]: I0317 17:52:31.806655 3440 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:52:31.809804 kubelet[3440]: I0317 17:52:31.809657 3440 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:52:31.811473 kubelet[3440]: I0317 17:52:31.810096 3440 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:52:31.811473 kubelet[3440]: I0317 17:52:31.810501 3440 server.go:1264] "Started kubelet" Mar 17 17:52:31.813409 kubelet[3440]: I0317 17:52:31.813380 3440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:52:31.813552 kubelet[3440]: I0317 17:52:31.813385 3440 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:52:31.815330 kubelet[3440]: I0317 17:52:31.814645 3440 server.go:455] "Adding 
debug handlers to kubelet server" Mar 17 17:52:31.817909 kubelet[3440]: I0317 17:52:31.817221 3440 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:52:31.818698 kubelet[3440]: I0317 17:52:31.818365 3440 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:52:31.819399 kubelet[3440]: I0317 17:52:31.819181 3440 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:52:31.819986 kubelet[3440]: I0317 17:52:31.819198 3440 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:52:31.820205 kubelet[3440]: I0317 17:52:31.820190 3440 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:52:31.823852 kubelet[3440]: I0317 17:52:31.822550 3440 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:52:31.823852 kubelet[3440]: I0317 17:52:31.823666 3440 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:52:31.823852 kubelet[3440]: I0317 17:52:31.823699 3440 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:52:31.823852 kubelet[3440]: I0317 17:52:31.823715 3440 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:52:31.823852 kubelet[3440]: E0317 17:52:31.823773 3440 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:52:31.826049 kubelet[3440]: I0317 17:52:31.824484 3440 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:52:31.826303 kubelet[3440]: I0317 17:52:31.826279 3440 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:52:31.831471 kubelet[3440]: I0317 17:52:31.831451 3440 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:52:31.910076 kubelet[3440]: I0317 17:52:31.910041 3440 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:52:31.910204 kubelet[3440]: I0317 17:52:31.910176 3440 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:52:31.910204 kubelet[3440]: I0317 17:52:31.910198 3440 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:52:31.910468 kubelet[3440]: I0317 17:52:31.910444 3440 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:52:31.910564 kubelet[3440]: I0317 17:52:31.910464 3440 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:52:31.910564 kubelet[3440]: I0317 17:52:31.910501 3440 policy_none.go:49] "None policy: Start" Mar 17 17:52:31.911668 kubelet[3440]: I0317 17:52:31.911646 3440 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:52:31.911728 kubelet[3440]: I0317 17:52:31.911687 3440 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:52:31.911830 kubelet[3440]: I0317 17:52:31.911809 3440 state_mem.go:75] "Updated machine memory state" Mar 17 17:52:31.916452 kubelet[3440]: I0317 17:52:31.916426 3440 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:52:31.916693 kubelet[3440]: I0317 17:52:31.916606 3440 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:52:31.916732 kubelet[3440]: I0317 
17:52:31.916707 3440 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:52:31.923875 kubelet[3440]: I0317 17:52:31.923828 3440 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:31.925469 kubelet[3440]: I0317 17:52:31.924082 3440 topology_manager.go:215] "Topology Admit Handler" podUID="41caf54de34271fb8fa175613d36b8bc" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:31.925469 kubelet[3440]: I0317 17:52:31.924173 3440 topology_manager.go:215] "Topology Admit Handler" podUID="cba1f02edbb4cd45a42236db74a55704" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:31.925469 kubelet[3440]: I0317 17:52:31.924208 3440 topology_manager.go:215] "Topology Admit Handler" podUID="092ee511e5996c8ed84b8b4363a03847" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:31.950896 kubelet[3440]: W0317 17:52:31.950868 3440 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:52:31.951690 kubelet[3440]: W0317 17:52:31.951236 3440 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:52:31.951690 kubelet[3440]: W0317 17:52:31.951298 3440 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:52:31.951690 kubelet[3440]: I0317 17:52:31.951378 3440 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:31.951690 kubelet[3440]: I0317 17:52:31.951435 3440 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:32.122114 kubelet[3440]: I0317 17:52:32.121984 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cba1f02edbb4cd45a42236db74a55704-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-3f2b416a0a\" (UID: \"cba1f02edbb4cd45a42236db74a55704\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:32.122114 kubelet[3440]: I0317 17:52:32.122037 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/092ee511e5996c8ed84b8b4363a03847-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-a-3f2b416a0a\" (UID: \"092ee511e5996c8ed84b8b4363a03847\") " pod="kube-system/kube-scheduler-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:32.122114 kubelet[3440]: I0317 17:52:32.122060 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41caf54de34271fb8fa175613d36b8bc-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-a-3f2b416a0a\" (UID: \"41caf54de34271fb8fa175613d36b8bc\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:32.122114 kubelet[3440]: I0317 17:52:32.122078 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41caf54de34271fb8fa175613d36b8bc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-a-3f2b416a0a\" (UID: 
\"41caf54de34271fb8fa175613d36b8bc\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:32.122315 kubelet[3440]: I0317 17:52:32.122124 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cba1f02edbb4cd45a42236db74a55704-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-3f2b416a0a\" (UID: \"cba1f02edbb4cd45a42236db74a55704\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:32.122315 kubelet[3440]: I0317 17:52:32.122142 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cba1f02edbb4cd45a42236db74a55704-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-a-3f2b416a0a\" (UID: \"cba1f02edbb4cd45a42236db74a55704\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:32.122315 kubelet[3440]: I0317 17:52:32.122159 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41caf54de34271fb8fa175613d36b8bc-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-a-3f2b416a0a\" (UID: \"41caf54de34271fb8fa175613d36b8bc\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:32.122315 kubelet[3440]: I0317 17:52:32.122187 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cba1f02edbb4cd45a42236db74a55704-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-a-3f2b416a0a\" (UID: \"cba1f02edbb4cd45a42236db74a55704\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:32.122315 kubelet[3440]: I0317 17:52:32.122203 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cba1f02edbb4cd45a42236db74a55704-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-a-3f2b416a0a\" (UID: \"cba1f02edbb4cd45a42236db74a55704\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:32.211739 sudo[3470]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:52:32.212001 sudo[3470]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:52:32.654391 sudo[3470]: pam_unix(sudo:session): session closed for user root Mar 17 17:52:32.809763 kubelet[3440]: I0317 17:52:32.809508 3440 apiserver.go:52] "Watching apiserver" Mar 17 17:52:32.820971 kubelet[3440]: I0317 17:52:32.820917 3440 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:52:32.905739 kubelet[3440]: W0317 17:52:32.904686 3440 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:52:32.905739 kubelet[3440]: E0317 17:52:32.904746 3440 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.0-a-3f2b416a0a\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.0-a-3f2b416a0a" Mar 17 17:52:32.929054 kubelet[3440]: I0317 17:52:32.928975 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.0-a-3f2b416a0a" podStartSLOduration=1.928957255 
podStartE2EDuration="1.928957255s" podCreationTimestamp="2025-03-17 17:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:32.917990953 +0000 UTC m=+1.189142451" watchObservedRunningTime="2025-03-17 17:52:32.928957255 +0000 UTC m=+1.200108753" Mar 17 17:52:32.929697 kubelet[3440]: I0317 17:52:32.929668 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.0-a-3f2b416a0a" podStartSLOduration=1.929658817 podStartE2EDuration="1.929658817s" podCreationTimestamp="2025-03-17 17:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:32.928042413 +0000 UTC m=+1.199193911" watchObservedRunningTime="2025-03-17 17:52:32.929658817 +0000 UTC m=+1.200810315" Mar 17 17:52:32.953628 kubelet[3440]: I0317 17:52:32.953294 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.0-a-3f2b416a0a" podStartSLOduration=1.9532762639999999 podStartE2EDuration="1.953276264s" podCreationTimestamp="2025-03-17 17:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:32.939893077 +0000 UTC m=+1.211044575" watchObservedRunningTime="2025-03-17 17:52:32.953276264 +0000 UTC m=+1.224427722" Mar 17 17:52:34.456862 sudo[2327]: pam_unix(sudo:session): session closed for user root Mar 17 17:52:34.539290 sshd[2326]: Connection closed by 10.200.16.10 port 38106 Mar 17 17:52:34.539828 sshd-session[2324]: pam_unix(sshd:session): session closed for user core Mar 17 17:52:34.542455 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:38106.service: Deactivated successfully. Mar 17 17:52:34.545862 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:52:34.546207 systemd[1]: session-9.scope: Consumed 9.116s CPU time, 291.3M memory peak. Mar 17 17:52:34.548174 systemd-logind[1736]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:52:34.549726 systemd-logind[1736]: Removed session 9. Mar 17 17:52:43.986126 kubelet[3440]: I0317 17:52:43.986042 3440 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:52:43.986662 containerd[1770]: time="2025-03-17T17:52:43.986582090Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:52:43.987107 kubelet[3440]: I0317 17:52:43.986801 3440 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:52:44.826904 kubelet[3440]: I0317 17:52:44.826823 3440 topology_manager.go:215] "Topology Admit Handler" podUID="ee1064ad-fa08-44fb-8c34-af9ed459dc1a" podNamespace="kube-system" podName="kube-proxy-q6dfs" Mar 17 17:52:44.839466 systemd[1]: Created slice kubepods-besteffort-podee1064ad_fa08_44fb_8c34_af9ed459dc1a.slice - libcontainer container kubepods-besteffort-podee1064ad_fa08_44fb_8c34_af9ed459dc1a.slice. 
Mar 17 17:52:44.844861 kubelet[3440]: I0317 17:52:44.844021 3440 topology_manager.go:215] "Topology Admit Handler" podUID="f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" podNamespace="kube-system" podName="cilium-v6bv7" Mar 17 17:52:44.857174 systemd[1]: Created slice kubepods-burstable-podf6540c1c_eef8_4eb8_ab0d_b36d01e78ac5.slice - libcontainer container kubepods-burstable-podf6540c1c_eef8_4eb8_ab0d_b36d01e78ac5.slice. Mar 17 17:52:44.893210 kubelet[3440]: I0317 17:52:44.893153 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-cgroup\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893210 kubelet[3440]: I0317 17:52:44.893194 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-config-path\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893210 kubelet[3440]: I0317 17:52:44.893216 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-host-proc-sys-kernel\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893428 kubelet[3440]: I0317 17:52:44.893233 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee1064ad-fa08-44fb-8c34-af9ed459dc1a-xtables-lock\") pod \"kube-proxy-q6dfs\" (UID: \"ee1064ad-fa08-44fb-8c34-af9ed459dc1a\") " pod="kube-system/kube-proxy-q6dfs" Mar 17 17:52:44.893428 kubelet[3440]: I0317 17:52:44.893264 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cni-path\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893428 kubelet[3440]: I0317 17:52:44.893284 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-hubble-tls\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893428 kubelet[3440]: I0317 17:52:44.893298 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7qnm\" (UniqueName: \"kubernetes.io/projected/ee1064ad-fa08-44fb-8c34-af9ed459dc1a-kube-api-access-j7qnm\") pod \"kube-proxy-q6dfs\" (UID: \"ee1064ad-fa08-44fb-8c34-af9ed459dc1a\") " pod="kube-system/kube-proxy-q6dfs" Mar 17 17:52:44.893428 kubelet[3440]: I0317 17:52:44.893319 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-host-proc-sys-net\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893428 kubelet[3440]: I0317 17:52:44.893333 3440 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-lib-modules\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893553 kubelet[3440]: I0317 17:52:44.893348 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhwqf\" (UniqueName: \"kubernetes.io/projected/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-kube-api-access-dhwqf\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893553 kubelet[3440]: I0317 17:52:44.893363 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee1064ad-fa08-44fb-8c34-af9ed459dc1a-lib-modules\") pod \"kube-proxy-q6dfs\" (UID: \"ee1064ad-fa08-44fb-8c34-af9ed459dc1a\") " pod="kube-system/kube-proxy-q6dfs" Mar 17 17:52:44.893553 kubelet[3440]: I0317 17:52:44.893379 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-bpf-maps\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893553 kubelet[3440]: I0317 17:52:44.893416 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-hostproc\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893553 kubelet[3440]: I0317 17:52:44.893432 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee1064ad-fa08-44fb-8c34-af9ed459dc1a-kube-proxy\") pod \"kube-proxy-q6dfs\" (UID: \"ee1064ad-fa08-44fb-8c34-af9ed459dc1a\") " pod="kube-system/kube-proxy-q6dfs" Mar 17 17:52:44.893553 kubelet[3440]: I0317 17:52:44.893450 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-run\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893669 kubelet[3440]: I0317 17:52:44.893464 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-etc-cni-netd\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893669 kubelet[3440]: I0317 17:52:44.893479 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-xtables-lock\") pod \"cilium-v6bv7\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:44.893669 kubelet[3440]: I0317 17:52:44.893493 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-clustermesh-secrets\") pod \"cilium-v6bv7\" 
(UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " pod="kube-system/cilium-v6bv7" Mar 17 17:52:45.087691 kubelet[3440]: I0317 17:52:45.087374 3440 topology_manager.go:215] "Topology Admit Handler" podUID="a5d41034-9ac4-4f77-b122-2b32312efd8e" podNamespace="kube-system" podName="cilium-operator-599987898-fctkr" Mar 17 17:52:45.094987 kubelet[3440]: I0317 17:52:45.094730 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5d41034-9ac4-4f77-b122-2b32312efd8e-cilium-config-path\") pod \"cilium-operator-599987898-fctkr\" (UID: \"a5d41034-9ac4-4f77-b122-2b32312efd8e\") " pod="kube-system/cilium-operator-599987898-fctkr" Mar 17 17:52:45.094987 kubelet[3440]: I0317 17:52:45.094769 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzm5z\" (UniqueName: \"kubernetes.io/projected/a5d41034-9ac4-4f77-b122-2b32312efd8e-kube-api-access-vzm5z\") pod \"cilium-operator-599987898-fctkr\" (UID: \"a5d41034-9ac4-4f77-b122-2b32312efd8e\") " pod="kube-system/cilium-operator-599987898-fctkr" Mar 17 17:52:45.097571 systemd[1]: Created slice kubepods-besteffort-poda5d41034_9ac4_4f77_b122_2b32312efd8e.slice - libcontainer container kubepods-besteffort-poda5d41034_9ac4_4f77_b122_2b32312efd8e.slice. Mar 17 17:52:45.148223 containerd[1770]: time="2025-03-17T17:52:45.148179958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q6dfs,Uid:ee1064ad-fa08-44fb-8c34-af9ed459dc1a,Namespace:kube-system,Attempt:0,}" Mar 17 17:52:45.162242 containerd[1770]: time="2025-03-17T17:52:45.162051944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v6bv7,Uid:f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5,Namespace:kube-system,Attempt:0,}" Mar 17 17:52:45.222750 containerd[1770]: time="2025-03-17T17:52:45.222540936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:52:45.222750 containerd[1770]: time="2025-03-17T17:52:45.222595576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:52:45.222750 containerd[1770]: time="2025-03-17T17:52:45.222617456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:45.222750 containerd[1770]: time="2025-03-17T17:52:45.222705336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:45.228535 containerd[1770]: time="2025-03-17T17:52:45.227698266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:52:45.228535 containerd[1770]: time="2025-03-17T17:52:45.228200666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:52:45.228535 containerd[1770]: time="2025-03-17T17:52:45.228234587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:45.228535 containerd[1770]: time="2025-03-17T17:52:45.228425147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:45.243410 systemd[1]: Started cri-containerd-460010da32977ac2a41ae63d475ca7bc84c3a66721081314c869edf9e2cce99e.scope - libcontainer container 460010da32977ac2a41ae63d475ca7bc84c3a66721081314c869edf9e2cce99e. Mar 17 17:52:45.247849 systemd[1]: Started cri-containerd-d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb.scope - libcontainer container d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb. Mar 17 17:52:45.275550 containerd[1770]: time="2025-03-17T17:52:45.275438274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v6bv7,Uid:f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\"" Mar 17 17:52:45.276673 containerd[1770]: time="2025-03-17T17:52:45.276633916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q6dfs,Uid:ee1064ad-fa08-44fb-8c34-af9ed459dc1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"460010da32977ac2a41ae63d475ca7bc84c3a66721081314c869edf9e2cce99e\"" Mar 17 17:52:45.280192 containerd[1770]: time="2025-03-17T17:52:45.280087082Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:52:45.281987 containerd[1770]: time="2025-03-17T17:52:45.281921606Z" level=info msg="CreateContainer within sandbox \"460010da32977ac2a41ae63d475ca7bc84c3a66721081314c869edf9e2cce99e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:52:45.333539 containerd[1770]: time="2025-03-17T17:52:45.333462781Z" level=info msg="CreateContainer within sandbox \"460010da32977ac2a41ae63d475ca7bc84c3a66721081314c869edf9e2cce99e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a6f75b2a963514b523cb12c5fb6018f52a80791e5baec8629abba89f65ad140\"" Mar 17 17:52:45.334079 containerd[1770]: time="2025-03-17T17:52:45.333975142Z" level=info msg="StartContainer for \"0a6f75b2a963514b523cb12c5fb6018f52a80791e5baec8629abba89f65ad140\"" Mar 17 17:52:45.357445 systemd[1]: Started cri-containerd-0a6f75b2a963514b523cb12c5fb6018f52a80791e5baec8629abba89f65ad140.scope - libcontainer container 0a6f75b2a963514b523cb12c5fb6018f52a80791e5baec8629abba89f65ad140. Mar 17 17:52:45.391533 containerd[1770]: time="2025-03-17T17:52:45.391485568Z" level=info msg="StartContainer for \"0a6f75b2a963514b523cb12c5fb6018f52a80791e5baec8629abba89f65ad140\" returns successfully" Mar 17 17:52:45.400834 containerd[1770]: time="2025-03-17T17:52:45.400697225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fctkr,Uid:a5d41034-9ac4-4f77-b122-2b32312efd8e,Namespace:kube-system,Attempt:0,}" Mar 17 17:52:45.468682 containerd[1770]: time="2025-03-17T17:52:45.467685069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:52:45.468682 containerd[1770]: time="2025-03-17T17:52:45.467738309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:52:45.468682 containerd[1770]: time="2025-03-17T17:52:45.467752909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:45.468682 containerd[1770]: time="2025-03-17T17:52:45.467820430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:45.485431 systemd[1]: Started cri-containerd-fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f.scope - libcontainer container fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f. Mar 17 17:52:45.517482 containerd[1770]: time="2025-03-17T17:52:45.517388201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fctkr,Uid:a5d41034-9ac4-4f77-b122-2b32312efd8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\"" Mar 17 17:52:45.930136 kubelet[3440]: I0317 17:52:45.929950 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q6dfs" podStartSLOduration=1.929931364 podStartE2EDuration="1.929931364s" podCreationTimestamp="2025-03-17 17:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:45.929600723 +0000 UTC m=+14.200752221" watchObservedRunningTime="2025-03-17 17:52:45.929931364 +0000 UTC m=+14.201082862" Mar 17 17:52:49.924956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437483554.mount: Deactivated successfully. Mar 17 17:52:51.543520 containerd[1770]: time="2025-03-17T17:52:51.543463642Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:51.547714 containerd[1770]: time="2025-03-17T17:52:51.547528209Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:52:51.551768 containerd[1770]: time="2025-03-17T17:52:51.551712898Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:51.553637 containerd[1770]: time="2025-03-17T17:52:51.553501021Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.273375819s" Mar 17 17:52:51.553637 containerd[1770]: time="2025-03-17T17:52:51.553540421Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:52:51.555279 containerd[1770]: time="2025-03-17T17:52:51.555071464Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:52:51.563283 containerd[1770]: time="2025-03-17T17:52:51.563140880Z" level=info msg="CreateContainer within sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 
17:52:51.601891 containerd[1770]: time="2025-03-17T17:52:51.601770475Z" level=info msg="CreateContainer within sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b\"" Mar 17 17:52:51.602801 containerd[1770]: time="2025-03-17T17:52:51.602620917Z" level=info msg="StartContainer for \"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b\"" Mar 17 17:52:51.626704 systemd[1]: run-containerd-runc-k8s.io-978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b-runc.qxyWoO.mount: Deactivated successfully. Mar 17 17:52:51.634421 systemd[1]: Started cri-containerd-978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b.scope - libcontainer container 978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b. Mar 17 17:52:51.659908 containerd[1770]: time="2025-03-17T17:52:51.659862869Z" level=info msg="StartContainer for \"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b\" returns successfully" Mar 17 17:52:51.665389 systemd[1]: cri-containerd-978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b.scope: Deactivated successfully. Mar 17 17:52:52.588846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b-rootfs.mount: Deactivated successfully. Mar 17 17:52:53.474274 containerd[1770]: time="2025-03-17T17:52:53.474200890Z" level=info msg="shim disconnected" id=978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b namespace=k8s.io Mar 17 17:52:53.474274 containerd[1770]: time="2025-03-17T17:52:53.474268050Z" level=warning msg="cleaning up after shim disconnected" id=978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b namespace=k8s.io Mar 17 17:52:53.474274 containerd[1770]: time="2025-03-17T17:52:53.474278090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:52:53.489160 containerd[1770]: time="2025-03-17T17:52:53.488030997Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:52:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:52:53.933855 containerd[1770]: time="2025-03-17T17:52:53.933805867Z" level=info msg="CreateContainer within sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:52:53.981282 containerd[1770]: time="2025-03-17T17:52:53.981129158Z" level=info msg="CreateContainer within sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4\"" Mar 17 17:52:53.983475 containerd[1770]: time="2025-03-17T17:52:53.983434922Z" level=info msg="StartContainer for \"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4\"" Mar 17 17:52:54.022411 systemd[1]: Started cri-containerd-023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4.scope - libcontainer container 023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4. 
Mar 17 17:52:54.049090 containerd[1770]: time="2025-03-17T17:52:54.049045122Z" level=info msg="StartContainer for \"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4\" returns successfully" Mar 17 17:52:54.058245 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:52:54.059223 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:52:54.059559 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:52:54.064622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:52:54.068514 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 17:52:54.069136 systemd[1]: cri-containerd-023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4.scope: Deactivated successfully. Mar 17 17:52:54.088313 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:52:54.103418 containerd[1770]: time="2025-03-17T17:52:54.103364301Z" level=info msg="shim disconnected" id=023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4 namespace=k8s.io Mar 17 17:52:54.103717 containerd[1770]: time="2025-03-17T17:52:54.103560581Z" level=warning msg="cleaning up after shim disconnected" id=023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4 namespace=k8s.io Mar 17 17:52:54.103717 containerd[1770]: time="2025-03-17T17:52:54.103575101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:52:54.936641 containerd[1770]: time="2025-03-17T17:52:54.936595457Z" level=info msg="CreateContainer within sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:52:54.964020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4-rootfs.mount: Deactivated successfully. Mar 17 17:52:54.999370 containerd[1770]: time="2025-03-17T17:52:54.999279572Z" level=info msg="CreateContainer within sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f\"" Mar 17 17:52:54.999928 containerd[1770]: time="2025-03-17T17:52:54.999817212Z" level=info msg="StartContainer for \"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f\"" Mar 17 17:52:55.024458 systemd[1]: Started cri-containerd-c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f.scope - libcontainer container c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f. Mar 17 17:52:55.052371 systemd[1]: cri-containerd-c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f.scope: Deactivated successfully. 
Mar 17 17:52:55.055694 containerd[1770]: time="2025-03-17T17:52:55.054797273Z" level=info msg="StartContainer for \"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f\" returns successfully" Mar 17 17:52:55.089169 containerd[1770]: time="2025-03-17T17:52:55.089066535Z" level=info msg="shim disconnected" id=c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f namespace=k8s.io Mar 17 17:52:55.089169 containerd[1770]: time="2025-03-17T17:52:55.089166615Z" level=warning msg="cleaning up after shim disconnected" id=c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f namespace=k8s.io Mar 17 17:52:55.089169 containerd[1770]: time="2025-03-17T17:52:55.089175855Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:52:55.942688 containerd[1770]: time="2025-03-17T17:52:55.942196135Z" level=info msg="CreateContainer within sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:52:55.962310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f-rootfs.mount: Deactivated successfully. Mar 17 17:52:55.977156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3215539340.mount: Deactivated successfully. Mar 17 17:52:55.993624 containerd[1770]: time="2025-03-17T17:52:55.993538470Z" level=info msg="CreateContainer within sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db\"" Mar 17 17:52:55.994951 containerd[1770]: time="2025-03-17T17:52:55.994130911Z" level=info msg="StartContainer for \"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db\"" Mar 17 17:52:56.019424 systemd[1]: Started cri-containerd-689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db.scope - libcontainer container 689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db. Mar 17 17:52:56.041096 systemd[1]: cri-containerd-689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db.scope: Deactivated successfully. Mar 17 17:52:56.047211 containerd[1770]: time="2025-03-17T17:52:56.047049008Z" level=info msg="StartContainer for \"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db\" returns successfully" Mar 17 17:52:56.081227 containerd[1770]: time="2025-03-17T17:52:56.081162631Z" level=info msg="shim disconnected" id=689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db namespace=k8s.io Mar 17 17:52:56.081227 containerd[1770]: time="2025-03-17T17:52:56.081217471Z" level=warning msg="cleaning up after shim disconnected" id=689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db namespace=k8s.io Mar 17 17:52:56.081227 containerd[1770]: time="2025-03-17T17:52:56.081225791Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:52:56.945520 containerd[1770]: time="2025-03-17T17:52:56.945380778Z" level=info msg="CreateContainer within sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:52:56.964177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db-rootfs.mount: Deactivated successfully. 
Mar 17 17:52:56.987615 containerd[1770]: time="2025-03-17T17:52:56.987565536Z" level=info msg="CreateContainer within sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\"" Mar 17 17:52:56.988397 containerd[1770]: time="2025-03-17T17:52:56.988322537Z" level=info msg="StartContainer for \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\"" Mar 17 17:52:57.020439 systemd[1]: Started cri-containerd-9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514.scope - libcontainer container 9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514. Mar 17 17:52:57.051386 containerd[1770]: time="2025-03-17T17:52:57.051204773Z" level=info msg="StartContainer for \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\" returns successfully" Mar 17 17:52:57.193191 kubelet[3440]: I0317 17:52:57.192914 3440 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:52:57.255615 kubelet[3440]: I0317 17:52:57.255426 3440 topology_manager.go:215] "Topology Admit Handler" podUID="5e281a91-d177-4300-addb-b2bd732a0970" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jmr4r" Mar 17 17:52:57.259910 kubelet[3440]: I0317 17:52:57.258387 3440 topology_manager.go:215] "Topology Admit Handler" podUID="da23ac2b-59a2-4afa-a0ea-94d33244da93" podNamespace="kube-system" podName="coredns-7db6d8ff4d-p2nr4" Mar 17 17:52:57.274586 systemd[1]: Created slice kubepods-burstable-pod5e281a91_d177_4300_addb_b2bd732a0970.slice - libcontainer container kubepods-burstable-pod5e281a91_d177_4300_addb_b2bd732a0970.slice. Mar 17 17:52:57.289685 systemd[1]: Created slice kubepods-burstable-podda23ac2b_59a2_4afa_a0ea_94d33244da93.slice - libcontainer container kubepods-burstable-podda23ac2b_59a2_4afa_a0ea_94d33244da93.slice. 
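Between 17:52:51 and 17:52:57 the kubelet walks the cilium-v6bv7 pod's init containers inside the one sandbox d0bc1c12…: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each start, exit (hence the "scope: Deactivated successfully" and "shim disconnected" cleanup lines), and only then does the long-running cilium-agent container start. A sketch that recovers that ordering from a saved copy of this journal, keyed on the CreateContainer messages above; the file name is hypothetical and the sketch assumes one entry per line:

    import re

    # Matches both the "CreateContainer ... for container &ContainerMetadata{Name:...}"
    # requests and the "... for &ContainerMetadata{Name:...} returns container id"
    # responses for the cilium-v6bv7 sandbox (d0bc1c12...).
    PAT = re.compile(r'within sandbox \\"d0bc1c12.*?&ContainerMetadata\{Name:([\w-]+),')

    order = []
    with open("journal.log") as fh:      # hypothetical file name
        for line in fh:
            m = PAT.search(line)
            if m and m.group(1) not in order:
                order.append(m.group(1))

    print(order)
    # ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
    #  'clean-cilium-state', 'cilium-agent']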
Mar 17 17:52:57.371434 kubelet[3440]: I0317 17:52:57.371293 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcq9j\" (UniqueName: \"kubernetes.io/projected/da23ac2b-59a2-4afa-a0ea-94d33244da93-kube-api-access-kcq9j\") pod \"coredns-7db6d8ff4d-p2nr4\" (UID: \"da23ac2b-59a2-4afa-a0ea-94d33244da93\") " pod="kube-system/coredns-7db6d8ff4d-p2nr4" Mar 17 17:52:57.371434 kubelet[3440]: I0317 17:52:57.371390 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da23ac2b-59a2-4afa-a0ea-94d33244da93-config-volume\") pod \"coredns-7db6d8ff4d-p2nr4\" (UID: \"da23ac2b-59a2-4afa-a0ea-94d33244da93\") " pod="kube-system/coredns-7db6d8ff4d-p2nr4" Mar 17 17:52:57.371814 kubelet[3440]: I0317 17:52:57.371654 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e281a91-d177-4300-addb-b2bd732a0970-config-volume\") pod \"coredns-7db6d8ff4d-jmr4r\" (UID: \"5e281a91-d177-4300-addb-b2bd732a0970\") " pod="kube-system/coredns-7db6d8ff4d-jmr4r" Mar 17 17:52:57.371814 kubelet[3440]: I0317 17:52:57.371746 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwnkd\" (UniqueName: \"kubernetes.io/projected/5e281a91-d177-4300-addb-b2bd732a0970-kube-api-access-gwnkd\") pod \"coredns-7db6d8ff4d-jmr4r\" (UID: \"5e281a91-d177-4300-addb-b2bd732a0970\") " pod="kube-system/coredns-7db6d8ff4d-jmr4r" Mar 17 17:52:57.586395 containerd[1770]: time="2025-03-17T17:52:57.585981035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jmr4r,Uid:5e281a91-d177-4300-addb-b2bd732a0970,Namespace:kube-system,Attempt:0,}" Mar 17 17:52:57.600306 containerd[1770]: time="2025-03-17T17:52:57.599947421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p2nr4,Uid:da23ac2b-59a2-4afa-a0ea-94d33244da93,Namespace:kube-system,Attempt:0,}" Mar 17 17:52:57.872485 containerd[1770]: time="2025-03-17T17:52:57.871820520Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:57.877905 containerd[1770]: time="2025-03-17T17:52:57.877697971Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:52:57.880841 containerd[1770]: time="2025-03-17T17:52:57.880782897Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:57.882361 containerd[1770]: time="2025-03-17T17:52:57.882220339Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.327112235s" Mar 17 17:52:57.882361 containerd[1770]: time="2025-03-17T17:52:57.882265259Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:52:57.885699 containerd[1770]: time="2025-03-17T17:52:57.885656546Z" level=info msg="CreateContainer within sandbox \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:52:57.923569 containerd[1770]: time="2025-03-17T17:52:57.923519775Z" level=info msg="CreateContainer within sandbox \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\"" Mar 17 17:52:57.924586 containerd[1770]: time="2025-03-17T17:52:57.924557457Z" level=info msg="StartContainer for \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\"" Mar 17 17:52:57.947525 systemd[1]: Started cri-containerd-afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a.scope - libcontainer container afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a. Mar 17 17:52:57.975371 kubelet[3440]: I0317 17:52:57.975119 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v6bv7" podStartSLOduration=7.698229285 podStartE2EDuration="13.97509987s" podCreationTimestamp="2025-03-17 17:52:44 +0000 UTC" firstStartedPulling="2025-03-17 17:52:45.278004079 +0000 UTC m=+13.549155577" lastFinishedPulling="2025-03-17 17:52:51.554874664 +0000 UTC m=+19.826026162" observedRunningTime="2025-03-17 17:52:57.973976028 +0000 UTC m=+26.245127526" watchObservedRunningTime="2025-03-17 17:52:57.97509987 +0000 UTC m=+26.246251368" Mar 17 17:52:57.990178 containerd[1770]: time="2025-03-17T17:52:57.990128498Z" level=info msg="StartContainer for \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\" returns successfully" Mar 17 17:53:02.108111 systemd-networkd[1515]: cilium_host: Link UP Mar 17 17:53:02.108217 systemd-networkd[1515]: cilium_net: Link UP Mar 17 17:53:02.108220 systemd-networkd[1515]: cilium_net: Gained carrier Mar 17 17:53:02.108402 systemd-networkd[1515]: cilium_host: Gained carrier Mar 17 17:53:02.264541 systemd-networkd[1515]: cilium_vxlan: Link UP Mar 17 17:53:02.264549 systemd-networkd[1515]: cilium_vxlan: Gained carrier Mar 17 17:53:02.539288 kernel: NET: Registered PF_ALG protocol family Mar 17 17:53:02.605392 systemd-networkd[1515]: cilium_host: Gained IPv6LL Mar 17 17:53:02.989406 systemd-networkd[1515]: cilium_net: Gained IPv6LL Mar 17 17:53:03.357516 systemd-networkd[1515]: lxc_health: Link UP Mar 17 17:53:03.366230 systemd-networkd[1515]: lxc_health: Gained carrier Mar 17 17:53:03.694817 systemd-networkd[1515]: lxc90468b608f4b: Link UP Mar 17 17:53:03.709289 kernel: eth0: renamed from tmp6dad0 Mar 17 17:53:03.718876 systemd-networkd[1515]: lxc90468b608f4b: Gained carrier Mar 17 17:53:03.728992 systemd-networkd[1515]: lxc03b9fe88ba4a: Link UP Mar 17 17:53:03.741290 kernel: eth0: renamed from tmp0e75a Mar 17 17:53:03.747026 systemd-networkd[1515]: lxc03b9fe88ba4a: Gained carrier Mar 17 17:53:04.141451 systemd-networkd[1515]: cilium_vxlan: Gained IPv6LL Mar 17 17:53:04.462381 systemd-networkd[1515]: lxc_health: Gained IPv6LL Mar 17 17:53:05.037409 systemd-networkd[1515]: lxc90468b608f4b: Gained IPv6LL Mar 17 17:53:05.185204 kubelet[3440]: I0317 17:53:05.184629 3440 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-fctkr" podStartSLOduration=7.820010997 podStartE2EDuration="20.184614614s" podCreationTimestamp="2025-03-17 17:52:45 +0000 UTC" firstStartedPulling="2025-03-17 17:52:45.518696524 +0000 UTC m=+13.789848022" lastFinishedPulling="2025-03-17 17:52:57.883300181 +0000 UTC m=+26.154451639" observedRunningTime="2025-03-17 17:52:58.967388493 +0000 UTC m=+27.238539991" watchObservedRunningTime="2025-03-17 17:53:05.184614614 +0000 UTC m=+33.455766112" Mar 17 17:53:05.357410 systemd-networkd[1515]: lxc03b9fe88ba4a: Gained IPv6LL Mar 17 17:53:07.375851 containerd[1770]: time="2025-03-17T17:53:07.375484452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:07.376637 containerd[1770]: time="2025-03-17T17:53:07.376306014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:07.376637 containerd[1770]: time="2025-03-17T17:53:07.376560414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:07.376797 containerd[1770]: time="2025-03-17T17:53:07.376756415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:07.410961 systemd[1]: Started cri-containerd-6dad08a9abefd906ef13e4e2fe38c93068b293351d6e32abd069953ebf210fe2.scope - libcontainer container 6dad08a9abefd906ef13e4e2fe38c93068b293351d6e32abd069953ebf210fe2. Mar 17 17:53:07.422759 containerd[1770]: time="2025-03-17T17:53:07.422376578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:07.422759 containerd[1770]: time="2025-03-17T17:53:07.422447898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:07.422759 containerd[1770]: time="2025-03-17T17:53:07.422463578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:07.423311 containerd[1770]: time="2025-03-17T17:53:07.422896579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:07.448457 systemd[1]: Started cri-containerd-0e75a477287fb1c54745b278416796dc68287d470bfc54df5dd65b6fde792ee9.scope - libcontainer container 0e75a477287fb1c54745b278416796dc68287d470bfc54df5dd65b6fde792ee9. 
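The cilium-v6bv7 startup entry at 17:52:57 makes the SLO arithmetic visible: podStartE2EDuration (13.975s) counts everything from podCreationTimestamp to the observed running time, while podStartSLOduration (7.698s) excludes the image-pull window bounded by firstStartedPulling and lastFinishedPulling. Checking that against the values logged above (nanoseconds truncated to the microseconds Python's datetime supports):

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    first_pull = datetime.strptime("2025-03-17 17:52:45.278004", fmt)  # firstStartedPulling
    last_pull  = datetime.strptime("2025-03-17 17:52:51.554874", fmt)  # lastFinishedPulling
    e2e = 13.97509987                                                  # podStartE2EDuration

    slo = e2e - (last_pull - first_pull).total_seconds()
    print(f"{slo:.6f}")
    # 7.698230 -- matching the logged podStartSLOduration=7.698229285
    # up to the truncated nanoseconds

For the static control-plane pods and the coredns pods earlier, both pull timestamps are the Go zero time (0001-01-01), so nothing is subtracted and the SLO figure equals the end-to-end one.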
Mar 17 17:53:07.484185 containerd[1770]: time="2025-03-17T17:53:07.484144651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jmr4r,Uid:5e281a91-d177-4300-addb-b2bd732a0970,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dad08a9abefd906ef13e4e2fe38c93068b293351d6e32abd069953ebf210fe2\"" Mar 17 17:53:07.488899 containerd[1770]: time="2025-03-17T17:53:07.488857179Z" level=info msg="CreateContainer within sandbox \"6dad08a9abefd906ef13e4e2fe38c93068b293351d6e32abd069953ebf210fe2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:53:07.507161 containerd[1770]: time="2025-03-17T17:53:07.506983132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p2nr4,Uid:da23ac2b-59a2-4afa-a0ea-94d33244da93,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e75a477287fb1c54745b278416796dc68287d470bfc54df5dd65b6fde792ee9\"" Mar 17 17:53:07.511473 containerd[1770]: time="2025-03-17T17:53:07.511443420Z" level=info msg="CreateContainer within sandbox \"0e75a477287fb1c54745b278416796dc68287d470bfc54df5dd65b6fde792ee9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:53:07.567549 containerd[1770]: time="2025-03-17T17:53:07.567476883Z" level=info msg="CreateContainer within sandbox \"6dad08a9abefd906ef13e4e2fe38c93068b293351d6e32abd069953ebf210fe2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7cd6bf433798d148ec2455f1924dd0e4578a1bfe69fe50321e230f550cdddb8\"" Mar 17 17:53:07.568334 containerd[1770]: time="2025-03-17T17:53:07.568297004Z" level=info msg="StartContainer for \"f7cd6bf433798d148ec2455f1924dd0e4578a1bfe69fe50321e230f550cdddb8\"" Mar 17 17:53:07.576135 containerd[1770]: time="2025-03-17T17:53:07.576095298Z" level=info msg="CreateContainer within sandbox \"0e75a477287fb1c54745b278416796dc68287d470bfc54df5dd65b6fde792ee9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b44d5cd806de48aecbb6b2d6c042f11dba4abd10d60977402c07df8ec5f15bb2\"" Mar 17 17:53:07.578352 containerd[1770]: time="2025-03-17T17:53:07.576929140Z" level=info msg="StartContainer for \"b44d5cd806de48aecbb6b2d6c042f11dba4abd10d60977402c07df8ec5f15bb2\"" Mar 17 17:53:07.592423 systemd[1]: Started cri-containerd-f7cd6bf433798d148ec2455f1924dd0e4578a1bfe69fe50321e230f550cdddb8.scope - libcontainer container f7cd6bf433798d148ec2455f1924dd0e4578a1bfe69fe50321e230f550cdddb8. Mar 17 17:53:07.607443 systemd[1]: Started cri-containerd-b44d5cd806de48aecbb6b2d6c042f11dba4abd10d60977402c07df8ec5f15bb2.scope - libcontainer container b44d5cd806de48aecbb6b2d6c042f11dba4abd10d60977402c07df8ec5f15bb2. 
Mar 17 17:53:07.632920 containerd[1770]: time="2025-03-17T17:53:07.632628482Z" level=info msg="StartContainer for \"f7cd6bf433798d148ec2455f1924dd0e4578a1bfe69fe50321e230f550cdddb8\" returns successfully" Mar 17 17:53:07.646038 containerd[1770]: time="2025-03-17T17:53:07.645918266Z" level=info msg="StartContainer for \"b44d5cd806de48aecbb6b2d6c042f11dba4abd10d60977402c07df8ec5f15bb2\" returns successfully" Mar 17 17:53:07.989972 kubelet[3440]: I0317 17:53:07.989785 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-p2nr4" podStartSLOduration=22.989769653 podStartE2EDuration="22.989769653s" podCreationTimestamp="2025-03-17 17:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:53:07.986486527 +0000 UTC m=+36.257638025" watchObservedRunningTime="2025-03-17 17:53:07.989769653 +0000 UTC m=+36.260921151" Mar 17 17:53:08.044007 kubelet[3440]: I0317 17:53:08.042669 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jmr4r" podStartSLOduration=23.04264947 podStartE2EDuration="23.04264947s" podCreationTimestamp="2025-03-17 17:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:53:08.013582777 +0000 UTC m=+36.284734275" watchObservedRunningTime="2025-03-17 17:53:08.04264947 +0000 UTC m=+36.313800968" Mar 17 17:53:32.705278 waagent[1970]: 2025-03-17T17:53:32.704508Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Mar 17 17:53:32.713732 waagent[1970]: 2025-03-17T17:53:32.713533Z INFO ExtHandler Mar 17 17:53:32.713732 waagent[1970]: 2025-03-17T17:53:32.713645Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1b7095cd-0fe7-400a-9f57-a1afd1ae25a9 eTag: 5641262380218313931 source: Fabric] Mar 17 17:53:32.714035 waagent[1970]: 2025-03-17T17:53:32.713984Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 17 17:53:32.714649 waagent[1970]: 2025-03-17T17:53:32.714599Z INFO ExtHandler Mar 17 17:53:32.714718 waagent[1970]: 2025-03-17T17:53:32.714688Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Mar 17 17:53:32.776619 waagent[1970]: 2025-03-17T17:53:32.776572Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 17:53:32.855315 waagent[1970]: 2025-03-17T17:53:32.854782Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E3CDC505828B4E7854870D074B05B9B45D07777A', 'hasPrivateKey': True} Mar 17 17:53:32.855315 waagent[1970]: 2025-03-17T17:53:32.855226Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E034D2A4A3C4844296EFF3F33786176FE5DCF845', 'hasPrivateKey': False} Mar 17 17:53:32.855763 waagent[1970]: 2025-03-17T17:53:32.855708Z INFO ExtHandler Fetch goal state completed Mar 17 17:53:32.856136 waagent[1970]: 2025-03-17T17:53:32.856061Z INFO ExtHandler ExtHandler Mar 17 17:53:32.856218 waagent[1970]: 2025-03-17T17:53:32.856182Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 301465f6-e418-4a8c-abec-cf26c467ee36 correlation 67c987f7-bbfe-4e5b-9960-55a34bb768d2 created: 2025-03-17T17:53:22.644005Z] Mar 17 17:53:32.856606 waagent[1970]: 2025-03-17T17:53:32.856560Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 17 17:53:32.857131 waagent[1970]: 2025-03-17T17:53:32.857093Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Mar 17 17:54:48.498587 systemd[1]: Started sshd@7-10.200.20.14:22-139.59.79.179:52855.service - OpenSSH per-connection server daemon (139.59.79.179:52855). Mar 17 17:54:48.754210 sshd[4831]: Connection closed by 139.59.79.179 port 52855 Mar 17 17:54:48.755074 systemd[1]: sshd@7-10.200.20.14:22-139.59.79.179:52855.service: Deactivated successfully. Mar 17 17:54:59.235528 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:52876.service - OpenSSH per-connection server daemon (10.200.16.10:52876). Mar 17 17:54:59.681335 sshd[4836]: Accepted publickey for core from 10.200.16.10 port 52876 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:54:59.682738 sshd-session[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:54:59.687370 systemd-logind[1736]: New session 10 of user core. Mar 17 17:54:59.694485 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:55:00.110447 sshd[4838]: Connection closed by 10.200.16.10 port 52876 Mar 17 17:55:00.109470 sshd-session[4836]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:00.112608 systemd-logind[1736]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:55:00.112784 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:52876.service: Deactivated successfully. Mar 17 17:55:00.115143 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:55:00.117224 systemd-logind[1736]: Removed session 10. Mar 17 17:55:05.201577 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:52878.service - OpenSSH per-connection server daemon (10.200.16.10:52878). Mar 17 17:55:05.685776 sshd[4851]: Accepted publickey for core from 10.200.16.10 port 52878 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:05.687205 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:05.692316 systemd-logind[1736]: New session 11 of user core. Mar 17 17:55:05.697462 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:55:06.110225 sshd[4853]: Connection closed by 10.200.16.10 port 52878 Mar 17 17:55:06.110064 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:06.112929 systemd-logind[1736]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:55:06.113311 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:52878.service: Deactivated successfully. Mar 17 17:55:06.115345 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:55:06.117128 systemd-logind[1736]: Removed session 11. Mar 17 17:55:11.205522 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:46206.service - OpenSSH per-connection server daemon (10.200.16.10:46206). Mar 17 17:55:11.694706 sshd[4866]: Accepted publickey for core from 10.200.16.10 port 46206 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:11.696000 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:11.700799 systemd-logind[1736]: New session 12 of user core. Mar 17 17:55:11.706418 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 17 17:55:12.098610 sshd[4868]: Connection closed by 10.200.16.10 port 46206 Mar 17 17:55:12.099479 sshd-session[4866]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:12.102407 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:46206.service: Deactivated successfully. Mar 17 17:55:12.104852 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:55:12.106743 systemd-logind[1736]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:55:12.107708 systemd-logind[1736]: Removed session 12. Mar 17 17:55:17.185547 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:46210.service - OpenSSH per-connection server daemon (10.200.16.10:46210). Mar 17 17:55:17.633174 sshd[4884]: Accepted publickey for core from 10.200.16.10 port 46210 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:17.634520 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:17.638989 systemd-logind[1736]: New session 13 of user core. Mar 17 17:55:17.646506 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:55:18.015554 sshd[4886]: Connection closed by 10.200.16.10 port 46210 Mar 17 17:55:18.014860 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:18.018721 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:46210.service: Deactivated successfully. Mar 17 17:55:18.020618 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:55:18.021829 systemd-logind[1736]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:55:18.022752 systemd-logind[1736]: Removed session 13. Mar 17 17:55:23.107524 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:48218.service - OpenSSH per-connection server daemon (10.200.16.10:48218). Mar 17 17:55:23.553876 sshd[4899]: Accepted publickey for core from 10.200.16.10 port 48218 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:23.555318 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:23.560564 systemd-logind[1736]: New session 14 of user core. Mar 17 17:55:23.567404 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:55:23.943936 sshd[4901]: Connection closed by 10.200.16.10 port 48218 Mar 17 17:55:23.943472 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:23.945927 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:48218.service: Deactivated successfully. Mar 17 17:55:23.947879 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:55:23.949885 systemd-logind[1736]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:55:23.951094 systemd-logind[1736]: Removed session 14. Mar 17 17:55:29.035508 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:46424.service - OpenSSH per-connection server daemon (10.200.16.10:46424). Mar 17 17:55:29.481392 sshd[4913]: Accepted publickey for core from 10.200.16.10 port 46424 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:29.482700 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:29.487489 systemd-logind[1736]: New session 15 of user core. Mar 17 17:55:29.492418 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 17 17:55:29.865196 sshd[4915]: Connection closed by 10.200.16.10 port 46424 Mar 17 17:55:29.865894 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:29.869488 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:46424.service: Deactivated successfully. Mar 17 17:55:29.872103 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:55:29.873297 systemd-logind[1736]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:55:29.874290 systemd-logind[1736]: Removed session 15. Mar 17 17:55:29.958516 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:46428.service - OpenSSH per-connection server daemon (10.200.16.10:46428). Mar 17 17:55:30.448056 sshd[4927]: Accepted publickey for core from 10.200.16.10 port 46428 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:30.449501 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:30.455414 systemd-logind[1736]: New session 16 of user core. Mar 17 17:55:30.458451 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:55:30.890419 sshd[4929]: Connection closed by 10.200.16.10 port 46428 Mar 17 17:55:30.891128 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:30.893715 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:46428.service: Deactivated successfully. Mar 17 17:55:30.895539 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:55:30.897858 systemd-logind[1736]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:55:30.898964 systemd-logind[1736]: Removed session 16. Mar 17 17:55:30.979498 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:46430.service - OpenSSH per-connection server daemon (10.200.16.10:46430). Mar 17 17:55:31.426175 sshd[4939]: Accepted publickey for core from 10.200.16.10 port 46430 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:31.427669 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:31.431521 systemd-logind[1736]: New session 17 of user core. Mar 17 17:55:31.442471 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:55:31.810848 sshd[4941]: Connection closed by 10.200.16.10 port 46430 Mar 17 17:55:31.810737 sshd-session[4939]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:31.814294 systemd-logind[1736]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:55:31.814905 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:46430.service: Deactivated successfully. Mar 17 17:55:31.816769 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:55:31.818023 systemd-logind[1736]: Removed session 17. Mar 17 17:55:36.892331 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:46432.service - OpenSSH per-connection server daemon (10.200.16.10:46432). Mar 17 17:55:37.340829 sshd[4955]: Accepted publickey for core from 10.200.16.10 port 46432 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:37.342172 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:37.346397 systemd-logind[1736]: New session 18 of user core. Mar 17 17:55:37.356350 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 17 17:55:37.726289 sshd[4957]: Connection closed by 10.200.16.10 port 46432 Mar 17 17:55:37.726849 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:37.730241 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:46432.service: Deactivated successfully. Mar 17 17:55:37.731982 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:55:37.733841 systemd-logind[1736]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:55:37.734746 systemd-logind[1736]: Removed session 18. Mar 17 17:55:42.824601 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:53358.service - OpenSSH per-connection server daemon (10.200.16.10:53358). Mar 17 17:55:43.313313 sshd[4969]: Accepted publickey for core from 10.200.16.10 port 53358 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:43.314663 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:43.319203 systemd-logind[1736]: New session 19 of user core. Mar 17 17:55:43.324418 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:55:43.720189 sshd[4971]: Connection closed by 10.200.16.10 port 53358 Mar 17 17:55:43.720838 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:43.723544 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:53358.service: Deactivated successfully. Mar 17 17:55:43.726554 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:55:43.728412 systemd-logind[1736]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:55:43.729844 systemd-logind[1736]: Removed session 19. Mar 17 17:55:43.824512 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:53364.service - OpenSSH per-connection server daemon (10.200.16.10:53364). Mar 17 17:55:44.271335 sshd[4982]: Accepted publickey for core from 10.200.16.10 port 53364 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:44.273918 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:44.279444 systemd-logind[1736]: New session 20 of user core. Mar 17 17:55:44.282421 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:55:44.697097 sshd[4984]: Connection closed by 10.200.16.10 port 53364 Mar 17 17:55:44.697692 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:44.701029 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:53364.service: Deactivated successfully. Mar 17 17:55:44.703756 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:55:44.705513 systemd-logind[1736]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:55:44.707125 systemd-logind[1736]: Removed session 20. Mar 17 17:55:44.794669 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:53366.service - OpenSSH per-connection server daemon (10.200.16.10:53366). Mar 17 17:55:45.277066 sshd[4993]: Accepted publickey for core from 10.200.16.10 port 53366 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:45.278492 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:45.282847 systemd-logind[1736]: New session 21 of user core. Mar 17 17:55:45.289467 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 17 17:55:47.083325 sshd[4995]: Connection closed by 10.200.16.10 port 53366 Mar 17 17:55:47.083992 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:47.087757 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:53366.service: Deactivated successfully. Mar 17 17:55:47.090927 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:55:47.091873 systemd-logind[1736]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:55:47.093231 systemd-logind[1736]: Removed session 21. Mar 17 17:55:47.184717 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:53372.service - OpenSSH per-connection server daemon (10.200.16.10:53372). Mar 17 17:55:47.669575 sshd[5014]: Accepted publickey for core from 10.200.16.10 port 53372 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:47.670497 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:47.675145 systemd-logind[1736]: New session 22 of user core. Mar 17 17:55:47.678425 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 17:55:48.187485 sshd[5016]: Connection closed by 10.200.16.10 port 53372 Mar 17 17:55:48.188089 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:48.191679 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:53372.service: Deactivated successfully. Mar 17 17:55:48.193891 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:55:48.194767 systemd-logind[1736]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:55:48.195831 systemd-logind[1736]: Removed session 22. Mar 17 17:55:48.279529 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:53376.service - OpenSSH per-connection server daemon (10.200.16.10:53376). Mar 17 17:55:48.724786 sshd[5026]: Accepted publickey for core from 10.200.16.10 port 53376 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:48.726097 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:48.730543 systemd-logind[1736]: New session 23 of user core. Mar 17 17:55:48.739562 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:55:49.105359 sshd[5028]: Connection closed by 10.200.16.10 port 53376 Mar 17 17:55:49.106213 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:49.108852 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:53376.service: Deactivated successfully. Mar 17 17:55:49.111110 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:55:49.112674 systemd-logind[1736]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:55:49.113949 systemd-logind[1736]: Removed session 23. Mar 17 17:55:54.212864 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:40924.service - OpenSSH per-connection server daemon (10.200.16.10:40924). Mar 17 17:55:54.695773 sshd[5043]: Accepted publickey for core from 10.200.16.10 port 40924 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:54.697104 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:54.702072 systemd-logind[1736]: New session 24 of user core. Mar 17 17:55:54.710426 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 17 17:55:55.116823 sshd[5045]: Connection closed by 10.200.16.10 port 40924 Mar 17 17:55:55.117371 sshd-session[5043]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:55.120723 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:40924.service: Deactivated successfully. Mar 17 17:55:55.122958 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:55:55.124090 systemd-logind[1736]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:55:55.125021 systemd-logind[1736]: Removed session 24. Mar 17 17:56:00.198735 systemd[1]: Started sshd@23-10.200.20.14:22-10.200.16.10:50628.service - OpenSSH per-connection server daemon (10.200.16.10:50628). Mar 17 17:56:00.651453 sshd[5057]: Accepted publickey for core from 10.200.16.10 port 50628 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:00.652916 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:00.658085 systemd-logind[1736]: New session 25 of user core. Mar 17 17:56:00.663415 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 17 17:56:01.032313 sshd[5059]: Connection closed by 10.200.16.10 port 50628 Mar 17 17:56:01.033115 sshd-session[5057]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:01.035605 systemd[1]: sshd@23-10.200.20.14:22-10.200.16.10:50628.service: Deactivated successfully. Mar 17 17:56:01.037659 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 17:56:01.039659 systemd-logind[1736]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:56:01.040947 systemd-logind[1736]: Removed session 25. Mar 17 17:56:06.124489 systemd[1]: Started sshd@24-10.200.20.14:22-10.200.16.10:50642.service - OpenSSH per-connection server daemon (10.200.16.10:50642). Mar 17 17:56:06.610352 sshd[5070]: Accepted publickey for core from 10.200.16.10 port 50642 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:06.611662 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:06.617186 systemd-logind[1736]: New session 26 of user core. Mar 17 17:56:06.624559 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 17 17:56:07.008944 sshd[5072]: Connection closed by 10.200.16.10 port 50642 Mar 17 17:56:07.009556 sshd-session[5070]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:07.012977 systemd[1]: sshd@24-10.200.20.14:22-10.200.16.10:50642.service: Deactivated successfully. Mar 17 17:56:07.014973 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:56:07.015970 systemd-logind[1736]: Session 26 logged out. Waiting for processes to exit. Mar 17 17:56:07.017078 systemd-logind[1736]: Removed session 26. Mar 17 17:56:07.095534 systemd[1]: Started sshd@25-10.200.20.14:22-10.200.16.10:50648.service - OpenSSH per-connection server daemon (10.200.16.10:50648). Mar 17 17:56:07.540556 sshd[5084]: Accepted publickey for core from 10.200.16.10 port 50648 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:07.541841 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:07.545948 systemd-logind[1736]: New session 27 of user core. Mar 17 17:56:07.556440 systemd[1]: Started session-27.scope - Session 27 of User core. 
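Sessions 10 through 27 above all follow the same shape: a socket-activated per-connection sshd@... unit starts, a publickey login for user core arrives from 10.200.16.10, a session scope runs for a fraction of a second to a few seconds, and the unit deactivates. To audit how long each of these recurring sessions actually lasted, one could pair the pam_unix open and close events by sshd-session PID; a sketch assuming the journal is available as one entry per line in the format shown above (short journal timestamps omit the year, which is fine for taking deltas within one day):

    import re
    from datetime import datetime

    STAMP = r"(\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d{6})"
    OPENED = re.compile(STAMP + r" sshd-session\[(\d+)\]: "
                        r"pam_unix\(sshd:session\): session opened")
    CLOSED = re.compile(STAMP + r" sshd-session\[(\d+)\]: "
                        r"pam_unix\(sshd:session\): session closed")

    def parse(stamp: str) -> datetime:
        # Year-less journal stamp; strptime defaults the year, deltas still work.
        return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

    def session_durations(lines):
        opened = {}
        for line in lines:
            if m := OPENED.search(line):
                opened[m.group(2)] = parse(m.group(1))
            elif m := CLOSED.search(line):
                start = opened.pop(m.group(2), None)
                if start is not None:
                    yield m.group(2), parse(m.group(1)) - start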
Mar 17 17:56:10.573568 containerd[1770]: time="2025-03-17T17:56:10.573520003Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:56:10.577643 containerd[1770]: time="2025-03-17T17:56:10.577596171Z" level=info msg="StopContainer for \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\" with timeout 30 (s)" Mar 17 17:56:10.579659 containerd[1770]: time="2025-03-17T17:56:10.579606814Z" level=info msg="Stop container \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\" with signal terminated" Mar 17 17:56:10.587372 containerd[1770]: time="2025-03-17T17:56:10.587225829Z" level=info msg="StopContainer for \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\" with timeout 2 (s)" Mar 17 17:56:10.587775 containerd[1770]: time="2025-03-17T17:56:10.587720470Z" level=info msg="Stop container \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\" with signal terminated" Mar 17 17:56:10.596111 systemd[1]: cri-containerd-afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a.scope: Deactivated successfully. Mar 17 17:56:10.598982 systemd-networkd[1515]: lxc_health: Link DOWN Mar 17 17:56:10.598988 systemd-networkd[1515]: lxc_health: Lost carrier Mar 17 17:56:10.618771 systemd[1]: cri-containerd-9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514.scope: Deactivated successfully. Mar 17 17:56:10.621341 systemd[1]: cri-containerd-9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514.scope: Consumed 6.494s CPU time, 126.7M memory peak, 144K read from disk, 12.9M written to disk. Mar 17 17:56:10.627524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a-rootfs.mount: Deactivated successfully. Mar 17 17:56:10.642684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514-rootfs.mount: Deactivated successfully. 
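The StopContainer entries above, with their "with timeout 30 (s)" / "with timeout 2 (s)" annotations and "signal terminated" follow-ups, reflect the usual graceful-shutdown contract: send SIGTERM, wait out the grace period, and only then escalate to SIGKILL. This is not containerd's actual implementation, just a generic sketch of the same pattern applied to a local process:

    import signal
    import subprocess

    def stop_gracefully(proc: subprocess.Popen, timeout: float) -> None:
        # Ask politely with SIGTERM first, mirroring "signal terminated" above.
        proc.send_signal(signal.SIGTERM)
        try:
            proc.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            proc.kill()  # grace period expired: escalate to SIGKILL
            proc.wait()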
Mar 17 17:56:10.696195 containerd[1770]: time="2025-03-17T17:56:10.695859558Z" level=info msg="shim disconnected" id=9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514 namespace=k8s.io Mar 17 17:56:10.696195 containerd[1770]: time="2025-03-17T17:56:10.695911238Z" level=warning msg="cleaning up after shim disconnected" id=9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514 namespace=k8s.io Mar 17 17:56:10.696195 containerd[1770]: time="2025-03-17T17:56:10.695924318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:10.697702 containerd[1770]: time="2025-03-17T17:56:10.697539761Z" level=info msg="shim disconnected" id=afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a namespace=k8s.io Mar 17 17:56:10.697702 containerd[1770]: time="2025-03-17T17:56:10.697577401Z" level=warning msg="cleaning up after shim disconnected" id=afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a namespace=k8s.io Mar 17 17:56:10.697702 containerd[1770]: time="2025-03-17T17:56:10.697586881Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:10.712097 containerd[1770]: time="2025-03-17T17:56:10.712032549Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:56:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:56:10.720007 containerd[1770]: time="2025-03-17T17:56:10.719967524Z" level=info msg="StopContainer for \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\" returns successfully" Mar 17 17:56:10.720734 containerd[1770]: time="2025-03-17T17:56:10.720682725Z" level=info msg="StopPodSandbox for \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\"" Mar 17 17:56:10.720967 containerd[1770]: time="2025-03-17T17:56:10.720714606Z" level=info msg="Container to stop \"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:10.720967 containerd[1770]: time="2025-03-17T17:56:10.720891166Z" level=info msg="Container to stop \"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:10.720967 containerd[1770]: time="2025-03-17T17:56:10.720903966Z" level=info msg="Container to stop \"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:10.720967 containerd[1770]: time="2025-03-17T17:56:10.720912006Z" level=info msg="Container to stop \"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:10.720967 containerd[1770]: time="2025-03-17T17:56:10.720921926Z" level=info msg="Container to stop \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:10.722942 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb-shm.mount: Deactivated successfully. 
Mar 17 17:56:10.725315 containerd[1770]: time="2025-03-17T17:56:10.724529813Z" level=info msg="StopContainer for \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\" returns successfully" Mar 17 17:56:10.725760 containerd[1770]: time="2025-03-17T17:56:10.725685295Z" level=info msg="StopPodSandbox for \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\"" Mar 17 17:56:10.726016 containerd[1770]: time="2025-03-17T17:56:10.725884455Z" level=info msg="Container to stop \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:10.727725 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f-shm.mount: Deactivated successfully. Mar 17 17:56:10.731466 systemd[1]: cri-containerd-d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb.scope: Deactivated successfully. Mar 17 17:56:10.738921 systemd[1]: cri-containerd-fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f.scope: Deactivated successfully. Mar 17 17:56:10.777110 containerd[1770]: time="2025-03-17T17:56:10.776942874Z" level=info msg="shim disconnected" id=fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f namespace=k8s.io Mar 17 17:56:10.777110 containerd[1770]: time="2025-03-17T17:56:10.776994874Z" level=warning msg="cleaning up after shim disconnected" id=fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f namespace=k8s.io Mar 17 17:56:10.777110 containerd[1770]: time="2025-03-17T17:56:10.777003194Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:10.777707 containerd[1770]: time="2025-03-17T17:56:10.777653755Z" level=info msg="shim disconnected" id=d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb namespace=k8s.io Mar 17 17:56:10.777707 containerd[1770]: time="2025-03-17T17:56:10.777698755Z" level=warning msg="cleaning up after shim disconnected" id=d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb namespace=k8s.io Mar 17 17:56:10.777786 containerd[1770]: time="2025-03-17T17:56:10.777707435Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:10.792105 containerd[1770]: time="2025-03-17T17:56:10.791651422Z" level=info msg="TearDown network for sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" successfully" Mar 17 17:56:10.792105 containerd[1770]: time="2025-03-17T17:56:10.791687782Z" level=info msg="StopPodSandbox for \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" returns successfully" Mar 17 17:56:10.792542 containerd[1770]: time="2025-03-17T17:56:10.792480343Z" level=info msg="TearDown network for sandbox \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\" successfully" Mar 17 17:56:10.792542 containerd[1770]: time="2025-03-17T17:56:10.792505623Z" level=info msg="StopPodSandbox for \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\" returns successfully" Mar 17 17:56:10.856701 kubelet[3440]: I0317 17:56:10.856236 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzm5z\" (UniqueName: \"kubernetes.io/projected/a5d41034-9ac4-4f77-b122-2b32312efd8e-kube-api-access-vzm5z\") pod \"a5d41034-9ac4-4f77-b122-2b32312efd8e\" (UID: \"a5d41034-9ac4-4f77-b122-2b32312efd8e\") " Mar 17 17:56:10.856701 kubelet[3440]: I0317 17:56:10.856305 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5d41034-9ac4-4f77-b122-2b32312efd8e-cilium-config-path\") pod \"a5d41034-9ac4-4f77-b122-2b32312efd8e\" (UID: \"a5d41034-9ac4-4f77-b122-2b32312efd8e\") " Mar 17 17:56:10.856701 kubelet[3440]: I0317 17:56:10.856325 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-config-path\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.856701 kubelet[3440]: I0317 17:56:10.856341 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-xtables-lock\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.856701 kubelet[3440]: I0317 17:56:10.856358 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-clustermesh-secrets\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.856701 kubelet[3440]: I0317 17:56:10.856373 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-host-proc-sys-kernel\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.857173 kubelet[3440]: I0317 17:56:10.856389 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-lib-modules\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.857173 kubelet[3440]: I0317 17:56:10.856409 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhwqf\" (UniqueName: \"kubernetes.io/projected/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-kube-api-access-dhwqf\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.857173 kubelet[3440]: I0317 17:56:10.856427 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-cgroup\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.857173 kubelet[3440]: I0317 17:56:10.856444 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-host-proc-sys-net\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.857173 kubelet[3440]: I0317 17:56:10.856461 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-bpf-maps\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.857173 kubelet[3440]: I0317 17:56:10.856474 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-etc-cni-netd\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.858480 kubelet[3440]: I0317 17:56:10.856492 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-run\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.858480 kubelet[3440]: I0317 17:56:10.856509 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cni-path\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.858480 kubelet[3440]: I0317 17:56:10.856525 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-hubble-tls\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.858480 kubelet[3440]: I0317 17:56:10.856540 3440 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-hostproc\") pod \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\" (UID: \"f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5\") " Mar 17 17:56:10.858480 kubelet[3440]: I0317 17:56:10.856599 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-hostproc" (OuterVolumeSpecName: "hostproc") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:10.859620 kubelet[3440]: I0317 17:56:10.859584 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d41034-9ac4-4f77-b122-2b32312efd8e-kube-api-access-vzm5z" (OuterVolumeSpecName: "kube-api-access-vzm5z") pod "a5d41034-9ac4-4f77-b122-2b32312efd8e" (UID: "a5d41034-9ac4-4f77-b122-2b32312efd8e"). InnerVolumeSpecName "kube-api-access-vzm5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:56:10.859755 kubelet[3440]: I0317 17:56:10.859640 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5d41034-9ac4-4f77-b122-2b32312efd8e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a5d41034-9ac4-4f77-b122-2b32312efd8e" (UID: "a5d41034-9ac4-4f77-b122-2b32312efd8e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:56:10.859800 kubelet[3440]: I0317 17:56:10.859765 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:10.859800 kubelet[3440]: I0317 17:56:10.859794 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:10.860509 kubelet[3440]: I0317 17:56:10.860409 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:10.860509 kubelet[3440]: I0317 17:56:10.860451 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:10.860763 kubelet[3440]: I0317 17:56:10.860617 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:10.860763 kubelet[3440]: I0317 17:56:10.860651 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:10.860763 kubelet[3440]: I0317 17:56:10.860682 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:10.860763 kubelet[3440]: I0317 17:56:10.860698 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:10.860763 kubelet[3440]: I0317 17:56:10.860721 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cni-path" (OuterVolumeSpecName: "cni-path") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:10.864084 kubelet[3440]: I0317 17:56:10.864034 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:56:10.865511 kubelet[3440]: I0317 17:56:10.865461 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:56:10.865511 kubelet[3440]: I0317 17:56:10.865467 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-kube-api-access-dhwqf" (OuterVolumeSpecName: "kube-api-access-dhwqf") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "kube-api-access-dhwqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:56:10.865698 kubelet[3440]: I0317 17:56:10.865671 3440 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" (UID: "f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:56:10.956828 kubelet[3440]: I0317 17:56:10.956780 3440 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-run\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957156 kubelet[3440]: I0317 17:56:10.956980 3440 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cni-path\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957156 kubelet[3440]: I0317 17:56:10.956994 3440 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-hubble-tls\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957156 kubelet[3440]: I0317 17:56:10.957002 3440 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-hostproc\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957156 kubelet[3440]: I0317 17:56:10.957011 3440 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vzm5z\" (UniqueName: \"kubernetes.io/projected/a5d41034-9ac4-4f77-b122-2b32312efd8e-kube-api-access-vzm5z\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957156 kubelet[3440]: I0317 17:56:10.957020 3440 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5d41034-9ac4-4f77-b122-2b32312efd8e-cilium-config-path\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957156 kubelet[3440]: I0317 17:56:10.957028 3440 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-config-path\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957156 kubelet[3440]: I0317 17:56:10.957037 3440 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-xtables-lock\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957156 kubelet[3440]: I0317 17:56:10.957045 3440 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-clustermesh-secrets\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957400 kubelet[3440]: I0317 17:56:10.957054 3440 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-host-proc-sys-kernel\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957400 kubelet[3440]: I0317 17:56:10.957087 3440 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-lib-modules\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957400 kubelet[3440]: I0317 17:56:10.957096 3440 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dhwqf\" (UniqueName: \"kubernetes.io/projected/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-kube-api-access-dhwqf\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957400 kubelet[3440]: I0317 
17:56:10.957115 3440 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-cilium-cgroup\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957400 kubelet[3440]: I0317 17:56:10.957125 3440 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-host-proc-sys-net\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957400 kubelet[3440]: I0317 17:56:10.957132 3440 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-bpf-maps\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:10.957400 kubelet[3440]: I0317 17:56:10.957142 3440 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5-etc-cni-netd\") on node \"ci-4230.1.0-a-3f2b416a0a\" DevicePath \"\"" Mar 17 17:56:11.304150 kubelet[3440]: I0317 17:56:11.304057 3440 scope.go:117] "RemoveContainer" containerID="afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a" Mar 17 17:56:11.310360 containerd[1770]: time="2025-03-17T17:56:11.310038538Z" level=info msg="RemoveContainer for \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\"" Mar 17 17:56:11.314784 systemd[1]: Removed slice kubepods-besteffort-poda5d41034_9ac4_4f77_b122_2b32312efd8e.slice - libcontainer container kubepods-besteffort-poda5d41034_9ac4_4f77_b122_2b32312efd8e.slice. Mar 17 17:56:11.322926 containerd[1770]: time="2025-03-17T17:56:11.322786602Z" level=info msg="RemoveContainer for \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\" returns successfully" Mar 17 17:56:11.323205 kubelet[3440]: I0317 17:56:11.323188 3440 scope.go:117] "RemoveContainer" containerID="afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a" Mar 17 17:56:11.324314 containerd[1770]: time="2025-03-17T17:56:11.324273285Z" level=error msg="ContainerStatus for \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\": not found" Mar 17 17:56:11.325323 kubelet[3440]: E0317 17:56:11.324671 3440 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\": not found" containerID="afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a" Mar 17 17:56:11.325323 kubelet[3440]: I0317 17:56:11.324701 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a"} err="failed to get container status \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\": rpc error: code = NotFound desc = an error occurred when try to find container \"afb83efc5cb32ab1061bbb6bbd432979abf313d5aefd156f9cb621e2ca5ce56a\": not found" Mar 17 17:56:11.325323 kubelet[3440]: I0317 17:56:11.324783 3440 scope.go:117] "RemoveContainer" containerID="9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514" Mar 17 17:56:11.326851 systemd[1]: Removed slice 
kubepods-burstable-podf6540c1c_eef8_4eb8_ab0d_b36d01e78ac5.slice - libcontainer container kubepods-burstable-podf6540c1c_eef8_4eb8_ab0d_b36d01e78ac5.slice. Mar 17 17:56:11.327195 systemd[1]: kubepods-burstable-podf6540c1c_eef8_4eb8_ab0d_b36d01e78ac5.slice: Consumed 6.561s CPU time, 127.1M memory peak, 144K read from disk, 12.9M written to disk. Mar 17 17:56:11.329305 containerd[1770]: time="2025-03-17T17:56:11.329180214Z" level=info msg="RemoveContainer for \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\"" Mar 17 17:56:11.346945 containerd[1770]: time="2025-03-17T17:56:11.346904488Z" level=info msg="RemoveContainer for \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\" returns successfully" Mar 17 17:56:11.347204 kubelet[3440]: I0317 17:56:11.347131 3440 scope.go:117] "RemoveContainer" containerID="689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db" Mar 17 17:56:11.349364 containerd[1770]: time="2025-03-17T17:56:11.348954372Z" level=info msg="RemoveContainer for \"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db\"" Mar 17 17:56:11.361486 containerd[1770]: time="2025-03-17T17:56:11.360523635Z" level=info msg="RemoveContainer for \"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db\" returns successfully" Mar 17 17:56:11.361649 kubelet[3440]: I0317 17:56:11.360911 3440 scope.go:117] "RemoveContainer" containerID="c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f" Mar 17 17:56:11.363717 containerd[1770]: time="2025-03-17T17:56:11.363468440Z" level=info msg="RemoveContainer for \"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f\"" Mar 17 17:56:11.374626 containerd[1770]: time="2025-03-17T17:56:11.374509421Z" level=info msg="RemoveContainer for \"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f\" returns successfully" Mar 17 17:56:11.374772 kubelet[3440]: I0317 17:56:11.374740 3440 scope.go:117] "RemoveContainer" containerID="023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4" Mar 17 17:56:11.375888 containerd[1770]: time="2025-03-17T17:56:11.375787104Z" level=info msg="RemoveContainer for \"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4\"" Mar 17 17:56:11.388921 containerd[1770]: time="2025-03-17T17:56:11.388878489Z" level=info msg="RemoveContainer for \"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4\" returns successfully" Mar 17 17:56:11.389273 kubelet[3440]: I0317 17:56:11.389138 3440 scope.go:117] "RemoveContainer" containerID="978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b" Mar 17 17:56:11.390562 containerd[1770]: time="2025-03-17T17:56:11.390535172Z" level=info msg="RemoveContainer for \"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b\"" Mar 17 17:56:11.413311 containerd[1770]: time="2025-03-17T17:56:11.413240096Z" level=info msg="RemoveContainer for \"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b\" returns successfully" Mar 17 17:56:11.413539 kubelet[3440]: I0317 17:56:11.413512 3440 scope.go:117] "RemoveContainer" containerID="9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514" Mar 17 17:56:11.413965 containerd[1770]: time="2025-03-17T17:56:11.413884337Z" level=error msg="ContainerStatus for \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\": not found" Mar 17 
17:56:11.414056 kubelet[3440]: E0317 17:56:11.413994 3440 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\": not found" containerID="9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514" Mar 17 17:56:11.414056 kubelet[3440]: I0317 17:56:11.414018 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514"} err="failed to get container status \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\": rpc error: code = NotFound desc = an error occurred when try to find container \"9873da993223fa597088a037ae2db02f9b667f648631a1e0f99f4caf37d4e514\": not found" Mar 17 17:56:11.414056 kubelet[3440]: I0317 17:56:11.414038 3440 scope.go:117] "RemoveContainer" containerID="689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db" Mar 17 17:56:11.414384 containerd[1770]: time="2025-03-17T17:56:11.414296338Z" level=error msg="ContainerStatus for \"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db\": not found" Mar 17 17:56:11.414447 kubelet[3440]: E0317 17:56:11.414400 3440 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db\": not found" containerID="689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db" Mar 17 17:56:11.414447 kubelet[3440]: I0317 17:56:11.414421 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db"} err="failed to get container status \"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db\": rpc error: code = NotFound desc = an error occurred when try to find container \"689f9e454bbd5c3f2de0b571f7a8e24ae8299411e3d270f567025bf57f5404db\": not found" Mar 17 17:56:11.414447 kubelet[3440]: I0317 17:56:11.414433 3440 scope.go:117] "RemoveContainer" containerID="c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f" Mar 17 17:56:11.414794 kubelet[3440]: E0317 17:56:11.414663 3440 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f\": not found" containerID="c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f" Mar 17 17:56:11.414794 kubelet[3440]: I0317 17:56:11.414678 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f"} err="failed to get container status \"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f\": not found" Mar 17 17:56:11.414794 kubelet[3440]: I0317 17:56:11.414691 3440 scope.go:117] "RemoveContainer" containerID="023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4" Mar 17 17:56:11.414794 kubelet[3440]: 
E0317 17:56:11.414999 3440 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4\": not found" containerID="023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4" Mar 17 17:56:11.414794 kubelet[3440]: I0317 17:56:11.415017 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4"} err="failed to get container status \"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4\": not found" Mar 17 17:56:11.414794 kubelet[3440]: I0317 17:56:11.415030 3440 scope.go:117] "RemoveContainer" containerID="978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b" Mar 17 17:56:11.415244 containerd[1770]: time="2025-03-17T17:56:11.414568218Z" level=error msg="ContainerStatus for \"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c950cbb00c871214230ff9dc9c95411753156b59eac568e7fbaad25ae27ffa8f\": not found" Mar 17 17:56:11.415244 containerd[1770]: time="2025-03-17T17:56:11.414894259Z" level=error msg="ContainerStatus for \"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"023d685709989121514357843ebd4eaec9501bc3470b6b13255cba6a650758a4\": not found" Mar 17 17:56:11.415656 containerd[1770]: time="2025-03-17T17:56:11.415481660Z" level=error msg="ContainerStatus for \"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b\": not found" Mar 17 17:56:11.415719 kubelet[3440]: E0317 17:56:11.415590 3440 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b\": not found" containerID="978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b" Mar 17 17:56:11.415719 kubelet[3440]: I0317 17:56:11.415608 3440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b"} err="failed to get container status \"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b\": rpc error: code = NotFound desc = an error occurred when try to find container \"978596ed5b42cac14bd7e9352ee1c13291b098dd5d9b2645fabf2c0bab46819b\": not found" Mar 17 17:56:11.562276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f-rootfs.mount: Deactivated successfully. Mar 17 17:56:11.562370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb-rootfs.mount: Deactivated successfully. 
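The burst of ContainerStatus NotFound errors just above is benign: each one follows a successful RemoveContainer for the same id, so kubelet is probing a container it has itself just deleted, and the runtime correctly reports it gone. The underlying pattern is idempotent deletion, sketched here with a hypothetical runtime client (NotFoundError, remove, and status are stand-ins, not a real API):

    class NotFoundError(Exception):
        """Stand-in for the gRPC NotFound status seen in the log."""

    def ensure_removed(runtime, container_id: str) -> None:
        # `runtime` is a hypothetical client with remove()/status() methods.
        try:
            runtime.remove(container_id)
        except NotFoundError:
            pass  # already gone: removal is idempotent
        try:
            runtime.status(container_id)
        except NotFoundError:
            return  # expected terminal state: the container no longer exists
        raise RuntimeError(f"container {container_id} still present after remove")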
Mar 17 17:56:11.562433 systemd[1]: var-lib-kubelet-pods-a5d41034\x2d9ac4\x2d4f77\x2db122\x2d2b32312efd8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvzm5z.mount: Deactivated successfully. Mar 17 17:56:11.562495 systemd[1]: var-lib-kubelet-pods-f6540c1c\x2deef8\x2d4eb8\x2dab0d\x2db36d01e78ac5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddhwqf.mount: Deactivated successfully. Mar 17 17:56:11.562550 systemd[1]: var-lib-kubelet-pods-f6540c1c\x2deef8\x2d4eb8\x2dab0d\x2db36d01e78ac5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:56:11.562602 systemd[1]: var-lib-kubelet-pods-f6540c1c\x2deef8\x2d4eb8\x2dab0d\x2db36d01e78ac5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:56:11.827203 kubelet[3440]: I0317 17:56:11.827057 3440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d41034-9ac4-4f77-b122-2b32312efd8e" path="/var/lib/kubelet/pods/a5d41034-9ac4-4f77-b122-2b32312efd8e/volumes" Mar 17 17:56:11.827479 kubelet[3440]: I0317 17:56:11.827457 3440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" path="/var/lib/kubelet/pods/f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5/volumes" Mar 17 17:56:11.965064 kubelet[3440]: E0317 17:56:11.964991 3440 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:56:12.578776 sshd[5086]: Connection closed by 10.200.16.10 port 50648 Mar 17 17:56:12.578321 sshd-session[5084]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:12.581662 systemd-logind[1736]: Session 27 logged out. Waiting for processes to exit. Mar 17 17:56:12.582243 systemd[1]: sshd@25-10.200.20.14:22-10.200.16.10:50648.service: Deactivated successfully. Mar 17 17:56:12.584433 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 17:56:12.584641 systemd[1]: session-27.scope: Consumed 2.137s CPU time, 23.6M memory peak. Mar 17 17:56:12.586529 systemd-logind[1736]: Removed session 27. Mar 17 17:56:12.672521 systemd[1]: Started sshd@26-10.200.20.14:22-10.200.16.10:48790.service - OpenSSH per-connection server daemon (10.200.16.10:48790). Mar 17 17:56:13.161835 sshd[5246]: Accepted publickey for core from 10.200.16.10 port 48790 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:13.163185 sshd-session[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:13.167466 systemd-logind[1736]: New session 28 of user core. Mar 17 17:56:13.178440 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 17 17:56:15.082025 kubelet[3440]: I0317 17:56:15.081950 3440 topology_manager.go:215] "Topology Admit Handler" podUID="65134381-7940-41a3-a28d-0fed819ff467" podNamespace="kube-system" podName="cilium-cwwv6" Mar 17 17:56:15.082025 kubelet[3440]: E0317 17:56:15.082018 3440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" containerName="clean-cilium-state" Mar 17 17:56:15.082025 kubelet[3440]: E0317 17:56:15.082029 3440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" containerName="cilium-agent" Mar 17 17:56:15.082025 kubelet[3440]: E0317 17:56:15.082037 3440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" containerName="mount-cgroup" Mar 17 17:56:15.087877 kubelet[3440]: E0317 17:56:15.082043 3440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" containerName="apply-sysctl-overwrites" Mar 17 17:56:15.087877 kubelet[3440]: E0317 17:56:15.082050 3440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" containerName="mount-bpf-fs" Mar 17 17:56:15.087877 kubelet[3440]: E0317 17:56:15.082056 3440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5d41034-9ac4-4f77-b122-2b32312efd8e" containerName="cilium-operator" Mar 17 17:56:15.087877 kubelet[3440]: I0317 17:56:15.082079 3440 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6540c1c-eef8-4eb8-ab0d-b36d01e78ac5" containerName="cilium-agent" Mar 17 17:56:15.087877 kubelet[3440]: I0317 17:56:15.082086 3440 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d41034-9ac4-4f77-b122-2b32312efd8e" containerName="cilium-operator" Mar 17 17:56:15.090826 systemd[1]: Created slice kubepods-burstable-pod65134381_7940_41a3_a28d_0fed819ff467.slice - libcontainer container kubepods-burstable-pod65134381_7940_41a3_a28d_0fed819ff467.slice. Mar 17 17:56:15.098381 sshd[5248]: Connection closed by 10.200.16.10 port 48790 Mar 17 17:56:15.099571 sshd-session[5246]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:15.102773 systemd[1]: sshd@26-10.200.20.14:22-10.200.16.10:48790.service: Deactivated successfully. Mar 17 17:56:15.106169 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 17:56:15.106471 systemd[1]: session-28.scope: Consumed 1.509s CPU time, 23.8M memory peak. Mar 17 17:56:15.107795 systemd-logind[1736]: Session 28 logged out. Waiting for processes to exit. Mar 17 17:56:15.108854 systemd-logind[1736]: Removed session 28. Mar 17 17:56:15.179460 systemd[1]: Started sshd@27-10.200.20.14:22-10.200.16.10:48798.service - OpenSSH per-connection server daemon (10.200.16.10:48798). 
Mar 17 17:56:15.277715 kubelet[3440]: I0317 17:56:15.277660 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65134381-7940-41a3-a28d-0fed819ff467-cilium-run\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.277715 kubelet[3440]: I0317 17:56:15.277707 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65134381-7940-41a3-a28d-0fed819ff467-host-proc-sys-net\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278065 kubelet[3440]: I0317 17:56:15.277728 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgb6x\" (UniqueName: \"kubernetes.io/projected/65134381-7940-41a3-a28d-0fed819ff467-kube-api-access-bgb6x\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278065 kubelet[3440]: I0317 17:56:15.277746 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65134381-7940-41a3-a28d-0fed819ff467-xtables-lock\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278065 kubelet[3440]: I0317 17:56:15.277764 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65134381-7940-41a3-a28d-0fed819ff467-cilium-config-path\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278065 kubelet[3440]: I0317 17:56:15.277782 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65134381-7940-41a3-a28d-0fed819ff467-bpf-maps\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278065 kubelet[3440]: I0317 17:56:15.277796 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65134381-7940-41a3-a28d-0fed819ff467-cni-path\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278065 kubelet[3440]: I0317 17:56:15.277814 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65134381-7940-41a3-a28d-0fed819ff467-hostproc\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278213 kubelet[3440]: I0317 17:56:15.277830 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65134381-7940-41a3-a28d-0fed819ff467-etc-cni-netd\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278213 kubelet[3440]: I0317 17:56:15.277861 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/65134381-7940-41a3-a28d-0fed819ff467-clustermesh-secrets\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278213 kubelet[3440]: I0317 17:56:15.277885 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65134381-7940-41a3-a28d-0fed819ff467-host-proc-sys-kernel\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278213 kubelet[3440]: I0317 17:56:15.277922 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65134381-7940-41a3-a28d-0fed819ff467-lib-modules\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278213 kubelet[3440]: I0317 17:56:15.277942 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/65134381-7940-41a3-a28d-0fed819ff467-cilium-ipsec-secrets\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278354 kubelet[3440]: I0317 17:56:15.277971 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65134381-7940-41a3-a28d-0fed819ff467-cilium-cgroup\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.278354 kubelet[3440]: I0317 17:56:15.278010 3440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65134381-7940-41a3-a28d-0fed819ff467-hubble-tls\") pod \"cilium-cwwv6\" (UID: \"65134381-7940-41a3-a28d-0fed819ff467\") " pod="kube-system/cilium-cwwv6" Mar 17 17:56:15.397017 containerd[1770]: time="2025-03-17T17:56:15.396452708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cwwv6,Uid:65134381-7940-41a3-a28d-0fed819ff467,Namespace:kube-system,Attempt:0,}" Mar 17 17:56:15.442454 containerd[1770]: time="2025-03-17T17:56:15.442371836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:15.442794 containerd[1770]: time="2025-03-17T17:56:15.442757237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:15.443018 containerd[1770]: time="2025-03-17T17:56:15.442989117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:15.443282 containerd[1770]: time="2025-03-17T17:56:15.443187278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:15.462448 systemd[1]: Started cri-containerd-dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356.scope - libcontainer container dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356. 
Mar 17 17:56:15.486898 containerd[1770]: time="2025-03-17T17:56:15.486838122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cwwv6,Uid:65134381-7940-41a3-a28d-0fed819ff467,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356\"" Mar 17 17:56:15.490655 containerd[1770]: time="2025-03-17T17:56:15.490574169Z" level=info msg="CreateContainer within sandbox \"dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:56:15.532642 containerd[1770]: time="2025-03-17T17:56:15.532589409Z" level=info msg="CreateContainer within sandbox \"dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5a2930718799add6e438c3b7e1a4f01872ac43a3ee0ab2e21d645f35e14a121b\"" Mar 17 17:56:15.534370 containerd[1770]: time="2025-03-17T17:56:15.533099050Z" level=info msg="StartContainer for \"5a2930718799add6e438c3b7e1a4f01872ac43a3ee0ab2e21d645f35e14a121b\"" Mar 17 17:56:15.557547 systemd[1]: Started cri-containerd-5a2930718799add6e438c3b7e1a4f01872ac43a3ee0ab2e21d645f35e14a121b.scope - libcontainer container 5a2930718799add6e438c3b7e1a4f01872ac43a3ee0ab2e21d645f35e14a121b. Mar 17 17:56:15.586303 containerd[1770]: time="2025-03-17T17:56:15.586241513Z" level=info msg="StartContainer for \"5a2930718799add6e438c3b7e1a4f01872ac43a3ee0ab2e21d645f35e14a121b\" returns successfully" Mar 17 17:56:15.590554 systemd[1]: cri-containerd-5a2930718799add6e438c3b7e1a4f01872ac43a3ee0ab2e21d645f35e14a121b.scope: Deactivated successfully. Mar 17 17:56:15.630316 sshd[5261]: Accepted publickey for core from 10.200.16.10 port 48798 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:15.630952 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:15.635989 systemd-logind[1736]: New session 29 of user core. Mar 17 17:56:15.643408 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 17 17:56:15.672499 containerd[1770]: time="2025-03-17T17:56:15.672172638Z" level=info msg="shim disconnected" id=5a2930718799add6e438c3b7e1a4f01872ac43a3ee0ab2e21d645f35e14a121b namespace=k8s.io Mar 17 17:56:15.672499 containerd[1770]: time="2025-03-17T17:56:15.672309638Z" level=warning msg="cleaning up after shim disconnected" id=5a2930718799add6e438c3b7e1a4f01872ac43a3ee0ab2e21d645f35e14a121b namespace=k8s.io Mar 17 17:56:15.672499 containerd[1770]: time="2025-03-17T17:56:15.672318078Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:15.949508 sshd[5358]: Connection closed by 10.200.16.10 port 48798 Mar 17 17:56:15.949355 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:15.952378 systemd-logind[1736]: Session 29 logged out. Waiting for processes to exit. Mar 17 17:56:15.952554 systemd[1]: sshd@27-10.200.20.14:22-10.200.16.10:48798.service: Deactivated successfully. Mar 17 17:56:15.954588 systemd[1]: session-29.scope: Deactivated successfully. Mar 17 17:56:15.956620 systemd-logind[1736]: Removed session 29. Mar 17 17:56:16.034488 systemd[1]: Started sshd@28-10.200.20.14:22-10.200.16.10:48808.service - OpenSSH per-connection server daemon (10.200.16.10:48808). 
Mar 17 17:56:16.218181 kubelet[3440]: I0317 17:56:16.218020 3440 setters.go:580] "Node became not ready" node="ci-4230.1.0-a-3f2b416a0a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:56:16Z","lastTransitionTime":"2025-03-17T17:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 17:56:16.333341 containerd[1770]: time="2025-03-17T17:56:16.331980680Z" level=info msg="CreateContainer within sandbox \"dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:56:16.370131 containerd[1770]: time="2025-03-17T17:56:16.370079189Z" level=info msg="CreateContainer within sandbox \"dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"64eaff112aa1d5ab6102eafad646c790c12778032ae4c56166c5eb2b25dabcff\"" Mar 17 17:56:16.371665 containerd[1770]: time="2025-03-17T17:56:16.370755630Z" level=info msg="StartContainer for \"64eaff112aa1d5ab6102eafad646c790c12778032ae4c56166c5eb2b25dabcff\"" Mar 17 17:56:16.401419 systemd[1]: Started cri-containerd-64eaff112aa1d5ab6102eafad646c790c12778032ae4c56166c5eb2b25dabcff.scope - libcontainer container 64eaff112aa1d5ab6102eafad646c790c12778032ae4c56166c5eb2b25dabcff. Mar 17 17:56:16.431576 containerd[1770]: time="2025-03-17T17:56:16.431526660Z" level=info msg="StartContainer for \"64eaff112aa1d5ab6102eafad646c790c12778032ae4c56166c5eb2b25dabcff\" returns successfully" Mar 17 17:56:16.432214 systemd[1]: cri-containerd-64eaff112aa1d5ab6102eafad646c790c12778032ae4c56166c5eb2b25dabcff.scope: Deactivated successfully. Mar 17 17:56:16.451379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64eaff112aa1d5ab6102eafad646c790c12778032ae4c56166c5eb2b25dabcff-rootfs.mount: Deactivated successfully. Mar 17 17:56:16.480532 containerd[1770]: time="2025-03-17T17:56:16.480056187Z" level=info msg="shim disconnected" id=64eaff112aa1d5ab6102eafad646c790c12778032ae4c56166c5eb2b25dabcff namespace=k8s.io Mar 17 17:56:16.480532 containerd[1770]: time="2025-03-17T17:56:16.480163547Z" level=warning msg="cleaning up after shim disconnected" id=64eaff112aa1d5ab6102eafad646c790c12778032ae4c56166c5eb2b25dabcff namespace=k8s.io Mar 17 17:56:16.480532 containerd[1770]: time="2025-03-17T17:56:16.480176467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:16.482002 sshd[5377]: Accepted publickey for core from 10.200.16.10 port 48808 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:16.484891 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:16.494585 systemd-logind[1736]: New session 30 of user core. Mar 17 17:56:16.497417 systemd[1]: Started session-30.scope - Session 30 of User core. 
Mar 17 17:56:16.966108 kubelet[3440]: E0317 17:56:16.965930 3440 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:56:17.334881 containerd[1770]: time="2025-03-17T17:56:17.333886889Z" level=info msg="CreateContainer within sandbox \"dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:56:17.373318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987183479.mount: Deactivated successfully. Mar 17 17:56:17.385774 containerd[1770]: time="2025-03-17T17:56:17.385680023Z" level=info msg="CreateContainer within sandbox \"dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7f762e9be40b8b212184a7531e4f961d6c6fc9e37e02cab2de536a723a58fee0\"" Mar 17 17:56:17.386385 containerd[1770]: time="2025-03-17T17:56:17.386206104Z" level=info msg="StartContainer for \"7f762e9be40b8b212184a7531e4f961d6c6fc9e37e02cab2de536a723a58fee0\"" Mar 17 17:56:17.416444 systemd[1]: Started cri-containerd-7f762e9be40b8b212184a7531e4f961d6c6fc9e37e02cab2de536a723a58fee0.scope - libcontainer container 7f762e9be40b8b212184a7531e4f961d6c6fc9e37e02cab2de536a723a58fee0. Mar 17 17:56:17.444292 systemd[1]: cri-containerd-7f762e9be40b8b212184a7531e4f961d6c6fc9e37e02cab2de536a723a58fee0.scope: Deactivated successfully. Mar 17 17:56:17.456010 containerd[1770]: time="2025-03-17T17:56:17.455667949Z" level=info msg="StartContainer for \"7f762e9be40b8b212184a7531e4f961d6c6fc9e37e02cab2de536a723a58fee0\" returns successfully" Mar 17 17:56:17.473426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f762e9be40b8b212184a7531e4f961d6c6fc9e37e02cab2de536a723a58fee0-rootfs.mount: Deactivated successfully. Mar 17 17:56:17.499190 containerd[1770]: time="2025-03-17T17:56:17.499097627Z" level=info msg="shim disconnected" id=7f762e9be40b8b212184a7531e4f961d6c6fc9e37e02cab2de536a723a58fee0 namespace=k8s.io Mar 17 17:56:17.499190 containerd[1770]: time="2025-03-17T17:56:17.499184748Z" level=warning msg="cleaning up after shim disconnected" id=7f762e9be40b8b212184a7531e4f961d6c6fc9e37e02cab2de536a723a58fee0 namespace=k8s.io Mar 17 17:56:17.499190 containerd[1770]: time="2025-03-17T17:56:17.499195508Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:18.337638 containerd[1770]: time="2025-03-17T17:56:18.337506581Z" level=info msg="CreateContainer within sandbox \"dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:56:18.380723 containerd[1770]: time="2025-03-17T17:56:18.380635059Z" level=info msg="CreateContainer within sandbox \"dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d33c838d057602f791952e176feda84109cacff621440be52f1700dd0eb95747\"" Mar 17 17:56:18.381455 containerd[1770]: time="2025-03-17T17:56:18.381412221Z" level=info msg="StartContainer for \"d33c838d057602f791952e176feda84109cacff621440be52f1700dd0eb95747\"" Mar 17 17:56:18.415420 systemd[1]: Started cri-containerd-d33c838d057602f791952e176feda84109cacff621440be52f1700dd0eb95747.scope - libcontainer container d33c838d057602f791952e176feda84109cacff621440be52f1700dd0eb95747. 
Mar 17 17:56:18.438167 systemd[1]: cri-containerd-d33c838d057602f791952e176feda84109cacff621440be52f1700dd0eb95747.scope: Deactivated successfully. Mar 17 17:56:18.443242 containerd[1770]: time="2025-03-17T17:56:18.443201852Z" level=info msg="StartContainer for \"d33c838d057602f791952e176feda84109cacff621440be52f1700dd0eb95747\" returns successfully" Mar 17 17:56:18.460117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d33c838d057602f791952e176feda84109cacff621440be52f1700dd0eb95747-rootfs.mount: Deactivated successfully. Mar 17 17:56:18.476600 containerd[1770]: time="2025-03-17T17:56:18.476518872Z" level=info msg="shim disconnected" id=d33c838d057602f791952e176feda84109cacff621440be52f1700dd0eb95747 namespace=k8s.io Mar 17 17:56:18.477117 containerd[1770]: time="2025-03-17T17:56:18.476648233Z" level=warning msg="cleaning up after shim disconnected" id=d33c838d057602f791952e176feda84109cacff621440be52f1700dd0eb95747 namespace=k8s.io Mar 17 17:56:18.477117 containerd[1770]: time="2025-03-17T17:56:18.476659753Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:19.342206 containerd[1770]: time="2025-03-17T17:56:19.341349714Z" level=info msg="CreateContainer within sandbox \"dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:56:19.386670 containerd[1770]: time="2025-03-17T17:56:19.386623236Z" level=info msg="CreateContainer within sandbox \"dd50a9e9bc8e420909321e8fe735b10cbee03f71c139847a2281738f23a84356\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"117c81471f66fa3cb43e4680659f7ff9f4715535c19cedbe16f55ce55f95c378\"" Mar 17 17:56:19.388504 containerd[1770]: time="2025-03-17T17:56:19.387528397Z" level=info msg="StartContainer for \"117c81471f66fa3cb43e4680659f7ff9f4715535c19cedbe16f55ce55f95c378\"" Mar 17 17:56:19.416734 systemd[1]: run-containerd-runc-k8s.io-117c81471f66fa3cb43e4680659f7ff9f4715535c19cedbe16f55ce55f95c378-runc.r7lF9z.mount: Deactivated successfully. Mar 17 17:56:19.430575 systemd[1]: Started cri-containerd-117c81471f66fa3cb43e4680659f7ff9f4715535c19cedbe16f55ce55f95c378.scope - libcontainer container 117c81471f66fa3cb43e4680659f7ff9f4715535c19cedbe16f55ce55f95c378. 
Mar 17 17:56:19.459962 containerd[1770]: time="2025-03-17T17:56:19.459630568Z" level=info msg="StartContainer for \"117c81471f66fa3cb43e4680659f7ff9f4715535c19cedbe16f55ce55f95c378\" returns successfully" Mar 17 17:56:19.981292 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Mar 17 17:56:20.361964 kubelet[3440]: I0317 17:56:20.361787 3440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cwwv6" podStartSLOduration=5.361769157 podStartE2EDuration="5.361769157s" podCreationTimestamp="2025-03-17 17:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:20.361753477 +0000 UTC m=+228.632905015" watchObservedRunningTime="2025-03-17 17:56:20.361769157 +0000 UTC m=+228.632920615" Mar 17 17:56:22.636023 systemd-networkd[1515]: lxc_health: Link UP Mar 17 17:56:22.647433 systemd-networkd[1515]: lxc_health: Gained carrier Mar 17 17:56:24.590395 systemd-networkd[1515]: lxc_health: Gained IPv6LL Mar 17 17:56:29.572431 sshd[5441]: Connection closed by 10.200.16.10 port 48808 Mar 17 17:56:29.573100 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:29.576977 systemd[1]: sshd@28-10.200.20.14:22-10.200.16.10:48808.service: Deactivated successfully. Mar 17 17:56:29.579100 systemd[1]: session-30.scope: Deactivated successfully. Mar 17 17:56:29.580231 systemd-logind[1736]: Session 30 logged out. Waiting for processes to exit. Mar 17 17:56:29.581195 systemd-logind[1736]: Removed session 30. Mar 17 17:56:31.854002 containerd[1770]: time="2025-03-17T17:56:31.853889154Z" level=info msg="StopPodSandbox for \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\"" Mar 17 17:56:31.854561 containerd[1770]: time="2025-03-17T17:56:31.854445395Z" level=info msg="TearDown network for sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" successfully" Mar 17 17:56:31.854561 containerd[1770]: time="2025-03-17T17:56:31.854467435Z" level=info msg="StopPodSandbox for \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" returns successfully" Mar 17 17:56:31.854936 containerd[1770]: time="2025-03-17T17:56:31.854898916Z" level=info msg="RemovePodSandbox for \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\"" Mar 17 17:56:31.854936 containerd[1770]: time="2025-03-17T17:56:31.854931116Z" level=info msg="Forcibly stopping sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\"" Mar 17 17:56:31.855061 containerd[1770]: time="2025-03-17T17:56:31.854984196Z" level=info msg="TearDown network for sandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" successfully" Mar 17 17:56:31.865730 containerd[1770]: time="2025-03-17T17:56:31.865683976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:56:31.865853 containerd[1770]: time="2025-03-17T17:56:31.865753136Z" level=info msg="RemovePodSandbox \"d0bc1c124b32a48e10062ef976c07b6c9f36852d225d9f327f77734eb65c4aeb\" returns successfully" Mar 17 17:56:31.866477 containerd[1770]: time="2025-03-17T17:56:31.866276897Z" level=info msg="StopPodSandbox for \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\"" Mar 17 17:56:31.866477 containerd[1770]: time="2025-03-17T17:56:31.866361538Z" level=info msg="TearDown network for sandbox \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\" successfully" Mar 17 17:56:31.866477 containerd[1770]: time="2025-03-17T17:56:31.866370818Z" level=info msg="StopPodSandbox for \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\" returns successfully" Mar 17 17:56:31.866928 containerd[1770]: time="2025-03-17T17:56:31.866798738Z" level=info msg="RemovePodSandbox for \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\"" Mar 17 17:56:31.866928 containerd[1770]: time="2025-03-17T17:56:31.866837458Z" level=info msg="Forcibly stopping sandbox \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\"" Mar 17 17:56:31.866928 containerd[1770]: time="2025-03-17T17:56:31.866886019Z" level=info msg="TearDown network for sandbox \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\" successfully" Mar 17 17:56:31.875149 containerd[1770]: time="2025-03-17T17:56:31.875098194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:56:31.875395 containerd[1770]: time="2025-03-17T17:56:31.875167234Z" level=info msg="RemovePodSandbox \"fa663085a39c98537acb13266f47bdec31a1df29a5bbb7edf7dd403fa49d735f\" returns successfully"