Mar 17 17:50:58.307989 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:50:58.308011 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025
Mar 17 17:50:58.308020 kernel: KASLR enabled
Mar 17 17:50:58.308025 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 17 17:50:58.308033 kernel: printk: bootconsole [pl11] enabled
Mar 17 17:50:58.308038 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:50:58.308045 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Mar 17 17:50:58.308051 kernel: random: crng init done
Mar 17 17:50:58.308057 kernel: secureboot: Secure boot disabled
Mar 17 17:50:58.308063 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:50:58.308069 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 17 17:50:58.308074 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.308080 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.308088 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 17 17:50:58.308095 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.308102 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.308108 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.308115 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.308121 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.308127 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.308134 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 17 17:50:58.308140 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:50:58.308146 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 17 17:50:58.308152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 17 17:50:58.308158 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 17 17:50:58.308164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 17 17:50:58.308170 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 17 17:50:58.308176 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 17 17:50:58.308183 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 17 17:50:58.308189 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 17 17:50:58.308195 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 17 17:50:58.308202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 17 17:50:58.308208 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 17 17:50:58.308214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 17 17:50:58.308220 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 17 17:50:58.308226 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 17 17:50:58.308232 kernel: Zone ranges:
Mar 17 17:50:58.308238 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 17 17:50:58.308244 kernel: DMA32 empty
Mar 17 17:50:58.308250 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 17:50:58.308260 kernel: Movable zone start for each node
Mar 17 17:50:58.308267 kernel: Early memory node ranges
Mar 17 17:50:58.308273 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 17 17:50:58.308280 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Mar 17 17:50:58.308299 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Mar 17 17:50:58.308308 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Mar 17 17:50:58.308314 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 17 17:50:58.308321 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 17 17:50:58.308327 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 17 17:50:58.308333 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 17 17:50:58.308340 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 17:50:58.308347 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 17 17:50:58.308353 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 17 17:50:58.308359 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:50:58.308366 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:50:58.308372 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:50:58.308379 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 17 17:50:58.308387 kernel: psci: SMC Calling Convention v1.4
Mar 17 17:50:58.308393 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 17 17:50:58.308399 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 17 17:50:58.308406 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:50:58.308412 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:50:58.308419 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:50:58.308425 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:50:58.308432 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:50:58.308438 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:50:58.308445 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:50:58.308451 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:50:58.308459 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:50:58.308466 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:50:58.308472 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 17 17:50:58.308479 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:50:58.308485 kernel: alternatives: applying boot alternatives
Mar 17 17:50:58.308493 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:50:58.308500 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:50:58.308506 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:50:58.308513 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:50:58.308519 kernel: Fallback order for Node 0: 0
Mar 17 17:50:58.308526 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 17 17:50:58.308533 kernel: Policy zone: Normal
Mar 17 17:50:58.308540 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:50:58.308546 kernel: software IO TLB: area num 2.
Mar 17 17:50:58.308553 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB)
Mar 17 17:50:58.308559 kernel: Memory: 3983656K/4194160K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 210504K reserved, 0K cma-reserved)
Mar 17 17:50:58.308566 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:50:58.308572 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:50:58.308579 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:50:58.308586 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:50:58.308592 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:50:58.308599 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:50:58.308607 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:50:58.308614 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:50:58.308621 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:50:58.308627 kernel: GICv3: 960 SPIs implemented
Mar 17 17:50:58.308633 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:50:58.308640 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:50:58.308646 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:50:58.308653 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 17 17:50:58.308659 kernel: ITS: No ITS available, not enabling LPIs
Mar 17 17:50:58.308666 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:50:58.308672 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:50:58.308679 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:50:58.308687 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:50:58.308693 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:50:58.308700 kernel: Console: colour dummy device 80x25
Mar 17 17:50:58.308707 kernel: printk: console [tty1] enabled
Mar 17 17:50:58.308714 kernel: ACPI: Core revision 20230628
Mar 17 17:50:58.308720 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:50:58.308727 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:50:58.308734 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:50:58.308740 kernel: landlock: Up and running.
Mar 17 17:50:58.308748 kernel: SELinux: Initializing.
Mar 17 17:50:58.308755 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:50:58.308761 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:50:58.308768 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:50:58.308775 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:50:58.308781 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Mar 17 17:50:58.308788 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Mar 17 17:50:58.308801 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 17 17:50:58.308808 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:50:58.308815 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:50:58.308822 kernel: Remapping and enabling EFI services.
Mar 17 17:50:58.308829 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:50:58.308837 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:50:58.308844 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 17 17:50:58.308851 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:50:58.308858 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:50:58.308865 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:50:58.308874 kernel: SMP: Total of 2 processors activated.
Mar 17 17:50:58.308881 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:50:58.308888 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 17 17:50:58.308895 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:50:58.308902 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:50:58.308909 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:50:58.308916 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:50:58.308923 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:50:58.308930 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:50:58.308938 kernel: alternatives: applying system-wide alternatives
Mar 17 17:50:58.308945 kernel: devtmpfs: initialized
Mar 17 17:50:58.308952 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:50:58.308960 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:50:58.308967 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:50:58.308974 kernel: SMBIOS 3.1.0 present.
Mar 17 17:50:58.308981 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 17 17:50:58.308993 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:50:58.309000 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:50:58.309009 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:50:58.309016 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:50:58.309023 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:50:58.309030 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 17 17:50:58.309037 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:50:58.309044 kernel: cpuidle: using governor menu
Mar 17 17:50:58.309051 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:50:58.309058 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:50:58.309066 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:50:58.309074 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:50:58.309081 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:50:58.309088 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:50:58.309095 kernel: Modules: 509280 pages in range for PLT usage
Mar 17 17:50:58.309103 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:50:58.309110 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:50:58.309117 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:50:58.309124 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:50:58.309131 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:50:58.309139 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:50:58.309146 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:50:58.309153 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:50:58.309160 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:50:58.309167 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:50:58.309174 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:50:58.309181 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:50:58.309188 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:50:58.309195 kernel: ACPI: Interpreter enabled
Mar 17 17:50:58.309204 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:50:58.309211 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:50:58.309218 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:50:58.309225 kernel: printk: bootconsole [pl11] disabled
Mar 17 17:50:58.309232 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 17 17:50:58.309239 kernel: iommu: Default domain type: Translated
Mar 17 17:50:58.309246 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:50:58.309253 kernel: efivars: Registered efivars operations
Mar 17 17:50:58.309260 kernel: vgaarb: loaded
Mar 17 17:50:58.309268 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:50:58.309275 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:50:58.312971 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:50:58.312988 kernel: pnp: PnP ACPI init
Mar 17 17:50:58.312996 kernel: pnp: PnP ACPI: found 0 devices
Mar 17 17:50:58.313003 kernel: NET: Registered PF_INET protocol family
Mar 17 17:50:58.313010 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:50:58.313018 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:50:58.313025 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:50:58.313037 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:50:58.313044 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:50:58.313052 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:50:58.313059 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:50:58.313066 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:50:58.313073 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:50:58.313080 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:50:58.313087 kernel: kvm [1]: HYP mode not available
Mar 17 17:50:58.313094 kernel: Initialise system trusted keyrings
Mar 17 17:50:58.313103 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:50:58.313110 kernel: Key type asymmetric registered
Mar 17 17:50:58.313117 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:50:58.313124 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:50:58.313132 kernel: io scheduler mq-deadline registered
Mar 17 17:50:58.313139 kernel: io scheduler kyber registered
Mar 17 17:50:58.313146 kernel: io scheduler bfq registered
Mar 17 17:50:58.313153 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:50:58.313160 kernel: thunder_xcv, ver 1.0
Mar 17 17:50:58.313168 kernel: thunder_bgx, ver 1.0
Mar 17 17:50:58.313176 kernel: nicpf, ver 1.0
Mar 17 17:50:58.313182 kernel: nicvf, ver 1.0
Mar 17 17:50:58.313344 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:50:58.313421 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:50:57 UTC (1742233857)
Mar 17 17:50:58.313431 kernel: efifb: probing for efifb
Mar 17 17:50:58.313439 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 17 17:50:58.313446 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 17 17:50:58.313455 kernel: efifb: scrolling: redraw
Mar 17 17:50:58.313462 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 17:50:58.313480 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 17:50:58.313488 kernel: fb0: EFI VGA frame buffer device
Mar 17 17:50:58.313495 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 17 17:50:58.313502 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:50:58.313509 kernel: No ACPI PMU IRQ for CPU0
Mar 17 17:50:58.313516 kernel: No ACPI PMU IRQ for CPU1
Mar 17 17:50:58.313523 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Mar 17 17:50:58.313532 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:50:58.313539 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:50:58.313546 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:50:58.313553 kernel: Segment Routing with IPv6
Mar 17 17:50:58.313560 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:50:58.313567 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:50:58.313574 kernel: Key type dns_resolver registered
Mar 17 17:50:58.313581 kernel: registered taskstats version 1
Mar 17 17:50:58.313588 kernel: Loading compiled-in X.509 certificates
Mar 17 17:50:58.313596 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51'
Mar 17 17:50:58.313603 kernel: Key type .fscrypt registered
Mar 17 17:50:58.313610 kernel: Key type fscrypt-provisioning registered
Mar 17 17:50:58.313617 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:50:58.313624 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:50:58.313631 kernel: ima: No architecture policies found
Mar 17 17:50:58.313638 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:50:58.313645 kernel: clk: Disabling unused clocks
Mar 17 17:50:58.313652 kernel: Freeing unused kernel memory: 38336K
Mar 17 17:50:58.313661 kernel: Run /init as init process
Mar 17 17:50:58.313668 kernel: with arguments:
Mar 17 17:50:58.313675 kernel: /init
Mar 17 17:50:58.313681 kernel: with environment:
Mar 17 17:50:58.313688 kernel: HOME=/
Mar 17 17:50:58.313695 kernel: TERM=linux
Mar 17 17:50:58.313701 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:50:58.313710 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:50:58.313721 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:50:58.313729 systemd[1]: Detected virtualization microsoft.
Mar 17 17:50:58.313736 systemd[1]: Detected architecture arm64.
Mar 17 17:50:58.313743 systemd[1]: Running in initrd.
Mar 17 17:50:58.313751 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:50:58.313758 systemd[1]: Hostname set to <localhost>.
Mar 17 17:50:58.313766 systemd[1]: Initializing machine ID from random generator.
Mar 17 17:50:58.313773 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:50:58.313782 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:50:58.313790 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:50:58.313799 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:50:58.313806 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:50:58.313814 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:50:58.313822 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:50:58.313837 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:50:58.313847 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:50:58.313855 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:50:58.313863 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:50:58.313870 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:50:58.313878 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:50:58.313885 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:50:58.313893 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:50:58.313900 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:50:58.313909 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:50:58.313917 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:50:58.313924 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:50:58.313932 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:50:58.313940 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:50:58.313948 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:50:58.313955 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:50:58.313963 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:50:58.313971 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:50:58.313980 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:50:58.313987 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:50:58.313995 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:50:58.314002 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:50:58.314027 systemd-journald[218]: Collecting audit messages is disabled.
Mar 17 17:50:58.314048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:50:58.314057 systemd-journald[218]: Journal started
Mar 17 17:50:58.314075 systemd-journald[218]: Runtime Journal (/run/log/journal/184f4d88556446f2bf79ce89c0c08180) is 8M, max 78.5M, 70.5M free.
Mar 17 17:50:58.309416 systemd-modules-load[220]: Inserted module 'overlay'
Mar 17 17:50:58.334002 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:50:58.334712 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:50:58.361567 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:50:58.361590 kernel: Bridge firewalling registered
Mar 17 17:50:58.355859 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:50:58.360786 systemd-modules-load[220]: Inserted module 'br_netfilter'
Mar 17 17:50:58.368591 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:50:58.379783 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:50:58.390560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:50:58.412594 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:50:58.429513 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:50:58.444934 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:50:58.462523 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:50:58.472627 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:50:58.485211 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:50:58.491512 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:50:58.506001 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:50:58.536590 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:50:58.550459 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:50:58.566442 dracut-cmdline[252]: dracut-dracut-053
Mar 17 17:50:58.595212 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:50:58.570506 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:50:58.587532 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:50:58.654391 systemd-resolved[257]: Positive Trust Anchors:
Mar 17 17:50:58.654403 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:50:58.654434 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:50:58.660901 systemd-resolved[257]: Defaulting to hostname 'linux'.
Mar 17 17:50:58.661774 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:50:58.676834 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:50:58.741309 kernel: SCSI subsystem initialized
Mar 17 17:50:58.749300 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:50:58.760324 kernel: iscsi: registered transport (tcp)
Mar 17 17:50:58.776322 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:50:58.776357 kernel: QLogic iSCSI HBA Driver
Mar 17 17:50:58.813778 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:50:58.834508 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:50:58.866133 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:50:58.866188 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:50:58.872385 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:50:58.920313 kernel: raid6: neonx8 gen() 15758 MB/s
Mar 17 17:50:58.940310 kernel: raid6: neonx4 gen() 15777 MB/s
Mar 17 17:50:58.960295 kernel: raid6: neonx2 gen() 13281 MB/s
Mar 17 17:50:58.981295 kernel: raid6: neonx1 gen() 10422 MB/s
Mar 17 17:50:59.001293 kernel: raid6: int64x8 gen() 6789 MB/s
Mar 17 17:50:59.021297 kernel: raid6: int64x4 gen() 7352 MB/s
Mar 17 17:50:59.042329 kernel: raid6: int64x2 gen() 6114 MB/s
Mar 17 17:50:59.065951 kernel: raid6: int64x1 gen() 5061 MB/s
Mar 17 17:50:59.065977 kernel: raid6: using algorithm neonx4 gen() 15777 MB/s
Mar 17 17:50:59.089605 kernel: raid6: .... xor() 12363 MB/s, rmw enabled
Mar 17 17:50:59.089632 kernel: raid6: using neon recovery algorithm
Mar 17 17:50:59.101574 kernel: xor: measuring software checksum speed
Mar 17 17:50:59.101605 kernel: 8regs : 21584 MB/sec
Mar 17 17:50:59.104941 kernel: 32regs : 21670 MB/sec
Mar 17 17:50:59.108431 kernel: arm64_neon : 28003 MB/sec
Mar 17 17:50:59.112610 kernel: xor: using function: arm64_neon (28003 MB/sec)
Mar 17 17:50:59.162343 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:50:59.172158 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:50:59.188449 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:50:59.211855 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Mar 17 17:50:59.217440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:50:59.235394 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:50:59.258056 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Mar 17 17:50:59.290227 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:50:59.304582 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:50:59.344788 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:50:59.364819 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:50:59.391270 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:50:59.407996 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:50:59.422347 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:50:59.447315 kernel: hv_vmbus: Vmbus version:5.3
Mar 17 17:50:59.438568 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:50:59.471306 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 17:50:59.471359 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 17 17:50:59.471383 kernel: hv_vmbus: registering driver hv_storvsc
Mar 17 17:50:59.477822 kernel: scsi host0: storvsc_host_t
Mar 17 17:50:59.477991 kernel: scsi host1: storvsc_host_t
Mar 17 17:50:59.490319 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 17 17:50:59.490382 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Mar 17 17:50:59.484854 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:50:59.538604 kernel: hv_vmbus: registering driver hid_hyperv
Mar 17 17:50:59.538625 kernel: hv_vmbus: registering driver hv_netvsc
Mar 17 17:50:59.538635 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 17 17:50:59.538645 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 17 17:50:59.538654 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 17 17:50:59.522008 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:50:59.558393 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Mar 17 17:50:59.522155 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:50:59.558468 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:50:59.564937 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:50:59.565178 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:50:59.585465 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:50:59.632501 kernel: PTP clock support registered
Mar 17 17:50:59.606587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:50:59.670650 kernel: hv_netvsc 00224878-8a1e-0022-4878-8a1e00224878 eth0: VF slot 1 added
Mar 17 17:50:59.670847 kernel: hv_utils: Registering HyperV Utility Driver
Mar 17 17:50:59.670859 kernel: hv_vmbus: registering driver hv_pci
Mar 17 17:50:59.621313 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:50:59.705503 kernel: hv_vmbus: registering driver hv_utils
Mar 17 17:50:59.705525 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 17 17:50:59.948289 kernel: hv_pci edf7ccd4-8856-409d-9385-24c9e596f9be: PCI VMBus probing: Using version 0x10004
Mar 17 17:51:00.003932 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:51:00.003949 kernel: hv_pci edf7ccd4-8856-409d-9385-24c9e596f9be: PCI host bridge to bus 8856:00
Mar 17 17:51:00.004058 kernel: hv_utils: Heartbeat IC version 3.0
Mar 17 17:51:00.004068 kernel: hv_utils: Shutdown IC version 3.2
Mar 17 17:51:00.004078 kernel: pci_bus 8856:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 17 17:51:00.004171 kernel: hv_utils: TimeSync IC version 4.0
Mar 17 17:51:00.004181 kernel: pci_bus 8856:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 17 17:51:00.004258 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 17 17:51:00.004354 kernel: pci 8856:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 17 17:51:00.004447 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 17 17:51:00.004572 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 17 17:51:00.004662 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 17 17:51:00.004743 kernel: pci 8856:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 17:51:00.004832 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 17 17:51:00.004916 kernel: pci 8856:00:02.0: enabling Extended Tags
Mar 17 17:51:00.004997 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 17 17:51:00.005078 kernel: pci 8856:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8856:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 17 17:51:00.005159 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:51:00.005171 kernel: pci_bus 8856:00: busn_res: [bus 00-ff] end is updated to 00
Mar 17 17:51:00.005248 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 17 17:51:00.005330 kernel: pci 8856:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 17:50:59.662558 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:50:59.705556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:50:59.919857 systemd-resolved[257]: Clock change detected. Flushing caches.
Mar 17 17:51:00.030277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:51:00.064369 kernel: mlx5_core 8856:00:02.0: enabling device (0000 -> 0002)
Mar 17 17:51:00.282927 kernel: mlx5_core 8856:00:02.0: firmware version: 16.30.1284
Mar 17 17:51:00.283056 kernel: hv_netvsc 00224878-8a1e-0022-4878-8a1e00224878 eth0: VF registering: eth1
Mar 17 17:51:00.283147 kernel: mlx5_core 8856:00:02.0 eth1: joined to eth0
Mar 17 17:51:00.283246 kernel: mlx5_core 8856:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 17 17:51:00.292493 kernel: mlx5_core 8856:00:02.0 enP34902s1: renamed from eth1
Mar 17 17:51:00.522585 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 17 17:51:00.636548 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 17 17:51:00.723497 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (486)
Mar 17 17:51:00.740045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 17 17:51:00.778534 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (485)
Mar 17 17:51:00.793656 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 17 17:51:00.800548 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 17 17:51:00.831696 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:51:00.855062 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:51:00.862494 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:51:01.875090 disk-uuid[600]: The operation has completed successfully.
Mar 17 17:51:01.882849 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:51:01.935394 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:51:01.935493 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:51:01.985639 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:51:01.998316 sh[686]: Success
Mar 17 17:51:02.028515 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:51:02.241957 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:51:02.265167 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:51:02.274078 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:51:02.300610 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13
Mar 17 17:51:02.300645 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:51:02.307253 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:51:02.314278 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:51:02.318411 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:51:02.642250 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:51:02.647515 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:51:02.664687 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:51:02.672735 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:51:02.710684 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:02.710745 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:51:02.715243 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:51:02.738546 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:51:02.750857 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:51:02.759853 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:02.769153 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:51:02.787681 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:51:02.795034 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:51:02.815629 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:51:02.858787 systemd-networkd[871]: lo: Link UP
Mar 17 17:51:02.858794 systemd-networkd[871]: lo: Gained carrier
Mar 17 17:51:02.861419 systemd-networkd[871]: Enumeration completed
Mar 17 17:51:02.861580 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:51:02.863640 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:02.863644 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:51:02.873562 systemd[1]: Reached target network.target - Network.
Mar 17 17:51:02.932493 kernel: mlx5_core 8856:00:02.0 enP34902s1: Link up
Mar 17 17:51:02.972498 kernel: hv_netvsc 00224878-8a1e-0022-4878-8a1e00224878 eth0: Data path switched to VF: enP34902s1
Mar 17 17:51:02.972642 systemd-networkd[871]: enP34902s1: Link UP
Mar 17 17:51:02.972868 systemd-networkd[871]: eth0: Link UP
Mar 17 17:51:02.973222 systemd-networkd[871]: eth0: Gained carrier
Mar 17 17:51:02.973231 systemd-networkd[871]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:02.995717 systemd-networkd[871]: enP34902s1: Gained carrier
Mar 17 17:51:03.007514 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 17 17:51:03.560605 ignition[866]: Ignition 2.20.0
Mar 17 17:51:03.560616 ignition[866]: Stage: fetch-offline
Mar 17 17:51:03.565321 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:51:03.560648 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:03.560656 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:03.560739 ignition[866]: parsed url from cmdline: ""
Mar 17 17:51:03.560743 ignition[866]: no config URL provided
Mar 17 17:51:03.560747 ignition[866]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:51:03.593738 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:51:03.560754 ignition[866]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:51:03.560759 ignition[866]: failed to fetch config: resource requires networking
Mar 17 17:51:03.560925 ignition[866]: Ignition finished successfully
Mar 17 17:51:03.613129 ignition[880]: Ignition 2.20.0
Mar 17 17:51:03.613135 ignition[880]: Stage: fetch
Mar 17 17:51:03.613298 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:03.613307 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:03.613389 ignition[880]: parsed url from cmdline: ""
Mar 17 17:51:03.613393 ignition[880]: no config URL provided
Mar 17 17:51:03.613397 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:51:03.613404 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:51:03.613427 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 17 17:51:03.706827 ignition[880]: GET result: OK
Mar 17 17:51:03.706916 ignition[880]: config has been read from IMDS userdata
Mar 17 17:51:03.707015 ignition[880]: parsing config with SHA512: 08b94fd0f3a2a00ae8726bfd9eb025efe7e0beaf00547660b53df2ae852ca82e98927ddff8e09bd10f7cdeffa009a61ad2db933391546142b5345aaeca57d5cd
Mar 17 17:51:03.710984 unknown[880]: fetched base config from "system"
Mar 17 17:51:03.714216 ignition[880]: fetch: fetch complete
Mar 17 17:51:03.710991 unknown[880]: fetched base config from "system"
Mar 17 17:51:03.714222 ignition[880]: fetch: fetch passed
Mar 17 17:51:03.710996 unknown[880]: fetched user config from "azure"
Mar 17 17:51:03.714281 ignition[880]: Ignition finished successfully
Mar 17 17:51:03.716262 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:51:03.741188 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:51:03.767249 ignition[887]: Ignition 2.20.0
Mar 17 17:51:03.767260 ignition[887]: Stage: kargs
Mar 17 17:51:03.767417 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:03.774350 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:51:03.767427 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:03.768280 ignition[887]: kargs: kargs passed
Mar 17 17:51:03.768318 ignition[887]: Ignition finished successfully
Mar 17 17:51:03.801764 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:51:03.822193 ignition[893]: Ignition 2.20.0
Mar 17 17:51:03.822203 ignition[893]: Stage: disks
Mar 17 17:51:03.826759 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:51:03.822369 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:03.835221 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:51:03.822378 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:03.846431 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:51:03.823417 ignition[893]: disks: disks passed
Mar 17 17:51:03.858377 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:51:03.823467 ignition[893]: Ignition finished successfully
Mar 17 17:51:03.870241 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:51:03.881789 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:51:03.908768 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:51:03.983070 systemd-fsck[902]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 17 17:51:03.992189 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:51:04.008659 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:51:04.071404 kernel: EXT4-fs (sda9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none.
Mar 17 17:51:04.066968 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:51:04.073020 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:51:04.119592 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:51:04.127598 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:51:04.138642 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:51:04.156827 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:51:04.156870 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:51:04.191032 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:51:04.208529 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (913)
Mar 17 17:51:04.208553 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:04.216483 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:51:04.221212 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:51:04.227658 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:51:04.234242 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:51:04.240974 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:51:04.413625 systemd-networkd[871]: eth0: Gained IPv6LL
Mar 17 17:51:04.605760 systemd-networkd[871]: enP34902s1: Gained IPv6LL
Mar 17 17:51:04.817396 coreos-metadata[915]: Mar 17 17:51:04.817 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 17 17:51:04.825571 coreos-metadata[915]: Mar 17 17:51:04.825 INFO Fetch successful
Mar 17 17:51:04.825571 coreos-metadata[915]: Mar 17 17:51:04.825 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 17 17:51:04.841730 coreos-metadata[915]: Mar 17 17:51:04.841 INFO Fetch successful
Mar 17 17:51:04.859575 coreos-metadata[915]: Mar 17 17:51:04.859 INFO wrote hostname ci-4230.1.0-a-76d88708f5 to /sysroot/etc/hostname
Mar 17 17:51:04.869147 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:51:05.066264 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:51:05.094140 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:51:05.101964 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:51:05.126661 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:51:05.816720 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:51:05.838685 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:51:05.851177 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:51:05.873444 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:51:05.881498 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:05.892745 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:51:05.909533 ignition[1036]: INFO : Ignition 2.20.0
Mar 17 17:51:05.909533 ignition[1036]: INFO : Stage: mount
Mar 17 17:51:05.917398 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:05.917398 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:05.917398 ignition[1036]: INFO : mount: mount passed
Mar 17 17:51:05.917398 ignition[1036]: INFO : Ignition finished successfully
Mar 17 17:51:05.914561 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:51:05.937571 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:51:05.954697 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:51:05.989467 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045)
Mar 17 17:51:05.989520 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:51:05.995145 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:51:05.999881 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:51:06.006490 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:51:06.008317 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:51:06.036518 ignition[1062]: INFO : Ignition 2.20.0
Mar 17 17:51:06.036518 ignition[1062]: INFO : Stage: files
Mar 17 17:51:06.036518 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:06.036518 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:06.056280 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:51:06.081559 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:51:06.081559 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:51:06.218149 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:51:06.225553 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:51:06.225553 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:51:06.218677 unknown[1062]: wrote ssh authorized keys file for user: core
Mar 17 17:51:06.245691 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:51:06.245691 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:51:06.323170 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:51:06.448909 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:51:06.448909 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:51:06.448909 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:51:06.762441 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:51:06.831754 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:51:06.841555 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Mar 17 17:51:07.244516 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:51:07.453763 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:51:07.453763 ignition[1062]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:51:07.471913 ignition[1062]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:51:07.471913 ignition[1062]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:51:07.471913 ignition[1062]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:51:07.471913 ignition[1062]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:51:07.471913 ignition[1062]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:51:07.471913 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:51:07.471913 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:51:07.471913 ignition[1062]: INFO : files: files passed
Mar 17 17:51:07.471913 ignition[1062]: INFO : Ignition finished successfully
Mar 17 17:51:07.477801 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:51:07.520786 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:51:07.539665 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:51:07.564381 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:51:07.566553 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:51:07.599105 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:51:07.607176 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:51:07.607176 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:51:07.599377 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:51:07.614226 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:51:07.649759 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:51:07.682640 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:51:07.684829 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:51:07.694817 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:51:07.706546 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:51:07.717373 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:51:07.732743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:51:07.755671 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:51:07.770699 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:51:07.788067 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:51:07.788169 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:51:07.800966 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:51:07.812272 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:51:07.824129 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:51:07.834564 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:51:07.834645 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:51:07.850233 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:51:07.861911 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:51:07.871442 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:51:07.881396 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:51:07.893271 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:51:07.905423 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:51:07.916779 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:51:07.929240 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:51:07.941176 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:51:07.952347 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:51:07.962138 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:51:07.962206 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:51:07.977701 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:51:07.989063 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:51:08.003002 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:51:08.006500 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:51:08.016331 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:51:08.016399 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:51:08.033964 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:51:08.034013 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:51:08.040876 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:51:08.040919 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:51:08.051278 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:51:08.051325 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:51:08.117443 ignition[1116]: INFO : Ignition 2.20.0
Mar 17 17:51:08.117443 ignition[1116]: INFO : Stage: umount
Mar 17 17:51:08.117443 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:08.117443 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:51:08.117443 ignition[1116]: INFO : umount: umount passed
Mar 17 17:51:08.117443 ignition[1116]: INFO : Ignition finished successfully
Mar 17 17:51:08.083645 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:51:08.100572 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:51:08.100657 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:51:08.111595 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:51:08.134247 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:51:08.134329 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:51:08.146425 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:51:08.146489 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:51:08.165975 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:51:08.167931 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:51:08.179644 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:51:08.180433 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:51:08.180540 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:51:08.198824 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:51:08.198902 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:51:08.213336 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:51:08.213390 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:51:08.233853 systemd[1]: Stopped target network.target - Network.
Mar 17 17:51:08.248282 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:51:08.248349 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:51:08.262358 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:51:08.279275 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:51:08.289512 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:51:08.297889 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:51:08.310200 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:51:08.322207 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:51:08.322254 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:51:08.333360 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:51:08.333395 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:51:08.343964 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:51:08.344014 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:51:08.354167 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:51:08.354205 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:51:08.366176 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:51:08.379967 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:51:08.402932 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:51:08.403072 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:51:08.424899 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 17:51:08.425160 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:51:08.425362 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:51:08.440407 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 17:51:08.440949 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:51:08.441017 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:51:08.466649 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:51:08.473141 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Mar 17 17:51:08.473225 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:51:08.485162 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:51:08.658016 kernel: hv_netvsc 00224878-8a1e-0022-4878-8a1e00224878 eth0: Data path switched from VF: enP34902s1 Mar 17 17:51:08.485211 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:51:08.501062 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:51:08.501107 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:51:08.509071 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:51:08.509113 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:51:08.527608 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:51:08.548977 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 17:51:08.549047 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:51:08.577842 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:51:08.578058 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:51:08.592307 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:51:08.592376 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:51:08.604326 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:51:08.604375 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:51:08.622013 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:51:08.622084 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:51:08.640961 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Mar 17 17:51:08.641011 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:51:08.657870 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:51:08.657917 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:51:08.688687 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:51:08.695823 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:51:08.695886 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:51:08.710731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:51:08.710793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:51:08.725390 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 17:51:08.725448 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:51:08.725734 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:51:08.725843 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:51:08.736785 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:51:08.736865 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:51:08.750893 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:51:08.750986 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:51:08.776005 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:51:08.776118 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:51:08.949116 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Mar 17 17:51:08.786658 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Mar 17 17:51:08.815720 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:51:08.832498 systemd[1]: Switching root. Mar 17 17:51:08.965595 systemd-journald[218]: Journal stopped Mar 17 17:51:13.557725 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:51:13.557747 kernel: SELinux: policy capability open_perms=1 Mar 17 17:51:13.557757 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:51:13.557765 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:51:13.557774 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:51:13.557782 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:51:13.557791 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:51:13.557798 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:51:13.557806 kernel: audit: type=1403 audit(1742233869.852:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:51:13.557816 systemd[1]: Successfully loaded SELinux policy in 148.348ms. Mar 17 17:51:13.557827 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.049ms. Mar 17 17:51:13.557837 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:51:13.557846 systemd[1]: Detected virtualization microsoft. Mar 17 17:51:13.557854 systemd[1]: Detected architecture arm64. Mar 17 17:51:13.557863 systemd[1]: Detected first boot. Mar 17 17:51:13.557873 systemd[1]: Hostname set to <ci-4230.1.0-a-76d88708f5>. Mar 17 17:51:13.557882 systemd[1]: Initializing machine ID from random generator. Mar 17 17:51:13.557890 zram_generator::config[1160]: No configuration found. 
Mar 17 17:51:13.557899 kernel: NET: Registered PF_VSOCK protocol family Mar 17 17:51:13.557907 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:51:13.557918 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 17:51:13.557926 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:51:13.557937 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:51:13.557945 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:51:13.557954 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:51:13.557964 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:51:13.557973 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:51:13.557981 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:51:13.557990 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:51:13.558001 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:51:13.558010 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:51:13.558018 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:51:13.558027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:51:13.558036 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:51:13.558045 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:51:13.558054 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:51:13.558063 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Mar 17 17:51:13.558073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:51:13.558083 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 17:51:13.558091 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:51:13.558103 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:51:13.558113 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:51:13.558122 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:51:13.558131 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:51:13.558140 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:51:13.558151 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:51:13.558160 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:51:13.558168 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:51:13.558177 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:51:13.558186 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:51:13.558195 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 17:51:13.558207 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:51:13.558216 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:51:13.558225 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:51:13.558234 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:51:13.558243 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:51:13.558252 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Mar 17 17:51:13.558261 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:51:13.558272 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:51:13.558282 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:51:13.558291 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:51:13.558300 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:51:13.558310 systemd[1]: Reached target machines.target - Containers. Mar 17 17:51:13.558319 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:51:13.558329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:51:13.558338 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:51:13.558349 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:51:13.558358 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:51:13.558367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:51:13.558376 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:51:13.558385 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:51:13.558395 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:51:13.558404 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:51:13.558414 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:51:13.558424 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Mar 17 17:51:13.558434 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:51:13.558443 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:51:13.558453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:51:13.558462 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:51:13.558471 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:51:13.558489 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:51:13.558498 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:51:13.558508 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 17:51:13.558519 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:51:13.558529 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:51:13.558538 systemd[1]: Stopped verity-setup.service. Mar 17 17:51:13.558547 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:51:13.558556 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:51:13.558565 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:51:13.558574 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:51:13.558584 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:51:13.558594 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:51:13.558621 systemd-journald[1243]: Collecting audit messages is disabled. Mar 17 17:51:13.558640 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Mar 17 17:51:13.558650 systemd-journald[1243]: Journal started Mar 17 17:51:13.558671 systemd-journald[1243]: Runtime Journal (/run/log/journal/7a645e7e8382492bbb5c9b5ef04937c5) is 8M, max 78.5M, 70.5M free. Mar 17 17:51:12.200538 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:51:12.211168 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 17 17:51:12.211539 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:51:12.211832 systemd[1]: systemd-journald.service: Consumed 3.259s CPU time. Mar 17 17:51:13.579375 kernel: loop: module loaded Mar 17 17:51:13.579448 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:51:13.586052 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:51:13.587525 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:51:13.594689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:51:13.594862 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:51:13.602088 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:51:13.602252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:51:13.612925 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:51:13.620510 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:51:13.628030 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:51:13.643321 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:51:13.649830 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:51:13.649863 systemd[1]: Reached target local-fs.target - Local File Systems. 
Mar 17 17:51:13.656396 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 17 17:51:13.671631 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:51:13.680681 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:51:13.689312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:51:13.935515 kernel: fuse: init (API version 7.39) Mar 17 17:51:13.935669 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:51:13.942679 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:51:13.954493 kernel: ACPI: bus type drm_connector registered Mar 17 17:51:13.953969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:51:13.956004 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:51:13.962187 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:51:13.963274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:51:13.970662 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:51:13.981784 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:51:13.982045 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:51:13.989074 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:51:13.989235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:51:13.995097 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Mar 17 17:51:13.995252 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:51:14.001594 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 17:51:14.009041 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:51:14.015909 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:51:14.030608 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:51:14.037591 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:51:14.045731 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:51:14.054751 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:51:14.071660 udevadm[1302]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:51:14.308006 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:51:14.318946 systemd-journald[1243]: Time spent on flushing to /var/log/journal/7a645e7e8382492bbb5c9b5ef04937c5 is 12.110ms for 914 entries. Mar 17 17:51:14.318946 systemd-journald[1243]: System Journal (/var/log/journal/7a645e7e8382492bbb5c9b5ef04937c5) is 8M, max 2.6G, 2.6G free. Mar 17 17:51:14.349244 systemd-journald[1243]: Received client request to flush runtime journal. Mar 17 17:51:14.327862 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:51:14.336310 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:51:14.343356 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:51:14.352511 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Mar 17 17:51:14.363613 kernel: loop0: detected capacity change from 0 to 123192 Mar 17 17:51:14.364226 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:51:14.374883 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:51:14.386685 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 17 17:51:15.213827 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:51:15.214451 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 17:51:16.021569 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:51:16.033624 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:51:16.394891 systemd-tmpfiles[1317]: ACLs are not supported, ignoring. Mar 17 17:51:16.394906 systemd-tmpfiles[1317]: ACLs are not supported, ignoring. Mar 17 17:51:16.399635 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:51:16.578509 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:51:16.830498 kernel: loop1: detected capacity change from 0 to 28720 Mar 17 17:51:18.875498 kernel: loop2: detected capacity change from 0 to 113512 Mar 17 17:51:20.251156 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:51:20.268633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:51:20.291079 systemd-udevd[1324]: Using default interface naming scheme 'v255'. 
Mar 17 17:51:21.102499 kernel: loop3: detected capacity change from 0 to 189592 Mar 17 17:51:21.143499 kernel: loop4: detected capacity change from 0 to 123192 Mar 17 17:51:21.152506 kernel: loop5: detected capacity change from 0 to 28720 Mar 17 17:51:21.163489 kernel: loop6: detected capacity change from 0 to 113512 Mar 17 17:51:21.174495 kernel: loop7: detected capacity change from 0 to 189592 Mar 17 17:51:21.180246 (sd-merge)[1327]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 17 17:51:21.180677 (sd-merge)[1327]: Merged extensions into '/usr'. Mar 17 17:51:21.184140 systemd[1]: Reload requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:51:21.184244 systemd[1]: Reloading... Mar 17 17:51:21.245541 zram_generator::config[1357]: No configuration found. Mar 17 17:51:21.556735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:51:21.626274 systemd[1]: Reloading finished in 441 ms. Mar 17 17:51:21.646455 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:51:21.656791 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:51:21.683866 systemd[1]: Starting ensure-sysext.service... Mar 17 17:51:21.702247 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:51:21.722743 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:51:21.747442 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:51:21.747578 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Mar 17 17:51:21.747808 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:51:21.748925 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:51:21.749434 systemd-tmpfiles[1434]: ACLs are not supported, ignoring. Mar 17 17:51:21.749515 systemd-tmpfiles[1434]: ACLs are not supported, ignoring. Mar 17 17:51:21.754150 systemd-tmpfiles[1434]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:51:21.754249 systemd-tmpfiles[1434]: Skipping /boot Mar 17 17:51:21.764792 systemd-tmpfiles[1434]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:51:21.765154 systemd-tmpfiles[1434]: Skipping /boot Mar 17 17:51:21.784936 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:51:21.800813 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:51:21.810633 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:51:21.825894 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:51:21.838154 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:51:21.853636 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:51:21.866755 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Mar 17 17:51:21.985299 kernel: hv_vmbus: registering driver hv_balloon Mar 17 17:51:21.985391 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:51:21.985422 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 17 17:51:21.997502 kernel: hv_vmbus: registering driver hyperv_fb Mar 17 17:51:21.997583 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 17 17:51:22.003499 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 17 17:51:22.014518 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 17 17:51:22.020269 kernel: Console: switching to colour dummy device 80x25 Mar 17 17:51:22.021854 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:51:22.034415 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 17:51:22.029065 systemd[1]: Reload requested from client PID 1431 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:51:22.029075 systemd[1]: Reloading... Mar 17 17:51:22.101509 zram_generator::config[1492]: No configuration found. Mar 17 17:51:22.204428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:51:22.280835 systemd[1]: Reloading finished in 251 ms. Mar 17 17:51:22.291555 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:51:22.302692 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:51:22.342625 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:51:22.351792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:51:22.361740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 17 17:51:22.381808 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:51:22.387451 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:51:22.387599 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:51:22.388812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:51:22.396298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:51:22.396509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:51:22.404469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:51:22.405710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:51:22.417033 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:51:22.417380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:51:22.434967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:51:22.440890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:51:22.449800 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:51:22.459939 systemd-networkd[1432]: lo: Link UP Mar 17 17:51:22.461524 systemd-networkd[1432]: lo: Gained carrier Mar 17 17:51:22.462765 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:51:22.463970 systemd-networkd[1432]: Enumeration completed Mar 17 17:51:22.464277 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 17 17:51:22.464280 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:51:22.472457 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:51:22.478234 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:51:22.478441 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:51:22.478714 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:51:22.486519 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:51:22.492980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:51:22.494514 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:51:22.501977 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:51:22.502449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:51:22.502615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:51:22.511232 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:51:22.511411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:51:22.517725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:51:22.519507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:51:22.522489 kernel: mlx5_core 8856:00:02.0 enP34902s1: Link up Mar 17 17:51:22.529072 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:51:22.529243 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:51:22.539531 systemd[1]: Finished ensure-sysext.service. 
Mar 17 17:51:22.550504 kernel: hv_netvsc 00224878-8a1e-0022-4878-8a1e00224878 eth0: Data path switched to VF: enP34902s1 Mar 17 17:51:22.550746 systemd-networkd[1432]: enP34902s1: Link UP Mar 17 17:51:22.550823 systemd-networkd[1432]: eth0: Link UP Mar 17 17:51:22.550826 systemd-networkd[1432]: eth0: Gained carrier Mar 17 17:51:22.550840 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:51:22.554772 systemd-networkd[1432]: enP34902s1: Gained carrier Mar 17 17:51:22.557684 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 17:51:22.569623 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:51:22.569869 systemd-networkd[1432]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 17:51:22.576049 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:51:22.576119 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:51:22.579653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:51:22.806544 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1414) Mar 17 17:51:22.832442 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:51:22.846332 systemd-resolved[1452]: Positive Trust Anchors: Mar 17 17:51:22.846347 systemd-resolved[1452]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:51:22.846379 systemd-resolved[1452]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:51:22.880876 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 17 17:51:22.891614 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:51:22.952944 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:51:22.968732 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:51:23.012724 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:51:23.166789 systemd-resolved[1452]: Using system hostname 'ci-4230.1.0-a-76d88708f5'. Mar 17 17:51:23.168662 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:51:23.174923 systemd[1]: Reached target network.target - Network. Mar 17 17:51:23.180054 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:51:23.271500 lvm[1656]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:51:23.293916 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:51:23.301101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Mar 17 17:51:23.310694 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:51:23.321368 lvm[1664]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:51:23.345900 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:51:23.630143 augenrules[1667]: No rules Mar 17 17:51:23.631423 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:51:23.631654 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:51:24.253620 systemd-networkd[1432]: enP34902s1: Gained IPv6LL Mar 17 17:51:24.381615 systemd-networkd[1432]: eth0: Gained IPv6LL Mar 17 17:51:24.383705 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:51:24.390980 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:51:26.027343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:51:26.415367 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:51:26.422532 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:51:31.113795 ldconfig[1273]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:51:31.124980 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:51:31.136677 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:51:31.149962 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:51:31.156466 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:51:31.163364 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Mar 17 17:51:31.170857 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:51:31.178136 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:51:31.183959 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:51:31.190791 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:51:31.197946 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:51:31.197979 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:51:31.203156 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:51:31.210254 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:51:31.218337 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:51:31.225721 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:51:31.232901 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:51:31.240820 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:51:31.248539 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:51:31.254546 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:51:31.262907 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:51:31.269298 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:51:31.275465 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:51:31.280921 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Mar 17 17:51:31.280948 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:51:31.287563 systemd[1]: Starting chronyd.service - NTP client/server... Mar 17 17:51:31.296598 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:51:31.307658 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:51:31.319771 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:51:31.327197 (chronyd)[1684]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 17 17:51:31.327589 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:51:31.335645 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:51:31.341182 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:51:31.341219 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 17 17:51:31.344646 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 17 17:51:31.351366 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 17 17:51:31.354392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:51:31.354742 jq[1691]: false Mar 17 17:51:31.359776 chronyd[1697]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 17 17:51:31.361963 KVP[1693]: KVP starting; pid is:1693 Mar 17 17:51:31.364650 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Mar 17 17:51:31.369115 kernel: hv_utils: KVP IC version 4.0 Mar 17 17:51:31.367512 KVP[1693]: KVP LIC Version: 3.1 Mar 17 17:51:31.376677 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:51:31.386770 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:51:31.393720 dbus-daemon[1687]: [system] SELinux support is enabled Mar 17 17:51:31.397677 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:51:31.411644 extend-filesystems[1692]: Found loop4 Mar 17 17:51:31.411644 extend-filesystems[1692]: Found loop5 Mar 17 17:51:31.411644 extend-filesystems[1692]: Found loop6 Mar 17 17:51:31.411644 extend-filesystems[1692]: Found loop7 Mar 17 17:51:31.411644 extend-filesystems[1692]: Found sda Mar 17 17:51:31.411644 extend-filesystems[1692]: Found sda1 Mar 17 17:51:31.411644 extend-filesystems[1692]: Found sda2 Mar 17 17:51:31.411644 extend-filesystems[1692]: Found sda3 Mar 17 17:51:31.411644 extend-filesystems[1692]: Found usr Mar 17 17:51:31.411644 extend-filesystems[1692]: Found sda4 Mar 17 17:51:31.411644 extend-filesystems[1692]: Found sda6 Mar 17 17:51:31.411644 extend-filesystems[1692]: Found sda7 Mar 17 17:51:31.411644 extend-filesystems[1692]: Found sda9 Mar 17 17:51:31.411644 extend-filesystems[1692]: Checking size of /dev/sda9 Mar 17 17:51:31.573705 extend-filesystems[1692]: Old size kept for /dev/sda9 Mar 17 17:51:31.573705 extend-filesystems[1692]: Found sr0 Mar 17 17:51:31.613319 coreos-metadata[1686]: Mar 17 17:51:31.473 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 17:51:31.613319 coreos-metadata[1686]: Mar 17 17:51:31.478 INFO Fetch successful Mar 17 17:51:31.613319 coreos-metadata[1686]: Mar 17 17:51:31.478 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 17 17:51:31.613319 coreos-metadata[1686]: Mar 17 17:51:31.489 INFO Fetch successful Mar 17 17:51:31.613319 coreos-metadata[1686]: Mar 17 17:51:31.490 INFO 
Fetching http://168.63.129.16/machine/cb54002b-2fbb-41d0-adf2-4b24b156306a/08dda23a%2De082%2D4e4d%2Db8b0%2Df15b1d13aa5e.%5Fci%2D4230.1.0%2Da%2D76d88708f5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 17 17:51:31.613319 coreos-metadata[1686]: Mar 17 17:51:31.492 INFO Fetch successful Mar 17 17:51:31.613319 coreos-metadata[1686]: Mar 17 17:51:31.492 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 17 17:51:31.613319 coreos-metadata[1686]: Mar 17 17:51:31.506 INFO Fetch successful Mar 17 17:51:31.438320 chronyd[1697]: Timezone right/UTC failed leap second check, ignoring Mar 17 17:51:31.415690 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:51:31.438564 chronyd[1697]: Loaded seccomp filter (level 2) Mar 17 17:51:31.443095 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:51:31.453248 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:51:31.618810 update_engine[1718]: I20250317 17:51:31.564925 1718 main.cc:92] Flatcar Update Engine starting Mar 17 17:51:31.618810 update_engine[1718]: I20250317 17:51:31.574339 1718 update_check_scheduler.cc:74] Next update check in 10m31s Mar 17 17:51:31.453792 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:51:31.619065 jq[1722]: true Mar 17 17:51:31.455680 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:51:31.478619 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:51:31.493611 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:51:31.512929 systemd[1]: Started chronyd.service - NTP client/server. 
Mar 17 17:51:31.538948 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:51:31.539134 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:51:31.539407 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:51:31.539597 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:51:31.580820 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:51:31.580993 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:51:31.604728 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:51:31.619907 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:51:31.620081 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:51:31.637712 (ntainerd)[1749]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:51:31.646047 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1739) Mar 17 17:51:31.652011 jq[1748]: true Mar 17 17:51:31.670580 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:51:31.697255 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:51:31.707162 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:51:31.707262 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:51:31.707283 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 17 17:51:31.720324 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:51:31.720349 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:51:31.741618 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:51:31.839176 systemd-logind[1714]: New seat seat0. Mar 17 17:51:31.843515 systemd-logind[1714]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:51:31.843814 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:51:31.974201 tar[1747]: linux-arm64/helm Mar 17 17:51:32.464973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:51:32.473840 (kubelet)[1834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:51:32.959516 kubelet[1834]: E0317 17:51:32.859615 1834 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:51:32.861799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:51:32.861941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:51:32.862251 systemd[1]: kubelet.service: Consumed 656ms CPU time, 233.1M memory peak. 
Mar 17 17:51:32.974081 locksmithd[1810]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:51:33.353605 sshd_keygen[1716]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:51:33.360962 tar[1747]: linux-arm64/LICENSE Mar 17 17:51:33.360962 tar[1747]: linux-arm64/README.md Mar 17 17:51:33.369737 bash[1813]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:51:33.370456 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:51:33.378463 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:51:33.384721 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:51:33.397699 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:51:33.404034 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:51:33.405699 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 17 17:51:33.411877 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:51:33.412064 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:51:33.422155 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:51:33.437901 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 17 17:51:33.582157 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:51:33.595981 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:51:33.602452 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:51:33.610227 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 17 17:51:33.791493 containerd[1749]: time="2025-03-17T17:51:33.790115660Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:51:33.813628 containerd[1749]: time="2025-03-17T17:51:33.813573660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:33.815023 containerd[1749]: time="2025-03-17T17:51:33.814989540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:51:33.815128 containerd[1749]: time="2025-03-17T17:51:33.815114140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:51:33.815184 containerd[1749]: time="2025-03-17T17:51:33.815172620Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:51:33.815403 containerd[1749]: time="2025-03-17T17:51:33.815380780Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:51:33.815469 containerd[1749]: time="2025-03-17T17:51:33.815457940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:33.815617 containerd[1749]: time="2025-03-17T17:51:33.815598340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:51:33.815687 containerd[1749]: time="2025-03-17T17:51:33.815673420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 17:51:33.815955 containerd[1749]: time="2025-03-17T17:51:33.815932940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:51:33.816018 containerd[1749]: time="2025-03-17T17:51:33.816006260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:33.816076 containerd[1749]: time="2025-03-17T17:51:33.816063900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:51:33.816119 containerd[1749]: time="2025-03-17T17:51:33.816108740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:33.816262 containerd[1749]: time="2025-03-17T17:51:33.816244140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:33.816562 containerd[1749]: time="2025-03-17T17:51:33.816541180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:51:33.816779 containerd[1749]: time="2025-03-17T17:51:33.816760660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:51:33.816846 containerd[1749]: time="2025-03-17T17:51:33.816834780Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 17 17:51:33.816996 containerd[1749]: time="2025-03-17T17:51:33.816978820Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:51:33.817101 containerd[1749]: time="2025-03-17T17:51:33.817087700Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:51:34.415852 containerd[1749]: time="2025-03-17T17:51:34.415805300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:51:34.415968 containerd[1749]: time="2025-03-17T17:51:34.415871060Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:51:34.415968 containerd[1749]: time="2025-03-17T17:51:34.415886900Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:51:34.415968 containerd[1749]: time="2025-03-17T17:51:34.415903380Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:51:34.415968 containerd[1749]: time="2025-03-17T17:51:34.415918100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:51:34.416171 containerd[1749]: time="2025-03-17T17:51:34.416138940Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416388940Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416545300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416563540Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416578420Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416592460Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416604940Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416618500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416631780Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416646420Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416659740Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416671620Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416682780Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416702100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 17 17:51:34.416980 containerd[1749]: time="2025-03-17T17:51:34.416716700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416728940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416742980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416754340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416766540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416778460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416791100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416803020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416816780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416827940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416838940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416851220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416865140Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416884940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416898900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417253 containerd[1749]: time="2025-03-17T17:51:34.416910260Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:51:34.417536 containerd[1749]: time="2025-03-17T17:51:34.416970780Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:51:34.417536 containerd[1749]: time="2025-03-17T17:51:34.416989820Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:51:34.417536 containerd[1749]: time="2025-03-17T17:51:34.417000700Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:51:34.417536 containerd[1749]: time="2025-03-17T17:51:34.417012300Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:51:34.417536 containerd[1749]: time="2025-03-17T17:51:34.417020940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 17 17:51:34.417536 containerd[1749]: time="2025-03-17T17:51:34.417032020Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:51:34.417536 containerd[1749]: time="2025-03-17T17:51:34.417041660Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:51:34.417536 containerd[1749]: time="2025-03-17T17:51:34.417051900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:51:34.417664 containerd[1749]: time="2025-03-17T17:51:34.417313820Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:51:34.417664 containerd[1749]: time="2025-03-17T17:51:34.417364340Z" level=info msg="Connect containerd service" Mar 17 17:51:34.417664 containerd[1749]: time="2025-03-17T17:51:34.417396580Z" level=info msg="using legacy CRI server" Mar 17 17:51:34.417664 containerd[1749]: time="2025-03-17T17:51:34.417403260Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:51:34.417664 containerd[1749]: time="2025-03-17T17:51:34.417536300Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:51:34.418143 containerd[1749]: time="2025-03-17T17:51:34.418108660Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Mar 17 17:51:34.424861 containerd[1749]: time="2025-03-17T17:51:34.418281660Z" level=info msg="Start subscribing containerd event" Mar 17 17:51:34.424861 containerd[1749]: time="2025-03-17T17:51:34.418327140Z" level=info msg="Start recovering state" Mar 17 17:51:34.424861 containerd[1749]: time="2025-03-17T17:51:34.418387220Z" level=info msg="Start event monitor" Mar 17 17:51:34.424861 containerd[1749]: time="2025-03-17T17:51:34.418397140Z" level=info msg="Start snapshots syncer" Mar 17 17:51:34.424861 containerd[1749]: time="2025-03-17T17:51:34.418405460Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:51:34.424861 containerd[1749]: time="2025-03-17T17:51:34.418411700Z" level=info msg="Start streaming server" Mar 17 17:51:34.424861 containerd[1749]: time="2025-03-17T17:51:34.418415420Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:51:34.424861 containerd[1749]: time="2025-03-17T17:51:34.418450620Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:51:34.424861 containerd[1749]: time="2025-03-17T17:51:34.418514740Z" level=info msg="containerd successfully booted in 0.629282s" Mar 17 17:51:34.418599 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:51:34.426154 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:51:34.436529 systemd[1]: Startup finished in 642ms (kernel) + 11.784s (initrd) + 24.731s (userspace) = 37.158s. Mar 17 17:51:38.473373 login[1878]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:38.473773 login[1877]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:38.487040 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:51:38.487371 systemd-logind[1714]: New session 1 of user core. 
Mar 17 17:51:38.495689 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:51:38.498952 systemd-logind[1714]: New session 2 of user core. Mar 17 17:51:38.504589 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:51:38.510711 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:51:38.770757 (systemd)[1889]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:51:38.773263 systemd-logind[1714]: New session c1 of user core. Mar 17 17:51:38.916239 systemd[1889]: Queued start job for default target default.target. Mar 17 17:51:38.923417 systemd[1889]: Created slice app.slice - User Application Slice. Mar 17 17:51:38.923448 systemd[1889]: Reached target paths.target - Paths. Mar 17 17:51:38.923587 systemd[1889]: Reached target timers.target - Timers. Mar 17 17:51:38.925592 systemd[1889]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:51:38.933361 systemd[1889]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:51:38.933409 systemd[1889]: Reached target sockets.target - Sockets. Mar 17 17:51:38.933447 systemd[1889]: Reached target basic.target - Basic System. Mar 17 17:51:38.933502 systemd[1889]: Reached target default.target - Main User Target. Mar 17 17:51:38.933528 systemd[1889]: Startup finished in 154ms. Mar 17 17:51:38.933757 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:51:38.942655 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:51:38.943332 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:51:43.112337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:51:43.120668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 17 17:51:43.601139 waagent[1875]: 2025-03-17T17:51:43.600973Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 17 17:51:43.606754 waagent[1875]: 2025-03-17T17:51:43.606676Z INFO Daemon Daemon OS: flatcar 4230.1.0 Mar 17 17:51:43.613642 waagent[1875]: 2025-03-17T17:51:43.613578Z INFO Daemon Daemon Python: 3.11.11 Mar 17 17:51:43.618227 waagent[1875]: 2025-03-17T17:51:43.618032Z INFO Daemon Daemon Run daemon Mar 17 17:51:43.622112 waagent[1875]: 2025-03-17T17:51:43.622059Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.0' Mar 17 17:51:43.631206 waagent[1875]: 2025-03-17T17:51:43.631140Z INFO Daemon Daemon Using waagent for provisioning Mar 17 17:51:43.636464 waagent[1875]: 2025-03-17T17:51:43.636416Z INFO Daemon Daemon Activate resource disk Mar 17 17:51:43.641366 waagent[1875]: 2025-03-17T17:51:43.641310Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 17 17:51:43.653951 waagent[1875]: 2025-03-17T17:51:43.653877Z INFO Daemon Daemon Found device: None Mar 17 17:51:43.659591 waagent[1875]: 2025-03-17T17:51:43.659526Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 17 17:51:43.668015 waagent[1875]: 2025-03-17T17:51:43.667957Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 17 17:51:43.679690 waagent[1875]: 2025-03-17T17:51:43.679640Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 17:51:43.685389 waagent[1875]: 2025-03-17T17:51:43.685332Z INFO Daemon Daemon Running default provisioning handler Mar 17 17:51:43.697457 waagent[1875]: 2025-03-17T17:51:43.696854Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 17 17:51:43.712670 waagent[1875]: 2025-03-17T17:51:43.712592Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 17:51:43.723026 waagent[1875]: 2025-03-17T17:51:43.722956Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 17:51:43.728551 waagent[1875]: 2025-03-17T17:51:43.728462Z INFO Daemon Daemon Copying ovf-env.xml Mar 17 17:51:44.565797 waagent[1875]: 2025-03-17T17:51:44.565685Z INFO Daemon Daemon Successfully mounted dvd Mar 17 17:51:44.581395 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 17 17:51:44.582868 waagent[1875]: 2025-03-17T17:51:44.582791Z INFO Daemon Daemon Detect protocol endpoint Mar 17 17:51:44.587843 waagent[1875]: 2025-03-17T17:51:44.587774Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 17:51:44.593463 waagent[1875]: 2025-03-17T17:51:44.593403Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 17 17:51:44.599922 waagent[1875]: 2025-03-17T17:51:44.599862Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 17 17:51:44.607667 waagent[1875]: 2025-03-17T17:51:44.607607Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 17 17:51:44.612700 waagent[1875]: 2025-03-17T17:51:44.612641Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 17 17:51:44.732591 waagent[1875]: 2025-03-17T17:51:44.732540Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 17 17:51:44.739272 waagent[1875]: 2025-03-17T17:51:44.739240Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 17 17:51:44.744560 waagent[1875]: 2025-03-17T17:51:44.744508Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 17 17:51:46.607306 waagent[1875]: 2025-03-17T17:51:45.554922Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 17 17:51:46.607306 waagent[1875]: 2025-03-17T17:51:45.561478Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 17 17:51:46.607306 waagent[1875]: 2025-03-17T17:51:45.570744Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 17:51:46.670886 waagent[1875]: 2025-03-17T17:51:46.665779Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Mar 17 17:51:46.671645 waagent[1875]: 2025-03-17T17:51:46.671592Z INFO Daemon Mar 17 17:51:46.674362 waagent[1875]: 2025-03-17T17:51:46.674318Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 39e4957d-fcb7-488b-bda9-b451a2940d87 eTag: 9143414344336922916 source: Fabric] Mar 17 17:51:46.686056 waagent[1875]: 2025-03-17T17:51:46.686010Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 17 17:51:46.693046 waagent[1875]: 2025-03-17T17:51:46.692999Z INFO Daemon Mar 17 17:51:46.695716 waagent[1875]: 2025-03-17T17:51:46.695672Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 17 17:51:46.706964 waagent[1875]: 2025-03-17T17:51:46.706928Z INFO Daemon Daemon Downloading artifacts profile blob Mar 17 17:51:46.790693 waagent[1875]: 2025-03-17T17:51:46.790605Z INFO Daemon Downloaded certificate {'thumbprint': 'E3CDC505828B4E7854870D074B05B9B45D07777A', 'hasPrivateKey': True} Mar 17 17:51:46.800249 waagent[1875]: 2025-03-17T17:51:46.800200Z INFO Daemon Downloaded certificate {'thumbprint': 'E034D2A4A3C4844296EFF3F33786176FE5DCF845', 'hasPrivateKey': False} Mar 17 17:51:46.810771 waagent[1875]: 2025-03-17T17:51:46.810264Z INFO Daemon Fetch goal state completed Mar 17 17:51:46.822047 waagent[1875]: 2025-03-17T17:51:46.821994Z INFO Daemon Daemon Starting provisioning Mar 17 17:51:46.827553 waagent[1875]: 2025-03-17T17:51:46.827486Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 17 17:51:46.832358 waagent[1875]: 2025-03-17T17:51:46.832308Z INFO Daemon Daemon Set hostname [ci-4230.1.0-a-76d88708f5] Mar 17 17:51:48.621494 waagent[1875]: 2025-03-17T17:51:48.619240Z INFO Daemon Daemon Publish hostname [ci-4230.1.0-a-76d88708f5] Mar 17 17:51:48.625973 waagent[1875]: 2025-03-17T17:51:48.625895Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 17 17:51:48.632139 waagent[1875]: 2025-03-17T17:51:48.632086Z INFO Daemon Daemon Primary interface is [eth0] Mar 17 17:51:48.644631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:51:48.646426 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:51:48.647959 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:51:48.647964 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:51:48.647990 systemd-networkd[1432]: eth0: DHCP lease lost Mar 17 17:51:48.654403 waagent[1875]: 2025-03-17T17:51:48.649788Z INFO Daemon Daemon Create user account if not exists Mar 17 17:51:48.657496 waagent[1875]: 2025-03-17T17:51:48.655932Z INFO Daemon Daemon User core already exists, skip useradd Mar 17 17:51:48.662654 waagent[1875]: 2025-03-17T17:51:48.662567Z INFO Daemon Daemon Configure sudoer Mar 17 17:51:48.667641 waagent[1875]: 2025-03-17T17:51:48.667561Z INFO Daemon Daemon Configure sshd Mar 17 17:51:48.674886 waagent[1875]: 2025-03-17T17:51:48.673767Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 17 17:51:48.687894 waagent[1875]: 2025-03-17T17:51:48.687811Z INFO Daemon Daemon Deploy ssh public key. 
Mar 17 17:51:48.693559 systemd-networkd[1432]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 17:51:48.710978 kubelet[1948]: E0317 17:51:48.710935 1948 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:51:48.714435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:51:48.714595 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:51:48.714851 systemd[1]: kubelet.service: Consumed 117ms CPU time, 96.9M memory peak. Mar 17 17:51:48.801512 waagent[1875]: 2025-03-17T17:51:48.796844Z INFO Daemon Daemon Provisioning complete Mar 17 17:51:48.814808 waagent[1875]: 2025-03-17T17:51:48.814759Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 17 17:51:48.821079 waagent[1875]: 2025-03-17T17:51:48.821022Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 17 17:51:48.831355 waagent[1875]: 2025-03-17T17:51:48.831303Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 17 17:51:48.959062 waagent[1961]: 2025-03-17T17:51:48.958471Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 17 17:51:48.959062 waagent[1961]: 2025-03-17T17:51:48.958645Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.0 Mar 17 17:51:48.959062 waagent[1961]: 2025-03-17T17:51:48.958698Z INFO ExtHandler ExtHandler Python: 3.11.11 Mar 17 17:51:48.996166 waagent[1961]: 2025-03-17T17:51:48.996082Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 17 17:51:48.996512 waagent[1961]: 2025-03-17T17:51:48.996456Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:51:48.996674 waagent[1961]: 2025-03-17T17:51:48.996640Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:51:49.007798 waagent[1961]: 2025-03-17T17:51:49.007731Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 17:51:49.013647 waagent[1961]: 2025-03-17T17:51:49.013602Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 17 17:51:49.015497 waagent[1961]: 2025-03-17T17:51:49.014266Z INFO ExtHandler Mar 17 17:51:49.015497 waagent[1961]: 2025-03-17T17:51:49.014342Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d27b837d-b4af-4bf9-91b3-612bc1a8a395 eTag: 9143414344336922916 source: Fabric] Mar 17 17:51:49.015497 waagent[1961]: 2025-03-17T17:51:49.014628Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 17 17:51:49.015497 waagent[1961]: 2025-03-17T17:51:49.015145Z INFO ExtHandler Mar 17 17:51:49.015497 waagent[1961]: 2025-03-17T17:51:49.015210Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 17 17:51:49.019167 waagent[1961]: 2025-03-17T17:51:49.019131Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 17:51:49.105516 waagent[1961]: 2025-03-17T17:51:49.105264Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E3CDC505828B4E7854870D074B05B9B45D07777A', 'hasPrivateKey': True} Mar 17 17:51:49.105811 waagent[1961]: 2025-03-17T17:51:49.105765Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E034D2A4A3C4844296EFF3F33786176FE5DCF845', 'hasPrivateKey': False} Mar 17 17:51:49.106210 waagent[1961]: 2025-03-17T17:51:49.106170Z INFO ExtHandler Fetch goal state completed Mar 17 17:51:49.123658 waagent[1961]: 2025-03-17T17:51:49.123605Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1961 Mar 17 17:51:49.123807 waagent[1961]: 2025-03-17T17:51:49.123770Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 17 17:51:49.125386 waagent[1961]: 2025-03-17T17:51:49.125340Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.0', '', 'Flatcar Container Linux by Kinvolk'] Mar 17 17:51:49.125777 waagent[1961]: 2025-03-17T17:51:49.125739Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 17 17:51:49.163810 waagent[1961]: 2025-03-17T17:51:49.163766Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 17 17:51:49.164004 waagent[1961]: 2025-03-17T17:51:49.163965Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 17 17:51:49.169285 waagent[1961]: 2025-03-17T17:51:49.169248Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Mar 17 17:51:49.175033 systemd[1]: Reload requested from client PID 1976 ('systemctl') (unit waagent.service)... Mar 17 17:51:49.175046 systemd[1]: Reloading... Mar 17 17:51:49.275714 zram_generator::config[2033]: No configuration found. Mar 17 17:51:49.353542 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:51:49.454297 systemd[1]: Reloading finished in 278 ms. Mar 17 17:51:49.468123 waagent[1961]: 2025-03-17T17:51:49.467773Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 17 17:51:49.474129 systemd[1]: Reload requested from client PID 2069 ('systemctl') (unit waagent.service)... Mar 17 17:51:49.474143 systemd[1]: Reloading... Mar 17 17:51:49.564506 zram_generator::config[2111]: No configuration found. Mar 17 17:51:49.664893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:51:49.765803 systemd[1]: Reloading finished in 291 ms. Mar 17 17:51:49.787125 waagent[1961]: 2025-03-17T17:51:49.782970Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 17 17:51:49.787125 waagent[1961]: 2025-03-17T17:51:49.783157Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 17 17:51:50.043527 waagent[1961]: 2025-03-17T17:51:50.043383Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 17 17:51:50.044099 waagent[1961]: 2025-03-17T17:51:50.044027Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 17 17:51:50.044913 waagent[1961]: 2025-03-17T17:51:50.044827Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 17 17:51:50.045067 waagent[1961]: 2025-03-17T17:51:50.044962Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:51:50.045306 waagent[1961]: 2025-03-17T17:51:50.045127Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:51:50.045550 waagent[1961]: 2025-03-17T17:51:50.045468Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 17 17:51:50.045774 waagent[1961]: 2025-03-17T17:51:50.045694Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 17 17:51:50.045857 waagent[1961]: 2025-03-17T17:51:50.045820Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:51:50.045926 waagent[1961]: 2025-03-17T17:51:50.045897Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:51:50.046059 waagent[1961]: 2025-03-17T17:51:50.046024Z INFO EnvHandler ExtHandler Configure routes Mar 17 17:51:50.046118 waagent[1961]: 2025-03-17T17:51:50.046090Z INFO EnvHandler ExtHandler Gateway:None Mar 17 17:51:50.046169 waagent[1961]: 2025-03-17T17:51:50.046141Z INFO EnvHandler ExtHandler Routes:None Mar 17 17:51:50.047130 waagent[1961]: 2025-03-17T17:51:50.047075Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 17:51:50.047130 waagent[1961]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 17:51:50.047130 waagent[1961]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 17:51:50.047130 waagent[1961]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 17:51:50.047130 waagent[1961]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:51:50.047130 waagent[1961]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 
17:51:50.047130 waagent[1961]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:51:50.048649 waagent[1961]: 2025-03-17T17:51:50.047201Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 17:51:50.048649 waagent[1961]: 2025-03-17T17:51:50.047529Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 17 17:51:50.048932 waagent[1961]: 2025-03-17T17:51:50.048791Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 17:51:50.048932 waagent[1961]: 2025-03-17T17:51:50.048869Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 17 17:51:50.049062 waagent[1961]: 2025-03-17T17:51:50.049013Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 17:51:50.059694 waagent[1961]: 2025-03-17T17:51:50.059652Z INFO ExtHandler ExtHandler Mar 17 17:51:50.060517 waagent[1961]: 2025-03-17T17:51:50.059865Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b4d723d2-e0f2-4a28-95e7-80d8c8c65285 correlation 4a47988d-c366-4467-9978-4cee7d47e3c8 created: 2025-03-17T17:50:10.986717Z] Mar 17 17:51:50.060517 waagent[1961]: 2025-03-17T17:51:50.060247Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 17 17:51:50.060878 waagent[1961]: 2025-03-17T17:51:50.060834Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 17 17:51:50.099260 waagent[1961]: 2025-03-17T17:51:50.099196Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 681256CA-5717-4254-8DDA-97C3072CDF02;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 17 17:51:50.112596 waagent[1961]: 2025-03-17T17:51:50.112530Z INFO MonitorHandler ExtHandler Network interfaces: Mar 17 17:51:50.112596 waagent[1961]: Executing ['ip', '-a', '-o', 'link']: Mar 17 17:51:50.112596 waagent[1961]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 17 17:51:50.112596 waagent[1961]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:8a:1e brd ff:ff:ff:ff:ff:ff Mar 17 17:51:50.112596 waagent[1961]: 3: enP34902s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:8a:1e brd ff:ff:ff:ff:ff:ff\ altname enP34902p0s2 Mar 17 17:51:50.112596 waagent[1961]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 17 17:51:50.112596 waagent[1961]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 17 17:51:50.112596 waagent[1961]: 2: eth0 inet 10.200.20.19/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 17 17:51:50.112596 waagent[1961]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 17 17:51:50.112596 waagent[1961]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 17 17:51:50.112596 waagent[1961]: 2: eth0 inet6 fe80::222:48ff:fe78:8a1e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 17 
17:51:50.112596 waagent[1961]: 3: enP34902s1 inet6 fe80::222:48ff:fe78:8a1e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 17 17:51:50.169533 waagent[1961]: 2025-03-17T17:51:50.169203Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Mar 17 17:51:50.169533 waagent[1961]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:50.169533 waagent[1961]: pkts bytes target prot opt in out source destination Mar 17 17:51:50.169533 waagent[1961]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:50.169533 waagent[1961]: pkts bytes target prot opt in out source destination Mar 17 17:51:50.169533 waagent[1961]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:50.169533 waagent[1961]: pkts bytes target prot opt in out source destination Mar 17 17:51:50.169533 waagent[1961]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 17:51:50.169533 waagent[1961]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 17:51:50.169533 waagent[1961]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 17:51:50.172113 waagent[1961]: 2025-03-17T17:51:50.172050Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 17 17:51:50.172113 waagent[1961]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:50.172113 waagent[1961]: pkts bytes target prot opt in out source destination Mar 17 17:51:50.172113 waagent[1961]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:50.172113 waagent[1961]: pkts bytes target prot opt in out source destination Mar 17 17:51:50.172113 waagent[1961]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:51:50.172113 waagent[1961]: pkts bytes target prot opt in out source destination Mar 17 17:51:50.172113 waagent[1961]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 17:51:50.172113 waagent[1961]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 
17:51:50.172113 waagent[1961]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 17:51:50.172350 waagent[1961]: 2025-03-17T17:51:50.172313Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 17 17:51:55.231347 chronyd[1697]: Selected source PHC0 Mar 17 17:51:58.942390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:51:58.948652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:51:59.195854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:51:59.198881 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:51:59.237399 kubelet[2204]: E0317 17:51:59.237302 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:51:59.239452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:51:59.239622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:51:59.240053 systemd[1]: kubelet.service: Consumed 116ms CPU time, 94.9M memory peak. Mar 17 17:52:09.442582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 17:52:09.450638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:09.724727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:52:09.728342 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:52:09.760504 kubelet[2219]: E0317 17:52:09.760254 2219 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:52:09.762377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:52:09.762543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:52:09.763003 systemd[1]: kubelet.service: Consumed 104ms CPU time, 94M memory peak. Mar 17 17:52:10.134251 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 17 17:52:17.297573 update_engine[1718]: I20250317 17:52:17.297508 1718 update_attempter.cc:509] Updating boot flags... Mar 17 17:52:17.344555 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2242) Mar 17 17:52:17.442628 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2232) Mar 17 17:52:18.214508 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:52:18.215761 systemd[1]: Started sshd@0-10.200.20.19:22-10.200.16.10:41890.service - OpenSSH per-connection server daemon (10.200.16.10:41890). Mar 17 17:52:18.763896 sshd[2342]: Accepted publickey for core from 10.200.16.10 port 41890 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:52:18.765085 sshd-session[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:52:18.770324 systemd-logind[1714]: New session 3 of user core. Mar 17 17:52:18.775661 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 17 17:52:19.167694 systemd[1]: Started sshd@1-10.200.20.19:22-10.200.16.10:34120.service - OpenSSH per-connection server daemon (10.200.16.10:34120). Mar 17 17:52:19.610148 sshd[2347]: Accepted publickey for core from 10.200.16.10 port 34120 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:52:19.611358 sshd-session[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:52:19.616537 systemd-logind[1714]: New session 4 of user core. Mar 17 17:52:19.621614 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:52:19.862098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 17:52:19.872828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:19.932193 sshd[2349]: Connection closed by 10.200.16.10 port 34120 Mar 17 17:52:19.932779 sshd-session[2347]: pam_unix(sshd:session): session closed for user core Mar 17 17:52:19.936079 systemd[1]: sshd@1-10.200.20.19:22-10.200.16.10:34120.service: Deactivated successfully. Mar 17 17:52:19.937933 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:52:19.938817 systemd-logind[1714]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:52:19.939983 systemd-logind[1714]: Removed session 4. Mar 17 17:52:20.013174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:20.016863 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:52:20.019561 systemd[1]: Started sshd@2-10.200.20.19:22-10.200.16.10:34126.service - OpenSSH per-connection server daemon (10.200.16.10:34126). 
Mar 17 17:52:20.053439 kubelet[2362]: E0317 17:52:20.053376 2362 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:52:20.055736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:52:20.055886 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:52:20.056247 systemd[1]: kubelet.service: Consumed 110ms CPU time, 94.3M memory peak.
Mar 17 17:52:20.502460 sshd[2364]: Accepted publickey for core from 10.200.16.10 port 34126 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY
Mar 17 17:52:20.503722 sshd-session[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:20.507569 systemd-logind[1714]: New session 5 of user core.
Mar 17 17:52:20.514644 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 17 17:52:20.855273 sshd[2372]: Connection closed by 10.200.16.10 port 34126
Mar 17 17:52:20.855131 sshd-session[2364]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:20.858380 systemd[1]: sshd@2-10.200.20.19:22-10.200.16.10:34126.service: Deactivated successfully.
Mar 17 17:52:20.860123 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 17:52:20.861449 systemd-logind[1714]: Session 5 logged out. Waiting for processes to exit.
Mar 17 17:52:20.862346 systemd-logind[1714]: Removed session 5.
Mar 17 17:52:20.941512 systemd[1]: Started sshd@3-10.200.20.19:22-10.200.16.10:34132.service - OpenSSH per-connection server daemon (10.200.16.10:34132).
Mar 17 17:52:21.425587 sshd[2378]: Accepted publickey for core from 10.200.16.10 port 34132 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY
Mar 17 17:52:21.426781 sshd-session[2378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:21.432010 systemd-logind[1714]: New session 6 of user core.
Mar 17 17:52:21.437655 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 17:52:21.776576 sshd[2380]: Connection closed by 10.200.16.10 port 34132
Mar 17 17:52:21.777100 sshd-session[2378]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:21.779758 systemd-logind[1714]: Session 6 logged out. Waiting for processes to exit.
Mar 17 17:52:21.781361 systemd[1]: sshd@3-10.200.20.19:22-10.200.16.10:34132.service: Deactivated successfully.
Mar 17 17:52:21.783010 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 17:52:21.785018 systemd-logind[1714]: Removed session 6.
Mar 17 17:52:21.856829 systemd[1]: Started sshd@4-10.200.20.19:22-10.200.16.10:34148.service - OpenSSH per-connection server daemon (10.200.16.10:34148).
Mar 17 17:52:22.308134 sshd[2386]: Accepted publickey for core from 10.200.16.10 port 34148 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY
Mar 17 17:52:22.309306 sshd-session[2386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:22.314517 systemd-logind[1714]: New session 7 of user core.
Mar 17 17:52:22.320647 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 17:52:22.691872 sudo[2389]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 17 17:52:22.692129 sudo[2389]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:52:22.717624 sudo[2389]: pam_unix(sudo:session): session closed for user root
Mar 17 17:52:22.788211 sshd[2388]: Connection closed by 10.200.16.10 port 34148
Mar 17 17:52:22.788075 sshd-session[2386]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:22.791072 systemd-logind[1714]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:52:22.791335 systemd[1]: sshd@4-10.200.20.19:22-10.200.16.10:34148.service: Deactivated successfully.
Mar 17 17:52:22.792886 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:52:22.794356 systemd-logind[1714]: Removed session 7.
Mar 17 17:52:22.877929 systemd[1]: Started sshd@5-10.200.20.19:22-10.200.16.10:34164.service - OpenSSH per-connection server daemon (10.200.16.10:34164).
Mar 17 17:52:23.368153 sshd[2395]: Accepted publickey for core from 10.200.16.10 port 34164 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY
Mar 17 17:52:23.369366 sshd-session[2395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:23.373507 systemd-logind[1714]: New session 8 of user core.
Mar 17 17:52:23.379597 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 17:52:23.641296 sudo[2399]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 17 17:52:23.642042 sudo[2399]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:52:23.645081 sudo[2399]: pam_unix(sudo:session): session closed for user root
Mar 17 17:52:23.649109 sudo[2398]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 17 17:52:23.649340 sudo[2398]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:52:23.666892 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:52:23.687097 augenrules[2421]: No rules
Mar 17 17:52:23.688181 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:52:23.688504 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:52:23.689693 sudo[2398]: pam_unix(sudo:session): session closed for user root
Mar 17 17:52:23.770893 sshd[2397]: Connection closed by 10.200.16.10 port 34164
Mar 17 17:52:23.771606 sshd-session[2395]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:23.775080 systemd[1]: sshd@5-10.200.20.19:22-10.200.16.10:34164.service: Deactivated successfully.
Mar 17 17:52:23.776598 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 17:52:23.777265 systemd-logind[1714]: Session 8 logged out. Waiting for processes to exit.
Mar 17 17:52:23.778087 systemd-logind[1714]: Removed session 8.
Mar 17 17:52:23.863599 systemd[1]: Started sshd@6-10.200.20.19:22-10.200.16.10:34176.service - OpenSSH per-connection server daemon (10.200.16.10:34176).
Mar 17 17:52:24.352960 sshd[2430]: Accepted publickey for core from 10.200.16.10 port 34176 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY
Mar 17 17:52:24.354145 sshd-session[2430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:24.358400 systemd-logind[1714]: New session 9 of user core.
Mar 17 17:52:24.367596 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:52:24.626091 sudo[2433]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 17:52:24.626635 sudo[2433]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:52:25.866698 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 17 17:52:25.866767 (dockerd)[2450]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 17 17:52:26.563347 dockerd[2450]: time="2025-03-17T17:52:26.563297211Z" level=info msg="Starting up"
Mar 17 17:52:26.820541 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport404441257-merged.mount: Deactivated successfully.
Mar 17 17:52:26.888282 dockerd[2450]: time="2025-03-17T17:52:26.888246882Z" level=info msg="Loading containers: start."
Mar 17 17:52:27.078499 kernel: Initializing XFRM netlink socket
Mar 17 17:52:27.167594 systemd-networkd[1432]: docker0: Link UP
Mar 17 17:52:27.206556 dockerd[2450]: time="2025-03-17T17:52:27.206515427Z" level=info msg="Loading containers: done."
Mar 17 17:52:27.231127 dockerd[2450]: time="2025-03-17T17:52:27.230729170Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 17:52:27.231127 dockerd[2450]: time="2025-03-17T17:52:27.230828930Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Mar 17 17:52:27.231127 dockerd[2450]: time="2025-03-17T17:52:27.230946610Z" level=info msg="Daemon has completed initialization"
Mar 17 17:52:27.272464 dockerd[2450]: time="2025-03-17T17:52:27.272389490Z" level=info msg="API listen on /run/docker.sock"
Mar 17 17:52:27.272909 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 17 17:52:28.392906 containerd[1749]: time="2025-03-17T17:52:28.392809515Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\""
Mar 17 17:52:29.171026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417147347.mount: Deactivated successfully.
Mar 17 17:52:30.192666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 17 17:52:30.200078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:30.302659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:30.307317 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:52:30.349050 kubelet[2697]: E0317 17:52:30.348667 2697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:52:30.351264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:52:30.351581 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:52:30.353035 systemd[1]: kubelet.service: Consumed 116ms CPU time, 94.2M memory peak.
Mar 17 17:52:30.698507 containerd[1749]: time="2025-03-17T17:52:30.697693890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:30.711175 containerd[1749]: time="2025-03-17T17:52:30.711088379Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552766"
Mar 17 17:52:30.714097 containerd[1749]: time="2025-03-17T17:52:30.714035301Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:30.726719 containerd[1749]: time="2025-03-17T17:52:30.726651670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:30.728150 containerd[1749]: time="2025-03-17T17:52:30.727739071Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 2.334892276s"
Mar 17 17:52:30.728150 containerd[1749]: time="2025-03-17T17:52:30.727776511Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\""
Mar 17 17:52:30.728385 containerd[1749]: time="2025-03-17T17:52:30.728358111Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 17 17:52:32.297649 containerd[1749]: time="2025-03-17T17:52:32.297589490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:32.301293 containerd[1749]: time="2025-03-17T17:52:32.301245533Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458978"
Mar 17 17:52:32.304160 containerd[1749]: time="2025-03-17T17:52:32.304113055Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:32.315672 containerd[1749]: time="2025-03-17T17:52:32.315618703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:32.317012 containerd[1749]: time="2025-03-17T17:52:32.316722743Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.588333872s"
Mar 17 17:52:32.317012 containerd[1749]: time="2025-03-17T17:52:32.316752424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\""
Mar 17 17:52:32.317351 containerd[1749]: time="2025-03-17T17:52:32.317330984Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 17 17:52:33.493771 containerd[1749]: time="2025-03-17T17:52:33.493717304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:33.500543 containerd[1749]: time="2025-03-17T17:52:33.500276951Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125829"
Mar 17 17:52:33.504357 containerd[1749]: time="2025-03-17T17:52:33.504323715Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:33.518718 containerd[1749]: time="2025-03-17T17:52:33.518645011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:33.519774 containerd[1749]: time="2025-03-17T17:52:33.519648252Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.202230468s"
Mar 17 17:52:33.519774 containerd[1749]: time="2025-03-17T17:52:33.519684332Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\""
Mar 17 17:52:33.520597 containerd[1749]: time="2025-03-17T17:52:33.520294933Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 17 17:52:34.995375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184566237.mount: Deactivated successfully.
Mar 17 17:52:35.330674 containerd[1749]: time="2025-03-17T17:52:35.330549994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:35.332960 containerd[1749]: time="2025-03-17T17:52:35.332786397Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871915"
Mar 17 17:52:35.338291 containerd[1749]: time="2025-03-17T17:52:35.338242402Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:35.344469 containerd[1749]: time="2025-03-17T17:52:35.344394809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:35.345452 containerd[1749]: time="2025-03-17T17:52:35.345277770Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.824954237s"
Mar 17 17:52:35.345452 containerd[1749]: time="2025-03-17T17:52:35.345315650Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\""
Mar 17 17:52:35.346145 containerd[1749]: time="2025-03-17T17:52:35.346114411Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 17:52:36.017436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1426575708.mount: Deactivated successfully.
Mar 17 17:52:37.073738 containerd[1749]: time="2025-03-17T17:52:37.073687161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:37.078539 containerd[1749]: time="2025-03-17T17:52:37.078505806Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Mar 17 17:52:37.086920 containerd[1749]: time="2025-03-17T17:52:37.086874735Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:37.094561 containerd[1749]: time="2025-03-17T17:52:37.094509503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:37.095564 containerd[1749]: time="2025-03-17T17:52:37.095539224Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.749386373s"
Mar 17 17:52:37.095736 containerd[1749]: time="2025-03-17T17:52:37.095641664Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Mar 17 17:52:37.096132 containerd[1749]: time="2025-03-17T17:52:37.096109345Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 17 17:52:37.634829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899073172.mount: Deactivated successfully.
Mar 17 17:52:37.663525 containerd[1749]: time="2025-03-17T17:52:37.663067345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:37.668368 containerd[1749]: time="2025-03-17T17:52:37.668108791Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 17 17:52:37.676258 containerd[1749]: time="2025-03-17T17:52:37.676206039Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:37.682220 containerd[1749]: time="2025-03-17T17:52:37.682153006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:37.684490 containerd[1749]: time="2025-03-17T17:52:37.684323248Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 588.181423ms"
Mar 17 17:52:37.684490 containerd[1749]: time="2025-03-17T17:52:37.684368048Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 17 17:52:37.687017 containerd[1749]: time="2025-03-17T17:52:37.686844371Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 17 17:52:38.440853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3090941738.mount: Deactivated successfully.
Mar 17 17:52:40.442401 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 17 17:52:40.448889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:40.550657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:40.554746 (kubelet)[2829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:52:40.591839 kubelet[2829]: E0317 17:52:40.591455 2829 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:52:40.594367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:52:40.594528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:52:40.594793 systemd[1]: kubelet.service: Consumed 112ms CPU time, 94.1M memory peak.
Mar 17 17:52:41.077641 containerd[1749]: time="2025-03-17T17:52:41.077585243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:41.080844 containerd[1749]: time="2025-03-17T17:52:41.080797446Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Mar 17 17:52:41.084508 containerd[1749]: time="2025-03-17T17:52:41.084454770Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:41.094032 containerd[1749]: time="2025-03-17T17:52:41.093983420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:41.095254 containerd[1749]: time="2025-03-17T17:52:41.095118541Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.40824069s"
Mar 17 17:52:41.095254 containerd[1749]: time="2025-03-17T17:52:41.095151541Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Mar 17 17:52:46.815098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:46.815622 systemd[1]: kubelet.service: Consumed 112ms CPU time, 94.1M memory peak.
Mar 17 17:52:46.824692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:46.853576 systemd[1]: Reload requested from client PID 2864 ('systemctl') (unit session-9.scope)...
Mar 17 17:52:46.853593 systemd[1]: Reloading...
Mar 17 17:52:46.963599 zram_generator::config[2914]: No configuration found.
Mar 17 17:52:47.061429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:52:47.162324 systemd[1]: Reloading finished in 308 ms.
Mar 17 17:52:47.208006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:47.211840 (kubelet)[2968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:52:47.216638 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:47.217461 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:52:47.217789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:47.217888 systemd[1]: kubelet.service: Consumed 78ms CPU time, 84M memory peak.
Mar 17 17:52:47.224757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:47.308081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:47.316755 (kubelet)[2985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:52:47.349514 kubelet[2985]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:52:47.349514 kubelet[2985]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:52:47.349514 kubelet[2985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:52:47.349514 kubelet[2985]: I0317 17:52:47.348971 2985 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:52:48.351496 kubelet[2985]: I0317 17:52:48.351299 2985 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 17 17:52:48.351496 kubelet[2985]: I0317 17:52:48.351328 2985 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:52:48.352507 kubelet[2985]: I0317 17:52:48.352044 2985 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 17 17:52:48.371150 kubelet[2985]: E0317 17:52:48.371115 2985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:52:48.371620 kubelet[2985]: I0317 17:52:48.371514 2985 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:52:48.377142 kubelet[2985]: E0317 17:52:48.377100 2985 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 17:52:48.377142 kubelet[2985]: I0317 17:52:48.377134 2985 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 17:52:48.380958 kubelet[2985]: I0317 17:52:48.380935 2985 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:52:48.381053 kubelet[2985]: I0317 17:52:48.381037 2985 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 17 17:52:48.381179 kubelet[2985]: I0317 17:52:48.381154 2985 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:52:48.381342 kubelet[2985]: I0317 17:52:48.381179 2985 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-a-76d88708f5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 17:52:48.381426 kubelet[2985]: I0317 17:52:48.381350 2985 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:52:48.381426 kubelet[2985]: I0317 17:52:48.381359 2985 container_manager_linux.go:300] "Creating device plugin manager"
Mar 17 17:52:48.381496 kubelet[2985]: I0317 17:52:48.381467 2985 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:52:48.383225 kubelet[2985]: I0317 17:52:48.383006 2985 kubelet.go:408] "Attempting to sync node with API server"
Mar 17 17:52:48.383225 kubelet[2985]: I0317 17:52:48.383030 2985 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:52:48.383225 kubelet[2985]: I0317 17:52:48.383053 2985 kubelet.go:314] "Adding apiserver pod source"
Mar 17 17:52:48.383225 kubelet[2985]: I0317 17:52:48.383063 2985 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:52:48.385859 kubelet[2985]: W0317 17:52:48.385818 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-76d88708f5&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused
Mar 17 17:52:48.386382 kubelet[2985]: E0317 17:52:48.386186 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-76d88708f5&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:52:48.386382 kubelet[2985]: I0317 17:52:48.386274 2985 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:52:48.387777 kubelet[2985]: I0317 17:52:48.387757 2985 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:52:48.388511 kubelet[2985]: W0317 17:52:48.388247 2985 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:52:48.389592 kubelet[2985]: I0317 17:52:48.389576 2985 server.go:1269] "Started kubelet"
Mar 17 17:52:48.390452 kubelet[2985]: W0317 17:52:48.390396 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused
Mar 17 17:52:48.390538 kubelet[2985]: E0317 17:52:48.390459 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:52:48.390607 kubelet[2985]: I0317 17:52:48.390579 2985 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:52:48.391470 kubelet[2985]: I0317 17:52:48.391449 2985 server.go:460] "Adding debug handlers to kubelet server"
Mar 17 17:52:48.394599 kubelet[2985]: I0317 17:52:48.394542 2985 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:52:48.395091 kubelet[2985]: I0317 17:52:48.394905 2985 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:52:48.397149 kubelet[2985]: I0317 17:52:48.397122 2985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:52:48.398294 kubelet[2985]: E0317 17:52:48.396843 
2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.19:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.0-a-76d88708f5.182da890417a7817 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-76d88708f5,UID:ci-4230.1.0-a-76d88708f5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-76d88708f5,},FirstTimestamp:2025-03-17 17:52:48.389552151 +0000 UTC m=+1.069622358,LastTimestamp:2025-03-17 17:52:48.389552151 +0000 UTC m=+1.069622358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-76d88708f5,}" Mar 17 17:52:48.399521 kubelet[2985]: I0317 17:52:48.399501 2985 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:52:48.401160 kubelet[2985]: I0317 17:52:48.401130 2985 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:52:48.404735 kubelet[2985]: E0317 17:52:48.401528 2985 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-76d88708f5\" not found" Mar 17 17:52:48.404735 kubelet[2985]: W0317 17:52:48.403894 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Mar 17 17:52:48.404839 kubelet[2985]: E0317 17:52:48.404758 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:52:48.404839 kubelet[2985]: I0317 17:52:48.404172 2985 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:52:48.405163 kubelet[2985]: I0317 17:52:48.404922 2985 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:52:48.405163 kubelet[2985]: E0317 17:52:48.403961 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-76d88708f5?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="200ms" Mar 17 17:52:48.406592 kubelet[2985]: E0317 17:52:48.406565 2985 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:52:48.406952 kubelet[2985]: I0317 17:52:48.406929 2985 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:52:48.406952 kubelet[2985]: I0317 17:52:48.406947 2985 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:52:48.407426 kubelet[2985]: I0317 17:52:48.407394 2985 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:52:48.415294 kubelet[2985]: I0317 17:52:48.415214 2985 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:52:48.416190 kubelet[2985]: I0317 17:52:48.416164 2985 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:52:48.416190 kubelet[2985]: I0317 17:52:48.416191 2985 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:52:48.416256 kubelet[2985]: I0317 17:52:48.416209 2985 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:52:48.416278 kubelet[2985]: E0317 17:52:48.416252 2985 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:52:48.421988 kubelet[2985]: W0317 17:52:48.421889 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Mar 17 17:52:48.421988 kubelet[2985]: E0317 17:52:48.421938 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:52:48.434010 kubelet[2985]: I0317 17:52:48.433828 2985 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:52:48.434010 kubelet[2985]: I0317 17:52:48.433913 2985 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:52:48.434010 kubelet[2985]: I0317 17:52:48.433938 2985 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:52:48.438809 kubelet[2985]: I0317 17:52:48.438655 2985 policy_none.go:49] "None policy: Start" Mar 17 17:52:48.439246 kubelet[2985]: I0317 17:52:48.439217 2985 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:52:48.439246 kubelet[2985]: I0317 17:52:48.439241 2985 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:52:48.450344 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Mar 17 17:52:48.461186 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:52:48.464911 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:52:48.475754 kubelet[2985]: I0317 17:52:48.475241 2985 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:52:48.475754 kubelet[2985]: I0317 17:52:48.475439 2985 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:52:48.475754 kubelet[2985]: I0317 17:52:48.475448 2985 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:52:48.475754 kubelet[2985]: I0317 17:52:48.475733 2985 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:52:48.477086 kubelet[2985]: E0317 17:52:48.477007 2985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.0-a-76d88708f5\" not found" Mar 17 17:52:48.525833 systemd[1]: Created slice kubepods-burstable-pod4229d0629e935ef5f03f36b66fd6eaef.slice - libcontainer container kubepods-burstable-pod4229d0629e935ef5f03f36b66fd6eaef.slice. Mar 17 17:52:48.544656 systemd[1]: Created slice kubepods-burstable-pode30d485287c91a15a79e8f280854a32b.slice - libcontainer container kubepods-burstable-pode30d485287c91a15a79e8f280854a32b.slice. Mar 17 17:52:48.554840 systemd[1]: Created slice kubepods-burstable-pod1b1eb155b686705a0cb518ed0019703c.slice - libcontainer container kubepods-burstable-pod1b1eb155b686705a0cb518ed0019703c.slice. 
Mar 17 17:52:48.576965 kubelet[2985]: I0317 17:52:48.576936 2985 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.577447 kubelet[2985]: E0317 17:52:48.577422 2985 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.606076 kubelet[2985]: E0317 17:52:48.605979 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-76d88708f5?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="400ms" Mar 17 17:52:48.705926 kubelet[2985]: I0317 17:52:48.705807 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e30d485287c91a15a79e8f280854a32b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-a-76d88708f5\" (UID: \"e30d485287c91a15a79e8f280854a32b\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.705926 kubelet[2985]: I0317 17:52:48.705847 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b1eb155b686705a0cb518ed0019703c-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-76d88708f5\" (UID: \"1b1eb155b686705a0cb518ed0019703c\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.705926 kubelet[2985]: I0317 17:52:48.705864 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b1eb155b686705a0cb518ed0019703c-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-a-76d88708f5\" (UID: 
\"1b1eb155b686705a0cb518ed0019703c\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.705926 kubelet[2985]: I0317 17:52:48.705878 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4229d0629e935ef5f03f36b66fd6eaef-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-a-76d88708f5\" (UID: \"4229d0629e935ef5f03f36b66fd6eaef\") " pod="kube-system/kube-scheduler-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.705926 kubelet[2985]: I0317 17:52:48.705894 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e30d485287c91a15a79e8f280854a32b-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-a-76d88708f5\" (UID: \"e30d485287c91a15a79e8f280854a32b\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.706137 kubelet[2985]: I0317 17:52:48.705910 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e30d485287c91a15a79e8f280854a32b-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-a-76d88708f5\" (UID: \"e30d485287c91a15a79e8f280854a32b\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.706137 kubelet[2985]: I0317 17:52:48.705924 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b1eb155b686705a0cb518ed0019703c-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-a-76d88708f5\" (UID: \"1b1eb155b686705a0cb518ed0019703c\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.706137 kubelet[2985]: I0317 17:52:48.705943 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1b1eb155b686705a0cb518ed0019703c-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-76d88708f5\" (UID: \"1b1eb155b686705a0cb518ed0019703c\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.706137 kubelet[2985]: I0317 17:52:48.705958 2985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b1eb155b686705a0cb518ed0019703c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-a-76d88708f5\" (UID: \"1b1eb155b686705a0cb518ed0019703c\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.779506 kubelet[2985]: I0317 17:52:48.779416 2985 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.779739 kubelet[2985]: E0317 17:52:48.779710 2985 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:48.843380 containerd[1749]: time="2025-03-17T17:52:48.843339357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-a-76d88708f5,Uid:4229d0629e935ef5f03f36b66fd6eaef,Namespace:kube-system,Attempt:0,}" Mar 17 17:52:48.853411 containerd[1749]: time="2025-03-17T17:52:48.853334406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-a-76d88708f5,Uid:e30d485287c91a15a79e8f280854a32b,Namespace:kube-system,Attempt:0,}" Mar 17 17:52:48.858249 containerd[1749]: time="2025-03-17T17:52:48.858147451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-a-76d88708f5,Uid:1b1eb155b686705a0cb518ed0019703c,Namespace:kube-system,Attempt:0,}" Mar 17 17:52:49.007298 kubelet[2985]: E0317 17:52:49.007254 2985 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-76d88708f5?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="800ms" Mar 17 17:52:49.181456 kubelet[2985]: I0317 17:52:49.181087 2985 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:49.181456 kubelet[2985]: E0317 17:52:49.181381 2985 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:49.408873 kubelet[2985]: E0317 17:52:49.408769 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.19:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.0-a-76d88708f5.182da890417a7817 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.0-a-76d88708f5,UID:ci-4230.1.0-a-76d88708f5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.0-a-76d88708f5,},FirstTimestamp:2025-03-17 17:52:48.389552151 +0000 UTC m=+1.069622358,LastTimestamp:2025-03-17 17:52:48.389552151 +0000 UTC m=+1.069622358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.0-a-76d88708f5,}" Mar 17 17:52:49.533402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068358769.mount: Deactivated successfully. 
Mar 17 17:52:49.552906 kubelet[2985]: W0317 17:52:49.552829 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Mar 17 17:52:49.552906 kubelet[2985]: E0317 17:52:49.552872 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:52:49.569515 containerd[1749]: time="2025-03-17T17:52:49.568702727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:52:49.578301 containerd[1749]: time="2025-03-17T17:52:49.578253215Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:52:49.590256 containerd[1749]: time="2025-03-17T17:52:49.590209866Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 17 17:52:49.593233 containerd[1749]: time="2025-03-17T17:52:49.593203628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:52:49.605428 containerd[1749]: time="2025-03-17T17:52:49.605388479Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:52:49.610353 containerd[1749]: time="2025-03-17T17:52:49.610300324Z" level=info 
msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:52:49.616706 containerd[1749]: time="2025-03-17T17:52:49.616665409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:52:49.619661 containerd[1749]: time="2025-03-17T17:52:49.619611652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:52:49.620723 containerd[1749]: time="2025-03-17T17:52:49.620461493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 777.044456ms" Mar 17 17:52:49.623002 containerd[1749]: time="2025-03-17T17:52:49.622968135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 764.751444ms" Mar 17 17:52:49.631199 containerd[1749]: time="2025-03-17T17:52:49.631159022Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 777.757616ms" Mar 17 17:52:49.655154 kubelet[2985]: W0317 17:52:49.655033 2985 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Mar 17 17:52:49.655154 kubelet[2985]: E0317 17:52:49.655127 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:52:49.808555 kubelet[2985]: E0317 17:52:49.808411 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-76d88708f5?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="1.6s" Mar 17 17:52:49.908391 kubelet[2985]: W0317 17:52:49.908230 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-76d88708f5&limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Mar 17 17:52:49.908391 kubelet[2985]: E0317 17:52:49.908311 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.0-a-76d88708f5&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:52:49.916052 kubelet[2985]: W0317 17:52:49.916020 2985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.19:6443: connect: connection refused Mar 17 17:52:49.916152 kubelet[2985]: E0317 17:52:49.916063 2985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:52:49.983947 kubelet[2985]: I0317 17:52:49.983897 2985 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:49.984214 kubelet[2985]: E0317 17:52:49.984188 2985 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:50.491035 kubelet[2985]: E0317 17:52:50.490994 2985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:52:50.866912 containerd[1749]: time="2025-03-17T17:52:50.865764802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:52:50.867626 containerd[1749]: time="2025-03-17T17:52:50.867498324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:52:50.867626 containerd[1749]: time="2025-03-17T17:52:50.867551564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:50.868991 containerd[1749]: time="2025-03-17T17:52:50.868885805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:50.870777 containerd[1749]: time="2025-03-17T17:52:50.870627687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:52:50.871008 containerd[1749]: time="2025-03-17T17:52:50.870682567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:52:50.871008 containerd[1749]: time="2025-03-17T17:52:50.870876247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:50.871246 containerd[1749]: time="2025-03-17T17:52:50.871131768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:50.890212 containerd[1749]: time="2025-03-17T17:52:50.890109708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:52:50.890212 containerd[1749]: time="2025-03-17T17:52:50.890170348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:52:50.890212 containerd[1749]: time="2025-03-17T17:52:50.890185828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:50.892040 containerd[1749]: time="2025-03-17T17:52:50.891222989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:52:50.902655 systemd[1]: Started cri-containerd-3d8113a7f41d22ef4bcf0b1147be6fd07cf646ba005b3c62e2ddd8f42016fcf9.scope - libcontainer container 3d8113a7f41d22ef4bcf0b1147be6fd07cf646ba005b3c62e2ddd8f42016fcf9. Mar 17 17:52:50.921870 systemd[1]: Started cri-containerd-54bccb662da17fb66813b92d3824f299da59f1406df2b074ef4e50de06528528.scope - libcontainer container 54bccb662da17fb66813b92d3824f299da59f1406df2b074ef4e50de06528528. Mar 17 17:52:50.926854 systemd[1]: Started cri-containerd-ad0cca60f398a4eb38e317b47025af420a676db6502e8495a19c3820e93b3194.scope - libcontainer container ad0cca60f398a4eb38e317b47025af420a676db6502e8495a19c3820e93b3194. Mar 17 17:52:50.957776 containerd[1749]: time="2025-03-17T17:52:50.956915898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.0-a-76d88708f5,Uid:4229d0629e935ef5f03f36b66fd6eaef,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d8113a7f41d22ef4bcf0b1147be6fd07cf646ba005b3c62e2ddd8f42016fcf9\"" Mar 17 17:52:50.961122 containerd[1749]: time="2025-03-17T17:52:50.961088302Z" level=info msg="CreateContainer within sandbox \"3d8113a7f41d22ef4bcf0b1147be6fd07cf646ba005b3c62e2ddd8f42016fcf9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:52:50.978089 containerd[1749]: time="2025-03-17T17:52:50.978054880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.0-a-76d88708f5,Uid:e30d485287c91a15a79e8f280854a32b,Namespace:kube-system,Attempt:0,} returns sandbox id \"54bccb662da17fb66813b92d3824f299da59f1406df2b074ef4e50de06528528\"" Mar 17 17:52:50.981156 containerd[1749]: time="2025-03-17T17:52:50.981063163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.0-a-76d88708f5,Uid:1b1eb155b686705a0cb518ed0019703c,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"ad0cca60f398a4eb38e317b47025af420a676db6502e8495a19c3820e93b3194\"" Mar 17 17:52:50.983900 containerd[1749]: time="2025-03-17T17:52:50.983857046Z" level=info msg="CreateContainer within sandbox \"54bccb662da17fb66813b92d3824f299da59f1406df2b074ef4e50de06528528\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:52:50.984859 containerd[1749]: time="2025-03-17T17:52:50.984384407Z" level=info msg="CreateContainer within sandbox \"ad0cca60f398a4eb38e317b47025af420a676db6502e8495a19c3820e93b3194\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:52:51.385781 containerd[1749]: time="2025-03-17T17:52:51.385734868Z" level=info msg="CreateContainer within sandbox \"3d8113a7f41d22ef4bcf0b1147be6fd07cf646ba005b3c62e2ddd8f42016fcf9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"908a0388e6f7472c3c5e1834ae2cbbe6c639675f08b6f52de9649335a781cd5f\"" Mar 17 17:52:51.387436 containerd[1749]: time="2025-03-17T17:52:51.386323309Z" level=info msg="StartContainer for \"908a0388e6f7472c3c5e1834ae2cbbe6c639675f08b6f52de9649335a781cd5f\"" Mar 17 17:52:51.393065 containerd[1749]: time="2025-03-17T17:52:51.392940476Z" level=info msg="CreateContainer within sandbox \"54bccb662da17fb66813b92d3824f299da59f1406df2b074ef4e50de06528528\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4771a9e2657bbfa80d2bb0652eb94e3498897c06916dc74a3009b43c9470e33b\"" Mar 17 17:52:51.394042 containerd[1749]: time="2025-03-17T17:52:51.393993277Z" level=info msg="StartContainer for \"4771a9e2657bbfa80d2bb0652eb94e3498897c06916dc74a3009b43c9470e33b\"" Mar 17 17:52:51.399564 containerd[1749]: time="2025-03-17T17:52:51.399518802Z" level=info msg="CreateContainer within sandbox \"ad0cca60f398a4eb38e317b47025af420a676db6502e8495a19c3820e93b3194\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"a4ce465ed94b0316ecba3a09674dd3dfff37a2608302196c0d4b69784646a8e3\"" Mar 17 17:52:51.400108 containerd[1749]: time="2025-03-17T17:52:51.400076803Z" level=info msg="StartContainer for \"a4ce465ed94b0316ecba3a09674dd3dfff37a2608302196c0d4b69784646a8e3\"" Mar 17 17:52:51.410468 kubelet[2985]: E0317 17:52:51.410419 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.0-a-76d88708f5?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="3.2s" Mar 17 17:52:51.412647 systemd[1]: Started cri-containerd-908a0388e6f7472c3c5e1834ae2cbbe6c639675f08b6f52de9649335a781cd5f.scope - libcontainer container 908a0388e6f7472c3c5e1834ae2cbbe6c639675f08b6f52de9649335a781cd5f. Mar 17 17:52:51.438984 systemd[1]: Started cri-containerd-4771a9e2657bbfa80d2bb0652eb94e3498897c06916dc74a3009b43c9470e33b.scope - libcontainer container 4771a9e2657bbfa80d2bb0652eb94e3498897c06916dc74a3009b43c9470e33b. Mar 17 17:52:51.442771 systemd[1]: Started cri-containerd-a4ce465ed94b0316ecba3a09674dd3dfff37a2608302196c0d4b69784646a8e3.scope - libcontainer container a4ce465ed94b0316ecba3a09674dd3dfff37a2608302196c0d4b69784646a8e3. 
Mar 17 17:52:51.469523 containerd[1749]: time="2025-03-17T17:52:51.468795235Z" level=info msg="StartContainer for \"908a0388e6f7472c3c5e1834ae2cbbe6c639675f08b6f52de9649335a781cd5f\" returns successfully" Mar 17 17:52:51.510938 containerd[1749]: time="2025-03-17T17:52:51.509759998Z" level=info msg="StartContainer for \"a4ce465ed94b0316ecba3a09674dd3dfff37a2608302196c0d4b69784646a8e3\" returns successfully" Mar 17 17:52:51.516014 containerd[1749]: time="2025-03-17T17:52:51.515978484Z" level=info msg="StartContainer for \"4771a9e2657bbfa80d2bb0652eb94e3498897c06916dc74a3009b43c9470e33b\" returns successfully" Mar 17 17:52:51.588487 kubelet[2985]: I0317 17:52:51.587152 2985 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:53.775046 kubelet[2985]: I0317 17:52:53.775007 2985 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:53.775046 kubelet[2985]: E0317 17:52:53.775047 2985 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.1.0-a-76d88708f5\": node \"ci-4230.1.0-a-76d88708f5\" not found" Mar 17 17:52:54.394848 kubelet[2985]: I0317 17:52:54.394810 2985 apiserver.go:52] "Watching apiserver" Mar 17 17:52:54.408357 kubelet[2985]: I0317 17:52:54.408325 2985 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:52:54.486736 kubelet[2985]: E0317 17:52:54.486700 2985 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.1.0-a-76d88708f5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:54.486865 kubelet[2985]: E0317 17:52:54.486668 2985 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.0-a-76d88708f5\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:56.250242 systemd[1]: Reload requested from client PID 3263 ('systemctl') (unit session-9.scope)... Mar 17 17:52:56.250563 systemd[1]: Reloading... Mar 17 17:52:56.356508 zram_generator::config[3310]: No configuration found. Mar 17 17:52:56.465600 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:52:56.585193 systemd[1]: Reloading finished in 334 ms. Mar 17 17:52:56.606711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:56.627848 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:52:56.628228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:56.628367 systemd[1]: kubelet.service: Consumed 1.406s CPU time, 117.2M memory peak. Mar 17 17:52:56.633856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:56.799757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:56.809847 (kubelet)[3374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:52:56.858631 kubelet[3374]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:52:56.858631 kubelet[3374]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:52:56.858631 kubelet[3374]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:52:56.858949 kubelet[3374]: I0317 17:52:56.858610 3374 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:52:56.867286 kubelet[3374]: I0317 17:52:56.865302 3374 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:52:56.867286 kubelet[3374]: I0317 17:52:56.865328 3374 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:52:56.867286 kubelet[3374]: I0317 17:52:56.865563 3374 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:52:56.867286 kubelet[3374]: I0317 17:52:56.866887 3374 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:52:56.871318 kubelet[3374]: I0317 17:52:56.871240 3374 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:52:56.876145 kubelet[3374]: E0317 17:52:56.876117 3374 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:52:56.876274 kubelet[3374]: I0317 17:52:56.876262 3374 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:52:56.880624 kubelet[3374]: I0317 17:52:56.880602 3374 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:52:56.880877 kubelet[3374]: I0317 17:52:56.880863 3374 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:52:56.881079 kubelet[3374]: I0317 17:52:56.881052 3374 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:52:56.881316 kubelet[3374]: I0317 17:52:56.881141 3374 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.0-a-76d88708f5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:52:56.881443 kubelet[3374]: I0317 17:52:56.881430 3374 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:52:56.881527 kubelet[3374]: I0317 17:52:56.881517 3374 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:52:56.881615 kubelet[3374]: I0317 17:52:56.881606 3374 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:52:56.881788 kubelet[3374]: I0317 17:52:56.881778 3374 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:52:56.882605 kubelet[3374]: I0317 17:52:56.882589 3374 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:52:56.882734 kubelet[3374]: I0317 17:52:56.882723 3374 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:52:56.882791 kubelet[3374]: I0317 17:52:56.882783 3374 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:52:56.885860 kubelet[3374]: I0317 17:52:56.885818 3374 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:52:56.886319 kubelet[3374]: I0317 17:52:56.886289 3374 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:52:56.888177 kubelet[3374]: I0317 17:52:56.888159 3374 server.go:1269] "Started kubelet" Mar 17 17:52:56.890590 kubelet[3374]: I0317 17:52:56.890561 3374 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:52:56.892079 kubelet[3374]: I0317 17:52:56.892058 3374 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:52:56.893039 kubelet[3374]: I0317 17:52:56.892991 3374 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:52:56.893279 kubelet[3374]: I0317 17:52:56.893265 3374 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 
17:52:56.895549 kubelet[3374]: I0317 17:52:56.890100 3374 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:52:56.900457 kubelet[3374]: I0317 17:52:56.890208 3374 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:52:56.900696 kubelet[3374]: I0317 17:52:56.900681 3374 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:52:56.901002 kubelet[3374]: E0317 17:52:56.900983 3374 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.0-a-76d88708f5\" not found" Mar 17 17:52:56.912921 kubelet[3374]: I0317 17:52:56.912887 3374 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:52:56.913202 kubelet[3374]: I0317 17:52:56.913190 3374 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:52:56.916303 kubelet[3374]: I0317 17:52:56.916271 3374 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:52:56.917346 kubelet[3374]: I0317 17:52:56.917326 3374 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:52:56.917443 kubelet[3374]: I0317 17:52:56.917433 3374 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:52:56.918034 kubelet[3374]: I0317 17:52:56.917725 3374 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:52:56.918034 kubelet[3374]: E0317 17:52:56.917779 3374 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:52:56.926365 kubelet[3374]: I0317 17:52:56.926337 3374 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:52:56.926683 kubelet[3374]: I0317 17:52:56.926662 3374 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:52:56.934614 kubelet[3374]: I0317 17:52:56.934588 3374 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:52:56.979879 kubelet[3374]: I0317 17:52:56.979711 3374 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:52:56.979879 kubelet[3374]: I0317 17:52:56.979731 3374 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:52:56.979879 kubelet[3374]: I0317 17:52:56.979752 3374 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:52:56.980070 kubelet[3374]: I0317 17:52:56.979985 3374 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:52:56.980070 kubelet[3374]: I0317 17:52:56.979998 3374 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:52:56.980070 kubelet[3374]: I0317 17:52:56.980015 3374 policy_none.go:49] "None policy: Start" Mar 17 17:52:56.980668 kubelet[3374]: I0317 17:52:56.980610 3374 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:52:56.980668 kubelet[3374]: I0317 17:52:56.980631 3374 state_mem.go:35] "Initializing new in-memory state store" Mar 
17 17:52:56.981200 kubelet[3374]: I0317 17:52:56.981144 3374 state_mem.go:75] "Updated machine memory state" Mar 17 17:52:56.985727 kubelet[3374]: I0317 17:52:56.985679 3374 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:52:56.985849 kubelet[3374]: I0317 17:52:56.985828 3374 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:52:56.985887 kubelet[3374]: I0317 17:52:56.985848 3374 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:52:56.986335 kubelet[3374]: I0317 17:52:56.986311 3374 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:52:57.028177 kubelet[3374]: W0317 17:52:57.028143 3374 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:52:57.033192 kubelet[3374]: W0317 17:52:57.032956 3374 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:52:57.033312 kubelet[3374]: W0317 17:52:57.033268 3374 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:52:57.088447 kubelet[3374]: I0317 17:52:57.088412 3374 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.106730 kubelet[3374]: I0317 17:52:57.106693 3374 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.106866 kubelet[3374]: I0317 17:52:57.106788 3374 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.214620 kubelet[3374]: I0317 17:52:57.214588 3374 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e30d485287c91a15a79e8f280854a32b-ca-certs\") pod \"kube-apiserver-ci-4230.1.0-a-76d88708f5\" (UID: \"e30d485287c91a15a79e8f280854a32b\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.214846 kubelet[3374]: I0317 17:52:57.214623 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e30d485287c91a15a79e8f280854a32b-k8s-certs\") pod \"kube-apiserver-ci-4230.1.0-a-76d88708f5\" (UID: \"e30d485287c91a15a79e8f280854a32b\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.214846 kubelet[3374]: I0317 17:52:57.214643 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b1eb155b686705a0cb518ed0019703c-ca-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-76d88708f5\" (UID: \"1b1eb155b686705a0cb518ed0019703c\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.214846 kubelet[3374]: I0317 17:52:57.214659 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b1eb155b686705a0cb518ed0019703c-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.0-a-76d88708f5\" (UID: \"1b1eb155b686705a0cb518ed0019703c\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.214846 kubelet[3374]: I0317 17:52:57.214674 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b1eb155b686705a0cb518ed0019703c-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.0-a-76d88708f5\" (UID: \"1b1eb155b686705a0cb518ed0019703c\") " 
pod="kube-system/kube-controller-manager-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.214846 kubelet[3374]: I0317 17:52:57.214689 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b1eb155b686705a0cb518ed0019703c-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.0-a-76d88708f5\" (UID: \"1b1eb155b686705a0cb518ed0019703c\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.214961 kubelet[3374]: I0317 17:52:57.214704 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4229d0629e935ef5f03f36b66fd6eaef-kubeconfig\") pod \"kube-scheduler-ci-4230.1.0-a-76d88708f5\" (UID: \"4229d0629e935ef5f03f36b66fd6eaef\") " pod="kube-system/kube-scheduler-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.214961 kubelet[3374]: I0317 17:52:57.214720 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e30d485287c91a15a79e8f280854a32b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.0-a-76d88708f5\" (UID: \"e30d485287c91a15a79e8f280854a32b\") " pod="kube-system/kube-apiserver-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.214961 kubelet[3374]: I0317 17:52:57.214739 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b1eb155b686705a0cb518ed0019703c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.0-a-76d88708f5\" (UID: \"1b1eb155b686705a0cb518ed0019703c\") " pod="kube-system/kube-controller-manager-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:57.260798 sudo[3405]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:52:57.261381 sudo[3405]: 
pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:52:57.723372 sudo[3405]: pam_unix(sudo:session): session closed for user root Mar 17 17:52:57.885624 kubelet[3374]: I0317 17:52:57.885588 3374 apiserver.go:52] "Watching apiserver" Mar 17 17:52:57.913971 kubelet[3374]: I0317 17:52:57.913911 3374 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:52:57.981026 kubelet[3374]: W0317 17:52:57.980762 3374 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:52:57.981026 kubelet[3374]: E0317 17:52:57.980839 3374 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.0-a-76d88708f5\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.0-a-76d88708f5" Mar 17 17:52:58.014412 kubelet[3374]: I0317 17:52:58.014184 3374 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.0-a-76d88708f5" podStartSLOduration=1.014168974 podStartE2EDuration="1.014168974s" podCreationTimestamp="2025-03-17 17:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:58.013038293 +0000 UTC m=+1.199521818" watchObservedRunningTime="2025-03-17 17:52:58.014168974 +0000 UTC m=+1.200652499" Mar 17 17:52:58.014412 kubelet[3374]: I0317 17:52:58.014285 3374 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.0-a-76d88708f5" podStartSLOduration=1.014279294 podStartE2EDuration="1.014279294s" podCreationTimestamp="2025-03-17 17:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:58.001223561 +0000 UTC m=+1.187707086" 
watchObservedRunningTime="2025-03-17 17:52:58.014279294 +0000 UTC m=+1.200762779" Mar 17 17:52:58.026181 kubelet[3374]: I0317 17:52:58.025518 3374 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.0-a-76d88708f5" podStartSLOduration=1.025500906 podStartE2EDuration="1.025500906s" podCreationTimestamp="2025-03-17 17:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:58.024523545 +0000 UTC m=+1.211007070" watchObservedRunningTime="2025-03-17 17:52:58.025500906 +0000 UTC m=+1.211984431" Mar 17 17:52:59.477817 sudo[2433]: pam_unix(sudo:session): session closed for user root Mar 17 17:52:59.558804 sshd[2432]: Connection closed by 10.200.16.10 port 34176 Mar 17 17:52:59.559329 sshd-session[2430]: pam_unix(sshd:session): session closed for user core Mar 17 17:52:59.562724 systemd[1]: sshd@6-10.200.20.19:22-10.200.16.10:34176.service: Deactivated successfully. Mar 17 17:52:59.564593 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:52:59.564760 systemd[1]: session-9.scope: Consumed 7.476s CPU time, 258M memory peak. Mar 17 17:52:59.566281 systemd-logind[1714]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:52:59.568157 systemd-logind[1714]: Removed session 9. Mar 17 17:53:01.851253 kubelet[3374]: I0317 17:53:01.851181 3374 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:53:01.851645 containerd[1749]: time="2025-03-17T17:53:01.851592640Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 17:53:01.852104 kubelet[3374]: I0317 17:53:01.852086 3374 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:53:02.958041 systemd[1]: Created slice kubepods-besteffort-podd420456d_71c1_4486_9115_004b7e0d002e.slice - libcontainer container kubepods-besteffort-podd420456d_71c1_4486_9115_004b7e0d002e.slice. Mar 17 17:53:02.976323 systemd[1]: Created slice kubepods-burstable-podce310019_ee05_46cf_a81e_ca102e7a26aa.slice - libcontainer container kubepods-burstable-podce310019_ee05_46cf_a81e_ca102e7a26aa.slice. Mar 17 17:53:03.048380 systemd[1]: Created slice kubepods-besteffort-podc4171d92_9c65_42b8_aa9b_a80fd78cb1fd.slice - libcontainer container kubepods-besteffort-podc4171d92_9c65_42b8_aa9b_a80fd78cb1fd.slice. Mar 17 17:53:03.051128 kubelet[3374]: I0317 17:53:03.050712 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-lib-modules\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051128 kubelet[3374]: I0317 17:53:03.050753 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce310019-ee05-46cf-a81e-ca102e7a26aa-clustermesh-secrets\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051128 kubelet[3374]: I0317 17:53:03.050771 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-host-proc-sys-kernel\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051128 kubelet[3374]: I0317 17:53:03.050787 3374 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnpln\" (UniqueName: \"kubernetes.io/projected/c4171d92-9c65-42b8-aa9b-a80fd78cb1fd-kube-api-access-dnpln\") pod \"cilium-operator-5d85765b45-l8gcq\" (UID: \"c4171d92-9c65-42b8-aa9b-a80fd78cb1fd\") " pod="kube-system/cilium-operator-5d85765b45-l8gcq" Mar 17 17:53:03.051128 kubelet[3374]: I0317 17:53:03.050806 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d420456d-71c1-4486-9115-004b7e0d002e-xtables-lock\") pod \"kube-proxy-bmr69\" (UID: \"d420456d-71c1-4486-9115-004b7e0d002e\") " pod="kube-system/kube-proxy-bmr69" Mar 17 17:53:03.051599 kubelet[3374]: I0317 17:53:03.050819 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-run\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051599 kubelet[3374]: I0317 17:53:03.050833 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-host-proc-sys-net\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051599 kubelet[3374]: I0317 17:53:03.050846 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-bpf-maps\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051599 kubelet[3374]: I0317 17:53:03.050860 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-xtables-lock\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051599 kubelet[3374]: I0317 17:53:03.050877 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4171d92-9c65-42b8-aa9b-a80fd78cb1fd-cilium-config-path\") pod \"cilium-operator-5d85765b45-l8gcq\" (UID: \"c4171d92-9c65-42b8-aa9b-a80fd78cb1fd\") " pod="kube-system/cilium-operator-5d85765b45-l8gcq" Mar 17 17:53:03.051710 kubelet[3374]: I0317 17:53:03.050894 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-cgroup\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051710 kubelet[3374]: I0317 17:53:03.050908 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-hostproc\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051710 kubelet[3374]: I0317 17:53:03.050924 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-etc-cni-netd\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051710 kubelet[3374]: I0317 17:53:03.050940 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/d420456d-71c1-4486-9115-004b7e0d002e-kube-proxy\") pod \"kube-proxy-bmr69\" (UID: \"d420456d-71c1-4486-9115-004b7e0d002e\") " pod="kube-system/kube-proxy-bmr69" Mar 17 17:53:03.051710 kubelet[3374]: I0317 17:53:03.050955 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5dmr\" (UniqueName: \"kubernetes.io/projected/d420456d-71c1-4486-9115-004b7e0d002e-kube-api-access-p5dmr\") pod \"kube-proxy-bmr69\" (UID: \"d420456d-71c1-4486-9115-004b7e0d002e\") " pod="kube-system/kube-proxy-bmr69" Mar 17 17:53:03.051710 kubelet[3374]: I0317 17:53:03.050990 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cni-path\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051824 kubelet[3374]: I0317 17:53:03.051017 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce310019-ee05-46cf-a81e-ca102e7a26aa-hubble-tls\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051824 kubelet[3374]: I0317 17:53:03.051034 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d420456d-71c1-4486-9115-004b7e0d002e-lib-modules\") pod \"kube-proxy-bmr69\" (UID: \"d420456d-71c1-4486-9115-004b7e0d002e\") " pod="kube-system/kube-proxy-bmr69" Mar 17 17:53:03.051824 kubelet[3374]: I0317 17:53:03.051049 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-config-path\") pod \"cilium-lzlfb\" (UID: 
\"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.051824 kubelet[3374]: I0317 17:53:03.051066 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg5wv\" (UniqueName: \"kubernetes.io/projected/ce310019-ee05-46cf-a81e-ca102e7a26aa-kube-api-access-cg5wv\") pod \"cilium-lzlfb\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " pod="kube-system/cilium-lzlfb" Mar 17 17:53:03.268432 containerd[1749]: time="2025-03-17T17:53:03.268323052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bmr69,Uid:d420456d-71c1-4486-9115-004b7e0d002e,Namespace:kube-system,Attempt:0,}" Mar 17 17:53:03.281644 containerd[1749]: time="2025-03-17T17:53:03.281516186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lzlfb,Uid:ce310019-ee05-46cf-a81e-ca102e7a26aa,Namespace:kube-system,Attempt:0,}" Mar 17 17:53:03.321305 containerd[1749]: time="2025-03-17T17:53:03.321167187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:03.321305 containerd[1749]: time="2025-03-17T17:53:03.321223627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:03.321305 containerd[1749]: time="2025-03-17T17:53:03.321239067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:03.321988 containerd[1749]: time="2025-03-17T17:53:03.321332227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:03.339647 systemd[1]: Started cri-containerd-7fb643e69586feffc27d319a618fa5122eb6879f195f3f740c8c68b86dc36e08.scope - libcontainer container 7fb643e69586feffc27d319a618fa5122eb6879f195f3f740c8c68b86dc36e08. 
Mar 17 17:53:03.350757 containerd[1749]: time="2025-03-17T17:53:03.350662897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:03.350757 containerd[1749]: time="2025-03-17T17:53:03.350718257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:03.350953 containerd[1749]: time="2025-03-17T17:53:03.350732737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:03.350953 containerd[1749]: time="2025-03-17T17:53:03.350791937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:03.352176 containerd[1749]: time="2025-03-17T17:53:03.352140218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-l8gcq,Uid:c4171d92-9c65-42b8-aa9b-a80fd78cb1fd,Namespace:kube-system,Attempt:0,}" Mar 17 17:53:03.366518 containerd[1749]: time="2025-03-17T17:53:03.366444433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bmr69,Uid:d420456d-71c1-4486-9115-004b7e0d002e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fb643e69586feffc27d319a618fa5122eb6879f195f3f740c8c68b86dc36e08\"" Mar 17 17:53:03.373701 systemd[1]: Started cri-containerd-df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9.scope - libcontainer container df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9. 
Mar 17 17:53:03.374980 containerd[1749]: time="2025-03-17T17:53:03.374943882Z" level=info msg="CreateContainer within sandbox \"7fb643e69586feffc27d319a618fa5122eb6879f195f3f740c8c68b86dc36e08\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:53:03.401805 containerd[1749]: time="2025-03-17T17:53:03.401756229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lzlfb,Uid:ce310019-ee05-46cf-a81e-ca102e7a26aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\"" Mar 17 17:53:03.403848 containerd[1749]: time="2025-03-17T17:53:03.403677231Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:53:03.418983 containerd[1749]: time="2025-03-17T17:53:03.418661767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:03.418983 containerd[1749]: time="2025-03-17T17:53:03.418716647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:03.418983 containerd[1749]: time="2025-03-17T17:53:03.418728647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:03.418983 containerd[1749]: time="2025-03-17T17:53:03.418802647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:03.432649 systemd[1]: Started cri-containerd-aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480.scope - libcontainer container aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480. 
Mar 17 17:53:03.436793 containerd[1749]: time="2025-03-17T17:53:03.436746545Z" level=info msg="CreateContainer within sandbox \"7fb643e69586feffc27d319a618fa5122eb6879f195f3f740c8c68b86dc36e08\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2eda2a83dcf3321e326c0e90809e42b7991172bc022aca906abad3f0655fcee\"" Mar 17 17:53:03.440498 containerd[1749]: time="2025-03-17T17:53:03.437636746Z" level=info msg="StartContainer for \"b2eda2a83dcf3321e326c0e90809e42b7991172bc022aca906abad3f0655fcee\"" Mar 17 17:53:03.467651 systemd[1]: Started cri-containerd-b2eda2a83dcf3321e326c0e90809e42b7991172bc022aca906abad3f0655fcee.scope - libcontainer container b2eda2a83dcf3321e326c0e90809e42b7991172bc022aca906abad3f0655fcee. Mar 17 17:53:03.472360 containerd[1749]: time="2025-03-17T17:53:03.472322542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-l8gcq,Uid:c4171d92-9c65-42b8-aa9b-a80fd78cb1fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480\"" Mar 17 17:53:03.499302 containerd[1749]: time="2025-03-17T17:53:03.499245850Z" level=info msg="StartContainer for \"b2eda2a83dcf3321e326c0e90809e42b7991172bc022aca906abad3f0655fcee\" returns successfully" Mar 17 17:53:04.802406 kubelet[3374]: I0317 17:53:04.802349 3374 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bmr69" podStartSLOduration=2.80233235 podStartE2EDuration="2.80233235s" podCreationTimestamp="2025-03-17 17:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:53:03.992367757 +0000 UTC m=+7.178851282" watchObservedRunningTime="2025-03-17 17:53:04.80233235 +0000 UTC m=+7.988815875" Mar 17 17:53:13.397119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889131154.mount: Deactivated successfully. 
Mar 17 17:53:15.687682 containerd[1749]: time="2025-03-17T17:53:15.687614969Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:15.690441 containerd[1749]: time="2025-03-17T17:53:15.690405092Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:53:15.694248 containerd[1749]: time="2025-03-17T17:53:15.694213416Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:15.695987 containerd[1749]: time="2025-03-17T17:53:15.695536297Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.291824186s" Mar 17 17:53:15.695987 containerd[1749]: time="2025-03-17T17:53:15.695572377Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:53:15.697438 containerd[1749]: time="2025-03-17T17:53:15.697257818Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:53:15.698462 containerd[1749]: time="2025-03-17T17:53:15.698330659Z" level=info msg="CreateContainer within sandbox \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:53:15.738456 containerd[1749]: time="2025-03-17T17:53:15.738386256Z" level=info msg="CreateContainer within sandbox \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2\"" Mar 17 17:53:15.738869 containerd[1749]: time="2025-03-17T17:53:15.738837857Z" level=info msg="StartContainer for \"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2\"" Mar 17 17:53:15.766643 systemd[1]: Started cri-containerd-4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2.scope - libcontainer container 4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2. Mar 17 17:53:15.790273 containerd[1749]: time="2025-03-17T17:53:15.790221864Z" level=info msg="StartContainer for \"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2\" returns successfully" Mar 17 17:53:15.794372 systemd[1]: cri-containerd-4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2.scope: Deactivated successfully. Mar 17 17:53:16.725374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2-rootfs.mount: Deactivated successfully. 
Mar 17 17:53:16.889440 containerd[1749]: time="2025-03-17T17:53:16.889230518Z" level=info msg="shim disconnected" id=4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2 namespace=k8s.io Mar 17 17:53:16.889440 containerd[1749]: time="2025-03-17T17:53:16.889301398Z" level=warning msg="cleaning up after shim disconnected" id=4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2 namespace=k8s.io Mar 17 17:53:16.889440 containerd[1749]: time="2025-03-17T17:53:16.889309238Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:53:17.004212 containerd[1749]: time="2025-03-17T17:53:17.003839624Z" level=info msg="CreateContainer within sandbox \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:53:17.036032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935521731.mount: Deactivated successfully. Mar 17 17:53:17.043009 containerd[1749]: time="2025-03-17T17:53:17.042941740Z" level=info msg="CreateContainer within sandbox \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd\"" Mar 17 17:53:17.044445 containerd[1749]: time="2025-03-17T17:53:17.043617261Z" level=info msg="StartContainer for \"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd\"" Mar 17 17:53:17.069604 systemd[1]: Started cri-containerd-f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd.scope - libcontainer container f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd. Mar 17 17:53:17.096050 containerd[1749]: time="2025-03-17T17:53:17.095961069Z" level=info msg="StartContainer for \"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd\" returns successfully" Mar 17 17:53:17.105141 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 17 17:53:17.105451 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:53:17.105726 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:53:17.108840 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:53:17.109000 systemd[1]: cri-containerd-f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd.scope: Deactivated successfully. Mar 17 17:53:17.128579 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:53:17.139502 containerd[1749]: time="2025-03-17T17:53:17.139334149Z" level=info msg="shim disconnected" id=f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd namespace=k8s.io Mar 17 17:53:17.139502 containerd[1749]: time="2025-03-17T17:53:17.139383709Z" level=warning msg="cleaning up after shim disconnected" id=f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd namespace=k8s.io Mar 17 17:53:17.139502 containerd[1749]: time="2025-03-17T17:53:17.139391949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:53:17.725878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd-rootfs.mount: Deactivated successfully. Mar 17 17:53:17.921742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435996021.mount: Deactivated successfully. 
Mar 17 17:53:18.010443 containerd[1749]: time="2025-03-17T17:53:18.010333313Z" level=info msg="CreateContainer within sandbox \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:53:18.147885 containerd[1749]: time="2025-03-17T17:53:18.147639040Z" level=info msg="CreateContainer within sandbox \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7\"" Mar 17 17:53:18.149501 containerd[1749]: time="2025-03-17T17:53:18.148447920Z" level=info msg="StartContainer for \"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7\"" Mar 17 17:53:18.185814 systemd[1]: Started cri-containerd-6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7.scope - libcontainer container 6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7. Mar 17 17:53:18.225431 systemd[1]: cri-containerd-6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7.scope: Deactivated successfully. 
Mar 17 17:53:18.228460 containerd[1749]: time="2025-03-17T17:53:18.228427594Z" level=info msg="StartContainer for \"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7\" returns successfully" Mar 17 17:53:18.362980 containerd[1749]: time="2025-03-17T17:53:18.362712838Z" level=info msg="shim disconnected" id=6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7 namespace=k8s.io Mar 17 17:53:18.362980 containerd[1749]: time="2025-03-17T17:53:18.362766238Z" level=warning msg="cleaning up after shim disconnected" id=6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7 namespace=k8s.io Mar 17 17:53:18.362980 containerd[1749]: time="2025-03-17T17:53:18.362773838Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:53:18.607566 containerd[1749]: time="2025-03-17T17:53:18.607518424Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:18.611173 containerd[1749]: time="2025-03-17T17:53:18.611130467Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:53:18.615611 containerd[1749]: time="2025-03-17T17:53:18.615493911Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:18.617169 containerd[1749]: time="2025-03-17T17:53:18.616752913Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.919462735s" Mar 17 17:53:18.617169 containerd[1749]: time="2025-03-17T17:53:18.616786473Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:53:18.619754 containerd[1749]: time="2025-03-17T17:53:18.619610195Z" level=info msg="CreateContainer within sandbox \"aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:53:18.657729 containerd[1749]: time="2025-03-17T17:53:18.657682870Z" level=info msg="CreateContainer within sandbox \"aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\"" Mar 17 17:53:18.658140 containerd[1749]: time="2025-03-17T17:53:18.658111991Z" level=info msg="StartContainer for \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\"" Mar 17 17:53:18.681618 systemd[1]: Started cri-containerd-aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675.scope - libcontainer container aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675. Mar 17 17:53:18.704609 containerd[1749]: time="2025-03-17T17:53:18.704512594Z" level=info msg="StartContainer for \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\" returns successfully" Mar 17 17:53:18.728437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7-rootfs.mount: Deactivated successfully. 
Mar 17 17:53:19.014901 containerd[1749]: time="2025-03-17T17:53:19.014698040Z" level=info msg="CreateContainer within sandbox \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:53:19.058863 kubelet[3374]: I0317 17:53:19.057638 3374 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-l8gcq" podStartSLOduration=1.9135339089999999 podStartE2EDuration="17.057620239s" podCreationTimestamp="2025-03-17 17:53:02 +0000 UTC" firstStartedPulling="2025-03-17 17:53:03.473455983 +0000 UTC m=+6.659939508" lastFinishedPulling="2025-03-17 17:53:18.617542313 +0000 UTC m=+21.804025838" observedRunningTime="2025-03-17 17:53:19.054874637 +0000 UTC m=+22.241358162" watchObservedRunningTime="2025-03-17 17:53:19.057620239 +0000 UTC m=+22.244103764" Mar 17 17:53:19.064841 containerd[1749]: time="2025-03-17T17:53:19.064704206Z" level=info msg="CreateContainer within sandbox \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd\"" Mar 17 17:53:19.065932 containerd[1749]: time="2025-03-17T17:53:19.065711727Z" level=info msg="StartContainer for \"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd\"" Mar 17 17:53:19.109416 systemd[1]: Started cri-containerd-bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd.scope - libcontainer container bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd. Mar 17 17:53:19.163503 systemd[1]: cri-containerd-bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd.scope: Deactivated successfully. 
Mar 17 17:53:19.166940 containerd[1749]: time="2025-03-17T17:53:19.166840460Z" level=info msg="StartContainer for \"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd\" returns successfully" Mar 17 17:53:19.198766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd-rootfs.mount: Deactivated successfully. Mar 17 17:53:19.384462 containerd[1749]: time="2025-03-17T17:53:19.384292941Z" level=info msg="shim disconnected" id=bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd namespace=k8s.io Mar 17 17:53:19.384462 containerd[1749]: time="2025-03-17T17:53:19.384348141Z" level=warning msg="cleaning up after shim disconnected" id=bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd namespace=k8s.io Mar 17 17:53:19.384462 containerd[1749]: time="2025-03-17T17:53:19.384358741Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:53:20.020327 containerd[1749]: time="2025-03-17T17:53:20.020227808Z" level=info msg="CreateContainer within sandbox \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:53:20.065964 containerd[1749]: time="2025-03-17T17:53:20.065925050Z" level=info msg="CreateContainer within sandbox \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\"" Mar 17 17:53:20.066652 containerd[1749]: time="2025-03-17T17:53:20.066627891Z" level=info msg="StartContainer for \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\"" Mar 17 17:53:20.094696 systemd[1]: Started cri-containerd-2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5.scope - libcontainer container 2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5. 
Mar 17 17:53:20.122014 containerd[1749]: time="2025-03-17T17:53:20.121966022Z" level=info msg="StartContainer for \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\" returns successfully" Mar 17 17:53:20.291032 kubelet[3374]: I0317 17:53:20.290760 3374 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 17:53:20.334949 systemd[1]: Created slice kubepods-burstable-pod439c8471_11d3_484b_b1fa_19ecdebdcae2.slice - libcontainer container kubepods-burstable-pod439c8471_11d3_484b_b1fa_19ecdebdcae2.slice. Mar 17 17:53:20.346375 systemd[1]: Created slice kubepods-burstable-pode8c230cb_8a01_4761_936f_95b05907b21b.slice - libcontainer container kubepods-burstable-pode8c230cb_8a01_4761_936f_95b05907b21b.slice. Mar 17 17:53:20.364757 kubelet[3374]: I0317 17:53:20.364728 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6xpj\" (UniqueName: \"kubernetes.io/projected/439c8471-11d3-484b-b1fa-19ecdebdcae2-kube-api-access-b6xpj\") pod \"coredns-6f6b679f8f-bxhh6\" (UID: \"439c8471-11d3-484b-b1fa-19ecdebdcae2\") " pod="kube-system/coredns-6f6b679f8f-bxhh6" Mar 17 17:53:20.365009 kubelet[3374]: I0317 17:53:20.364915 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8c230cb-8a01-4761-936f-95b05907b21b-config-volume\") pod \"coredns-6f6b679f8f-6vwhw\" (UID: \"e8c230cb-8a01-4761-936f-95b05907b21b\") " pod="kube-system/coredns-6f6b679f8f-6vwhw" Mar 17 17:53:20.365009 kubelet[3374]: I0317 17:53:20.364941 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4f2d\" (UniqueName: \"kubernetes.io/projected/e8c230cb-8a01-4761-936f-95b05907b21b-kube-api-access-j4f2d\") pod \"coredns-6f6b679f8f-6vwhw\" (UID: \"e8c230cb-8a01-4761-936f-95b05907b21b\") " pod="kube-system/coredns-6f6b679f8f-6vwhw" Mar 17 
17:53:20.365009 kubelet[3374]: I0317 17:53:20.364973 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/439c8471-11d3-484b-b1fa-19ecdebdcae2-config-volume\") pod \"coredns-6f6b679f8f-bxhh6\" (UID: \"439c8471-11d3-484b-b1fa-19ecdebdcae2\") " pod="kube-system/coredns-6f6b679f8f-bxhh6" Mar 17 17:53:20.641060 containerd[1749]: time="2025-03-17T17:53:20.640827941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxhh6,Uid:439c8471-11d3-484b-b1fa-19ecdebdcae2,Namespace:kube-system,Attempt:0,}" Mar 17 17:53:20.651575 containerd[1749]: time="2025-03-17T17:53:20.651522950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6vwhw,Uid:e8c230cb-8a01-4761-936f-95b05907b21b,Namespace:kube-system,Attempt:0,}" Mar 17 17:53:21.039615 kubelet[3374]: I0317 17:53:21.039557 3374 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lzlfb" podStartSLOduration=6.746099561 podStartE2EDuration="19.039538788s" podCreationTimestamp="2025-03-17 17:53:02 +0000 UTC" firstStartedPulling="2025-03-17 17:53:03.403052431 +0000 UTC m=+6.589535916" lastFinishedPulling="2025-03-17 17:53:15.696491658 +0000 UTC m=+18.882975143" observedRunningTime="2025-03-17 17:53:21.038228827 +0000 UTC m=+24.224712392" watchObservedRunningTime="2025-03-17 17:53:21.039538788 +0000 UTC m=+24.226022273" Mar 17 17:53:22.169488 systemd-networkd[1432]: cilium_host: Link UP Mar 17 17:53:22.169630 systemd-networkd[1432]: cilium_net: Link UP Mar 17 17:53:22.169744 systemd-networkd[1432]: cilium_net: Gained carrier Mar 17 17:53:22.169847 systemd-networkd[1432]: cilium_host: Gained carrier Mar 17 17:53:22.278617 systemd-networkd[1432]: cilium_net: Gained IPv6LL Mar 17 17:53:22.338692 systemd-networkd[1432]: cilium_vxlan: Link UP Mar 17 17:53:22.338699 systemd-networkd[1432]: cilium_vxlan: Gained carrier Mar 17 17:53:22.461633 
systemd-networkd[1432]: cilium_host: Gained IPv6LL Mar 17 17:53:22.587718 kernel: NET: Registered PF_ALG protocol family Mar 17 17:53:23.239529 systemd-networkd[1432]: lxc_health: Link UP Mar 17 17:53:23.242666 systemd-networkd[1432]: lxc_health: Gained carrier Mar 17 17:53:23.704753 systemd-networkd[1432]: lxc0e24e1f8fd95: Link UP Mar 17 17:53:23.716514 kernel: eth0: renamed from tmpb675c Mar 17 17:53:23.724736 systemd-networkd[1432]: lxc0e24e1f8fd95: Gained carrier Mar 17 17:53:23.735462 systemd-networkd[1432]: lxc651810037ce8: Link UP Mar 17 17:53:23.749527 kernel: eth0: renamed from tmpb6997 Mar 17 17:53:23.755735 systemd-networkd[1432]: lxc651810037ce8: Gained carrier Mar 17 17:53:24.253641 systemd-networkd[1432]: cilium_vxlan: Gained IPv6LL Mar 17 17:53:24.765763 systemd-networkd[1432]: lxc0e24e1f8fd95: Gained IPv6LL Mar 17 17:53:24.766019 systemd-networkd[1432]: lxc_health: Gained IPv6LL Mar 17 17:53:25.277672 systemd-networkd[1432]: lxc651810037ce8: Gained IPv6LL Mar 17 17:53:27.314217 containerd[1749]: time="2025-03-17T17:53:27.306229720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:27.314217 containerd[1749]: time="2025-03-17T17:53:27.306892921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:27.314217 containerd[1749]: time="2025-03-17T17:53:27.306911801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:27.314217 containerd[1749]: time="2025-03-17T17:53:27.307023521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:27.336419 systemd[1]: Started cri-containerd-b675c726dcc96664bed034bf7076b6c14d848aedcdab05e6b432f02111099f3c.scope - libcontainer container b675c726dcc96664bed034bf7076b6c14d848aedcdab05e6b432f02111099f3c. Mar 17 17:53:27.340361 containerd[1749]: time="2025-03-17T17:53:27.339591992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:27.343028 containerd[1749]: time="2025-03-17T17:53:27.340143112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:27.343028 containerd[1749]: time="2025-03-17T17:53:27.340195872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:27.343028 containerd[1749]: time="2025-03-17T17:53:27.342112074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:27.374627 systemd[1]: Started cri-containerd-b6997a70709bdddfdd749aae85e1bf054cf443adf9816b840753fdaa5a770b10.scope - libcontainer container b6997a70709bdddfdd749aae85e1bf054cf443adf9816b840753fdaa5a770b10. 
Mar 17 17:53:27.419095 containerd[1749]: time="2025-03-17T17:53:27.418974746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxhh6,Uid:439c8471-11d3-484b-b1fa-19ecdebdcae2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b675c726dcc96664bed034bf7076b6c14d848aedcdab05e6b432f02111099f3c\"" Mar 17 17:53:27.423898 containerd[1749]: time="2025-03-17T17:53:27.423741911Z" level=info msg="CreateContainer within sandbox \"b675c726dcc96664bed034bf7076b6c14d848aedcdab05e6b432f02111099f3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:53:27.430624 containerd[1749]: time="2025-03-17T17:53:27.430570837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6vwhw,Uid:e8c230cb-8a01-4761-936f-95b05907b21b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6997a70709bdddfdd749aae85e1bf054cf443adf9816b840753fdaa5a770b10\"" Mar 17 17:53:27.438692 containerd[1749]: time="2025-03-17T17:53:27.438383604Z" level=info msg="CreateContainer within sandbox \"b6997a70709bdddfdd749aae85e1bf054cf443adf9816b840753fdaa5a770b10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:53:27.470690 containerd[1749]: time="2025-03-17T17:53:27.470640475Z" level=info msg="CreateContainer within sandbox \"b675c726dcc96664bed034bf7076b6c14d848aedcdab05e6b432f02111099f3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e28c74a887aa388cc450ee3d291c4d25ca4b8876e67847f4ca3e6a9d2869994d\"" Mar 17 17:53:27.471179 containerd[1749]: time="2025-03-17T17:53:27.471155555Z" level=info msg="StartContainer for \"e28c74a887aa388cc450ee3d291c4d25ca4b8876e67847f4ca3e6a9d2869994d\"" Mar 17 17:53:27.487070 containerd[1749]: time="2025-03-17T17:53:27.486922210Z" level=info msg="CreateContainer within sandbox \"b6997a70709bdddfdd749aae85e1bf054cf443adf9816b840753fdaa5a770b10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"e0e73b6a5ef6d23767763de02ad0dc199829bf6ec6d3c536a2bb3730cef76cb3\"" Mar 17 17:53:27.488444 containerd[1749]: time="2025-03-17T17:53:27.488265331Z" level=info msg="StartContainer for \"e0e73b6a5ef6d23767763de02ad0dc199829bf6ec6d3c536a2bb3730cef76cb3\"" Mar 17 17:53:27.498061 systemd[1]: Started cri-containerd-e28c74a887aa388cc450ee3d291c4d25ca4b8876e67847f4ca3e6a9d2869994d.scope - libcontainer container e28c74a887aa388cc450ee3d291c4d25ca4b8876e67847f4ca3e6a9d2869994d. Mar 17 17:53:27.527659 systemd[1]: Started cri-containerd-e0e73b6a5ef6d23767763de02ad0dc199829bf6ec6d3c536a2bb3730cef76cb3.scope - libcontainer container e0e73b6a5ef6d23767763de02ad0dc199829bf6ec6d3c536a2bb3730cef76cb3. Mar 17 17:53:27.535030 containerd[1749]: time="2025-03-17T17:53:27.534812495Z" level=info msg="StartContainer for \"e28c74a887aa388cc450ee3d291c4d25ca4b8876e67847f4ca3e6a9d2869994d\" returns successfully" Mar 17 17:53:27.569726 containerd[1749]: time="2025-03-17T17:53:27.569126727Z" level=info msg="StartContainer for \"e0e73b6a5ef6d23767763de02ad0dc199829bf6ec6d3c536a2bb3730cef76cb3\" returns successfully" Mar 17 17:53:28.053569 kubelet[3374]: I0317 17:53:28.053440 3374 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bxhh6" podStartSLOduration=26.053418903 podStartE2EDuration="26.053418903s" podCreationTimestamp="2025-03-17 17:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:53:28.051678581 +0000 UTC m=+31.238162106" watchObservedRunningTime="2025-03-17 17:53:28.053418903 +0000 UTC m=+31.239902428" Mar 17 17:53:28.102365 kubelet[3374]: I0317 17:53:28.101729 3374 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6vwhw" podStartSLOduration=26.101714588 podStartE2EDuration="26.101714588s" podCreationTimestamp="2025-03-17 17:53:02 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:53:28.099614266 +0000 UTC m=+31.286097791" watchObservedRunningTime="2025-03-17 17:53:28.101714588 +0000 UTC m=+31.288198113" Mar 17 17:53:28.312681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102974558.mount: Deactivated successfully. Mar 17 17:53:29.357746 systemd[1]: Started sshd@7-10.200.20.19:22-43.248.108.39:55757.service - OpenSSH per-connection server daemon (43.248.108.39:55757). Mar 17 17:53:29.946781 systemd[1]: Started sshd@8-10.200.20.19:22-60.13.138.157:64326.service - OpenSSH per-connection server daemon (60.13.138.157:64326). Mar 17 17:53:31.821362 sshd[4742]: Connection closed by 43.248.108.39 port 55757 Mar 17 17:53:31.822035 systemd[1]: sshd@7-10.200.20.19:22-43.248.108.39:55757.service: Deactivated successfully. Mar 17 17:53:38.766067 waagent[1961]: 2025-03-17T17:53:38.766005Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Mar 17 17:53:38.772895 waagent[1961]: 2025-03-17T17:53:38.772846Z INFO ExtHandler Mar 17 17:53:38.772998 waagent[1961]: 2025-03-17T17:53:38.772963Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2697856f-3355-472b-bef4-3bce6e8369ea eTag: 12059274951190978756 source: Fabric] Mar 17 17:53:38.773336 waagent[1961]: 2025-03-17T17:53:38.773295Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 17 17:53:38.773932 waagent[1961]: 2025-03-17T17:53:38.773888Z INFO ExtHandler Mar 17 17:53:38.774002 waagent[1961]: 2025-03-17T17:53:38.773973Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Mar 17 17:53:38.851746 waagent[1961]: 2025-03-17T17:53:38.851701Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 17:53:39.022193 waagent[1961]: 2025-03-17T17:53:39.022038Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E3CDC505828B4E7854870D074B05B9B45D07777A', 'hasPrivateKey': True} Mar 17 17:53:39.022594 waagent[1961]: 2025-03-17T17:53:39.022458Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E034D2A4A3C4844296EFF3F33786176FE5DCF845', 'hasPrivateKey': False} Mar 17 17:53:39.022932 waagent[1961]: 2025-03-17T17:53:39.022887Z INFO ExtHandler Fetch goal state completed Mar 17 17:53:39.023330 waagent[1961]: 2025-03-17T17:53:39.023283Z INFO ExtHandler ExtHandler Mar 17 17:53:39.023425 waagent[1961]: 2025-03-17T17:53:39.023380Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 14cfd195-a92f-4de7-8578-1f60ecef3846 correlation 4a47988d-c366-4467-9978-4cee7d47e3c8 created: 2025-03-17T17:53:27.394012Z] Mar 17 17:53:39.023758 waagent[1961]: 2025-03-17T17:53:39.023717Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 17 17:53:39.024312 waagent[1961]: 2025-03-17T17:53:39.024274Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Mar 17 17:53:44.935982 sshd[4744]: Connection closed by 60.13.138.157 port 64326 [preauth] Mar 17 17:53:44.936633 systemd[1]: sshd@8-10.200.20.19:22-60.13.138.157:64326.service: Deactivated successfully. Mar 17 17:55:22.738719 systemd[1]: Started sshd@9-10.200.20.19:22-10.200.16.10:45436.service - OpenSSH per-connection server daemon (10.200.16.10:45436). 
Mar 17 17:55:23.221573 sshd[4777]: Accepted publickey for core from 10.200.16.10 port 45436 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:23.222798 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:23.228808 systemd-logind[1714]: New session 10 of user core. Mar 17 17:55:23.233634 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:55:23.691520 sshd[4779]: Connection closed by 10.200.16.10 port 45436 Mar 17 17:55:23.693225 sshd-session[4777]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:23.696270 systemd[1]: sshd@9-10.200.20.19:22-10.200.16.10:45436.service: Deactivated successfully. Mar 17 17:55:23.697906 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:55:23.698676 systemd-logind[1714]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:55:23.699787 systemd-logind[1714]: Removed session 10. Mar 17 17:55:28.786744 systemd[1]: Started sshd@10-10.200.20.19:22-10.200.16.10:39780.service - OpenSSH per-connection server daemon (10.200.16.10:39780). Mar 17 17:55:29.273079 sshd[4792]: Accepted publickey for core from 10.200.16.10 port 39780 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:29.274305 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:29.278200 systemd-logind[1714]: New session 11 of user core. Mar 17 17:55:29.286615 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:55:29.693288 sshd[4794]: Connection closed by 10.200.16.10 port 39780 Mar 17 17:55:29.693851 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:29.697018 systemd[1]: sshd@10-10.200.20.19:22-10.200.16.10:39780.service: Deactivated successfully. Mar 17 17:55:29.699369 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:55:29.700198 systemd-logind[1714]: Session 11 logged out. 
Waiting for processes to exit. Mar 17 17:55:29.701054 systemd-logind[1714]: Removed session 11. Mar 17 17:55:34.774687 systemd[1]: Started sshd@11-10.200.20.19:22-10.200.16.10:39782.service - OpenSSH per-connection server daemon (10.200.16.10:39782). Mar 17 17:55:35.221400 sshd[4808]: Accepted publickey for core from 10.200.16.10 port 39782 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:35.222636 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:35.226617 systemd-logind[1714]: New session 12 of user core. Mar 17 17:55:35.236612 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:55:35.601146 sshd[4810]: Connection closed by 10.200.16.10 port 39782 Mar 17 17:55:35.601811 sshd-session[4808]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:35.605172 systemd[1]: sshd@11-10.200.20.19:22-10.200.16.10:39782.service: Deactivated successfully. Mar 17 17:55:35.607945 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:55:35.608969 systemd-logind[1714]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:55:35.609802 systemd-logind[1714]: Removed session 12. Mar 17 17:55:40.683231 systemd[1]: Started sshd@12-10.200.20.19:22-10.200.16.10:52964.service - OpenSSH per-connection server daemon (10.200.16.10:52964). Mar 17 17:55:41.727845 sshd[4823]: Accepted publickey for core from 10.200.16.10 port 52964 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:41.728081 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:41.733439 systemd-logind[1714]: New session 13 of user core. Mar 17 17:55:41.740632 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 17 17:55:42.060986 sshd[4825]: Connection closed by 10.200.16.10 port 52964 Mar 17 17:55:42.059985 sshd-session[4823]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:42.063133 systemd[1]: sshd@12-10.200.20.19:22-10.200.16.10:52964.service: Deactivated successfully. Mar 17 17:55:42.064833 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:55:42.065610 systemd-logind[1714]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:55:42.066791 systemd-logind[1714]: Removed session 13. Mar 17 17:55:42.148272 systemd[1]: Started sshd@13-10.200.20.19:22-10.200.16.10:52974.service - OpenSSH per-connection server daemon (10.200.16.10:52974). Mar 17 17:55:42.640092 sshd[4838]: Accepted publickey for core from 10.200.16.10 port 52974 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:42.641338 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:42.645340 systemd-logind[1714]: New session 14 of user core. Mar 17 17:55:42.652643 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:55:43.083159 sshd[4840]: Connection closed by 10.200.16.10 port 52974 Mar 17 17:55:43.083942 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:43.087406 systemd-logind[1714]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:55:43.088379 systemd[1]: sshd@13-10.200.20.19:22-10.200.16.10:52974.service: Deactivated successfully. Mar 17 17:55:43.090969 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:55:43.092226 systemd-logind[1714]: Removed session 14. Mar 17 17:55:43.170639 systemd[1]: Started sshd@14-10.200.20.19:22-10.200.16.10:52984.service - OpenSSH per-connection server daemon (10.200.16.10:52984). 
Mar 17 17:55:43.661146 sshd[4850]: Accepted publickey for core from 10.200.16.10 port 52984 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:43.662442 sshd-session[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:43.667606 systemd-logind[1714]: New session 15 of user core. Mar 17 17:55:43.676650 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:55:44.067576 sshd[4852]: Connection closed by 10.200.16.10 port 52984 Mar 17 17:55:44.067958 sshd-session[4850]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:44.071727 systemd[1]: sshd@14-10.200.20.19:22-10.200.16.10:52984.service: Deactivated successfully. Mar 17 17:55:44.073360 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:55:44.074227 systemd-logind[1714]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:55:44.075095 systemd-logind[1714]: Removed session 15. Mar 17 17:55:49.151278 systemd[1]: Started sshd@15-10.200.20.19:22-10.200.16.10:43870.service - OpenSSH per-connection server daemon (10.200.16.10:43870). Mar 17 17:55:49.597992 sshd[4863]: Accepted publickey for core from 10.200.16.10 port 43870 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:49.599368 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:49.603489 systemd-logind[1714]: New session 16 of user core. Mar 17 17:55:49.611615 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:55:49.983122 sshd[4865]: Connection closed by 10.200.16.10 port 43870 Mar 17 17:55:49.983691 sshd-session[4863]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:49.986308 systemd-logind[1714]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:55:49.986562 systemd[1]: sshd@15-10.200.20.19:22-10.200.16.10:43870.service: Deactivated successfully. 
Mar 17 17:55:49.988149 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:55:49.990098 systemd-logind[1714]: Removed session 16. Mar 17 17:55:55.074698 systemd[1]: Started sshd@16-10.200.20.19:22-10.200.16.10:43886.service - OpenSSH per-connection server daemon (10.200.16.10:43886). Mar 17 17:55:55.561533 sshd[4877]: Accepted publickey for core from 10.200.16.10 port 43886 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:55.562773 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:55.566825 systemd-logind[1714]: New session 17 of user core. Mar 17 17:55:55.577602 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:55:55.983586 sshd[4879]: Connection closed by 10.200.16.10 port 43886 Mar 17 17:55:55.984156 sshd-session[4877]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:55.987451 systemd-logind[1714]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:55:55.988417 systemd[1]: sshd@16-10.200.20.19:22-10.200.16.10:43886.service: Deactivated successfully. Mar 17 17:55:55.990266 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:55:55.991451 systemd-logind[1714]: Removed session 17. Mar 17 17:55:56.063675 systemd[1]: Started sshd@17-10.200.20.19:22-10.200.16.10:43896.service - OpenSSH per-connection server daemon (10.200.16.10:43896). Mar 17 17:55:56.510510 sshd[4891]: Accepted publickey for core from 10.200.16.10 port 43896 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:56.511848 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:56.517553 systemd-logind[1714]: New session 18 of user core. Mar 17 17:55:56.523613 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 17 17:55:56.924174 sshd[4893]: Connection closed by 10.200.16.10 port 43896 Mar 17 17:55:56.924783 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:56.927815 systemd[1]: sshd@17-10.200.20.19:22-10.200.16.10:43896.service: Deactivated successfully. Mar 17 17:55:56.929321 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:55:56.931372 systemd-logind[1714]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:55:56.932635 systemd-logind[1714]: Removed session 18. Mar 17 17:55:57.019698 systemd[1]: Started sshd@18-10.200.20.19:22-10.200.16.10:43898.service - OpenSSH per-connection server daemon (10.200.16.10:43898). Mar 17 17:55:57.506091 sshd[4905]: Accepted publickey for core from 10.200.16.10 port 43898 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:57.507353 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:57.512080 systemd-logind[1714]: New session 19 of user core. Mar 17 17:55:57.519623 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:55:59.174331 sshd[4908]: Connection closed by 10.200.16.10 port 43898 Mar 17 17:55:59.175151 sshd-session[4905]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:59.178351 systemd[1]: sshd@18-10.200.20.19:22-10.200.16.10:43898.service: Deactivated successfully. Mar 17 17:55:59.181000 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:55:59.181854 systemd-logind[1714]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:55:59.182972 systemd-logind[1714]: Removed session 19. Mar 17 17:55:59.256134 systemd[1]: Started sshd@19-10.200.20.19:22-10.200.16.10:59880.service - OpenSSH per-connection server daemon (10.200.16.10:59880). 
Mar 17 17:55:59.704579 sshd[4925]: Accepted publickey for core from 10.200.16.10 port 59880 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:55:59.705795 sshd-session[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:59.710145 systemd-logind[1714]: New session 20 of user core. Mar 17 17:55:59.717611 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:56:00.195977 sshd[4927]: Connection closed by 10.200.16.10 port 59880 Mar 17 17:56:00.195444 sshd-session[4925]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:00.198627 systemd-logind[1714]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:56:00.199212 systemd[1]: sshd@19-10.200.20.19:22-10.200.16.10:59880.service: Deactivated successfully. Mar 17 17:56:00.201252 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:56:00.202208 systemd-logind[1714]: Removed session 20. Mar 17 17:56:00.286692 systemd[1]: Started sshd@20-10.200.20.19:22-10.200.16.10:59884.service - OpenSSH per-connection server daemon (10.200.16.10:59884). Mar 17 17:56:00.773985 sshd[4936]: Accepted publickey for core from 10.200.16.10 port 59884 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:00.775328 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:00.780633 systemd-logind[1714]: New session 21 of user core. Mar 17 17:56:00.786623 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:56:01.197046 sshd[4938]: Connection closed by 10.200.16.10 port 59884 Mar 17 17:56:01.197611 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:01.201485 systemd-logind[1714]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:56:01.201698 systemd[1]: sshd@20-10.200.20.19:22-10.200.16.10:59884.service: Deactivated successfully. 
Mar 17 17:56:01.203763 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:56:01.205308 systemd-logind[1714]: Removed session 21. Mar 17 17:56:06.284705 systemd[1]: Started sshd@21-10.200.20.19:22-10.200.16.10:59892.service - OpenSSH per-connection server daemon (10.200.16.10:59892). Mar 17 17:56:06.728949 sshd[4951]: Accepted publickey for core from 10.200.16.10 port 59892 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:06.730190 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:06.735246 systemd-logind[1714]: New session 22 of user core. Mar 17 17:56:06.743610 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 17:56:07.110596 sshd[4953]: Connection closed by 10.200.16.10 port 59892 Mar 17 17:56:07.111125 sshd-session[4951]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:07.114548 systemd[1]: sshd@21-10.200.20.19:22-10.200.16.10:59892.service: Deactivated successfully. Mar 17 17:56:07.117252 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:56:07.118692 systemd-logind[1714]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:56:07.119751 systemd-logind[1714]: Removed session 22. Mar 17 17:56:12.198687 systemd[1]: Started sshd@22-10.200.20.19:22-10.200.16.10:46454.service - OpenSSH per-connection server daemon (10.200.16.10:46454). Mar 17 17:56:12.643095 sshd[4967]: Accepted publickey for core from 10.200.16.10 port 46454 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:12.644277 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:12.648969 systemd-logind[1714]: New session 23 of user core. Mar 17 17:56:12.658670 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 17 17:56:13.021547 sshd[4969]: Connection closed by 10.200.16.10 port 46454 Mar 17 17:56:13.022095 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:13.025271 systemd[1]: sshd@22-10.200.20.19:22-10.200.16.10:46454.service: Deactivated successfully. Mar 17 17:56:13.027735 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:56:13.028501 systemd-logind[1714]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:56:13.029404 systemd-logind[1714]: Removed session 23. Mar 17 17:56:18.114731 systemd[1]: Started sshd@23-10.200.20.19:22-10.200.16.10:46462.service - OpenSSH per-connection server daemon (10.200.16.10:46462). Mar 17 17:56:18.597622 sshd[4982]: Accepted publickey for core from 10.200.16.10 port 46462 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:18.599407 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:18.604738 systemd-logind[1714]: New session 24 of user core. Mar 17 17:56:18.613742 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 17 17:56:19.017210 sshd[4984]: Connection closed by 10.200.16.10 port 46462 Mar 17 17:56:19.017823 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:19.021218 systemd[1]: sshd@23-10.200.20.19:22-10.200.16.10:46462.service: Deactivated successfully. Mar 17 17:56:19.023014 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:56:19.023747 systemd-logind[1714]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:56:19.024573 systemd-logind[1714]: Removed session 24. Mar 17 17:56:24.109718 systemd[1]: Started sshd@24-10.200.20.19:22-10.200.16.10:36148.service - OpenSSH per-connection server daemon (10.200.16.10:36148). 
Mar 17 17:56:24.552863 sshd[4996]: Accepted publickey for core from 10.200.16.10 port 36148 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:24.554174 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:24.559548 systemd-logind[1714]: New session 25 of user core. Mar 17 17:56:24.563708 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 17 17:56:24.934640 sshd[5001]: Connection closed by 10.200.16.10 port 36148 Mar 17 17:56:24.934465 sshd-session[4996]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:24.937303 systemd[1]: sshd@24-10.200.20.19:22-10.200.16.10:36148.service: Deactivated successfully. Mar 17 17:56:24.940039 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 17:56:24.942306 systemd-logind[1714]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:56:24.943294 systemd-logind[1714]: Removed session 25. Mar 17 17:56:25.022103 systemd[1]: Started sshd@25-10.200.20.19:22-10.200.16.10:36152.service - OpenSSH per-connection server daemon (10.200.16.10:36152). Mar 17 17:56:25.507728 sshd[5013]: Accepted publickey for core from 10.200.16.10 port 36152 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:25.509438 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:25.514217 systemd-logind[1714]: New session 26 of user core. Mar 17 17:56:25.520687 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 17 17:56:27.584611 containerd[1749]: time="2025-03-17T17:56:27.584557352Z" level=info msg="StopContainer for \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\" with timeout 30 (s)" Mar 17 17:56:27.587028 containerd[1749]: time="2025-03-17T17:56:27.586866316Z" level=info msg="Stop container \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\" with signal terminated" Mar 17 17:56:27.597648 systemd[1]: cri-containerd-aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675.scope: Deactivated successfully. Mar 17 17:56:27.603281 containerd[1749]: time="2025-03-17T17:56:27.602689222Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:56:27.610268 containerd[1749]: time="2025-03-17T17:56:27.610236275Z" level=info msg="StopContainer for \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\" with timeout 2 (s)" Mar 17 17:56:27.610801 containerd[1749]: time="2025-03-17T17:56:27.610710595Z" level=info msg="Stop container \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\" with signal terminated" Mar 17 17:56:27.619946 systemd-networkd[1432]: lxc_health: Link DOWN Mar 17 17:56:27.619952 systemd-networkd[1432]: lxc_health: Lost carrier Mar 17 17:56:27.628094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675-rootfs.mount: Deactivated successfully. Mar 17 17:56:27.649789 systemd[1]: cri-containerd-2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5.scope: Deactivated successfully. Mar 17 17:56:27.650383 systemd[1]: cri-containerd-2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5.scope: Consumed 6.205s CPU time, 124.4M memory peak, 144K read from disk, 12.9M written to disk. 
Mar 17 17:56:27.668640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5-rootfs.mount: Deactivated successfully. Mar 17 17:56:27.695784 containerd[1749]: time="2025-03-17T17:56:27.695690132Z" level=info msg="shim disconnected" id=aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675 namespace=k8s.io Mar 17 17:56:27.695784 containerd[1749]: time="2025-03-17T17:56:27.695744452Z" level=warning msg="cleaning up after shim disconnected" id=aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675 namespace=k8s.io Mar 17 17:56:27.695784 containerd[1749]: time="2025-03-17T17:56:27.695752452Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:27.696617 containerd[1749]: time="2025-03-17T17:56:27.696583334Z" level=info msg="shim disconnected" id=2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5 namespace=k8s.io Mar 17 17:56:27.696836 containerd[1749]: time="2025-03-17T17:56:27.696720854Z" level=warning msg="cleaning up after shim disconnected" id=2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5 namespace=k8s.io Mar 17 17:56:27.696836 containerd[1749]: time="2025-03-17T17:56:27.696737134Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:27.721554 containerd[1749]: time="2025-03-17T17:56:27.721513694Z" level=info msg="StopContainer for \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\" returns successfully" Mar 17 17:56:27.722318 containerd[1749]: time="2025-03-17T17:56:27.722291975Z" level=info msg="StopPodSandbox for \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\"" Mar 17 17:56:27.722424 containerd[1749]: time="2025-03-17T17:56:27.722331655Z" level=info msg="Container to stop \"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:27.722424 containerd[1749]: 
time="2025-03-17T17:56:27.722342135Z" level=info msg="Container to stop \"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:27.722424 containerd[1749]: time="2025-03-17T17:56:27.722349775Z" level=info msg="Container to stop \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:27.722424 containerd[1749]: time="2025-03-17T17:56:27.722357975Z" level=info msg="Container to stop \"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:27.722424 containerd[1749]: time="2025-03-17T17:56:27.722365615Z" level=info msg="Container to stop \"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:27.724904 containerd[1749]: time="2025-03-17T17:56:27.724755579Z" level=info msg="StopContainer for \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\" returns successfully" Mar 17 17:56:27.724079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9-shm.mount: Deactivated successfully. Mar 17 17:56:27.725305 containerd[1749]: time="2025-03-17T17:56:27.725155980Z" level=info msg="StopPodSandbox for \"aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480\"" Mar 17 17:56:27.725360 containerd[1749]: time="2025-03-17T17:56:27.725330060Z" level=info msg="Container to stop \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:56:27.728398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480-shm.mount: Deactivated successfully. 
Mar 17 17:56:27.732160 systemd[1]: cri-containerd-df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9.scope: Deactivated successfully. Mar 17 17:56:27.736997 systemd[1]: cri-containerd-aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480.scope: Deactivated successfully. Mar 17 17:56:27.796880 containerd[1749]: time="2025-03-17T17:56:27.796822975Z" level=info msg="shim disconnected" id=aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480 namespace=k8s.io Mar 17 17:56:27.796880 containerd[1749]: time="2025-03-17T17:56:27.796870535Z" level=warning msg="cleaning up after shim disconnected" id=aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480 namespace=k8s.io Mar 17 17:56:27.796880 containerd[1749]: time="2025-03-17T17:56:27.796880415Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:27.797510 containerd[1749]: time="2025-03-17T17:56:27.797456056Z" level=info msg="shim disconnected" id=df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9 namespace=k8s.io Mar 17 17:56:27.797668 containerd[1749]: time="2025-03-17T17:56:27.797582536Z" level=warning msg="cleaning up after shim disconnected" id=df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9 namespace=k8s.io Mar 17 17:56:27.797668 containerd[1749]: time="2025-03-17T17:56:27.797594576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:56:27.810802 containerd[1749]: time="2025-03-17T17:56:27.810749237Z" level=info msg="TearDown network for sandbox \"aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480\" successfully" Mar 17 17:56:27.810802 containerd[1749]: time="2025-03-17T17:56:27.810785397Z" level=info msg="StopPodSandbox for \"aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480\" returns successfully" Mar 17 17:56:27.811696 containerd[1749]: time="2025-03-17T17:56:27.811611599Z" level=info msg="TearDown network for sandbox 
\"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" successfully" Mar 17 17:56:27.811696 containerd[1749]: time="2025-03-17T17:56:27.811631879Z" level=info msg="StopPodSandbox for \"df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9\" returns successfully" Mar 17 17:56:27.988214 kubelet[3374]: I0317 17:56:27.988169 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnpln\" (UniqueName: \"kubernetes.io/projected/c4171d92-9c65-42b8-aa9b-a80fd78cb1fd-kube-api-access-dnpln\") pod \"c4171d92-9c65-42b8-aa9b-a80fd78cb1fd\" (UID: \"c4171d92-9c65-42b8-aa9b-a80fd78cb1fd\") " Mar 17 17:56:27.988214 kubelet[3374]: I0317 17:56:27.988218 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cni-path\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.988214 kubelet[3374]: I0317 17:56:27.988236 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-run\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.988214 kubelet[3374]: I0317 17:56:27.988258 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce310019-ee05-46cf-a81e-ca102e7a26aa-clustermesh-secrets\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.988214 kubelet[3374]: I0317 17:56:27.988273 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-xtables-lock\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: 
\"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.988214 kubelet[3374]: I0317 17:56:27.988290 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg5wv\" (UniqueName: \"kubernetes.io/projected/ce310019-ee05-46cf-a81e-ca102e7a26aa-kube-api-access-cg5wv\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.988996 kubelet[3374]: I0317 17:56:27.988306 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-host-proc-sys-kernel\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.988996 kubelet[3374]: I0317 17:56:27.988321 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-etc-cni-netd\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.988996 kubelet[3374]: I0317 17:56:27.988337 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4171d92-9c65-42b8-aa9b-a80fd78cb1fd-cilium-config-path\") pod \"c4171d92-9c65-42b8-aa9b-a80fd78cb1fd\" (UID: \"c4171d92-9c65-42b8-aa9b-a80fd78cb1fd\") " Mar 17 17:56:27.988996 kubelet[3374]: I0317 17:56:27.988352 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-hostproc\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.988996 kubelet[3374]: I0317 17:56:27.988367 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-host-proc-sys-net\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.988996 kubelet[3374]: I0317 17:56:27.988382 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-bpf-maps\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.989122 kubelet[3374]: I0317 17:56:27.988398 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-cgroup\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.989122 kubelet[3374]: I0317 17:56:27.988416 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce310019-ee05-46cf-a81e-ca102e7a26aa-hubble-tls\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.989122 kubelet[3374]: I0317 17:56:27.988432 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-config-path\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.989122 kubelet[3374]: I0317 17:56:27.988446 3374 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-lib-modules\") pod \"ce310019-ee05-46cf-a81e-ca102e7a26aa\" (UID: \"ce310019-ee05-46cf-a81e-ca102e7a26aa\") " Mar 17 17:56:27.989122 kubelet[3374]: I0317 17:56:27.988512 3374 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:27.989122 kubelet[3374]: I0317 17:56:27.988547 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cni-path" (OuterVolumeSpecName: "cni-path") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:27.989242 kubelet[3374]: I0317 17:56:27.988562 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:27.991011 kubelet[3374]: I0317 17:56:27.990611 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce310019-ee05-46cf-a81e-ca102e7a26aa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:56:27.991011 kubelet[3374]: I0317 17:56:27.990665 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:27.991011 kubelet[3374]: I0317 17:56:27.990714 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:27.991011 kubelet[3374]: I0317 17:56:27.990740 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:27.991011 kubelet[3374]: I0317 17:56:27.990774 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:27.991736 kubelet[3374]: I0317 17:56:27.991582 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-hostproc" (OuterVolumeSpecName: "hostproc") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:27.991736 kubelet[3374]: I0317 17:56:27.991646 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:27.991736 kubelet[3374]: I0317 17:56:27.991663 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:56:27.993707 kubelet[3374]: I0317 17:56:27.993570 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4171d92-9c65-42b8-aa9b-a80fd78cb1fd-kube-api-access-dnpln" (OuterVolumeSpecName: "kube-api-access-dnpln") pod "c4171d92-9c65-42b8-aa9b-a80fd78cb1fd" (UID: "c4171d92-9c65-42b8-aa9b-a80fd78cb1fd"). InnerVolumeSpecName "kube-api-access-dnpln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:56:27.994083 kubelet[3374]: I0317 17:56:27.994049 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:56:27.994554 kubelet[3374]: I0317 17:56:27.994425 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce310019-ee05-46cf-a81e-ca102e7a26aa-kube-api-access-cg5wv" (OuterVolumeSpecName: "kube-api-access-cg5wv") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "kube-api-access-cg5wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:56:27.997037 kubelet[3374]: I0317 17:56:27.996952 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4171d92-9c65-42b8-aa9b-a80fd78cb1fd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c4171d92-9c65-42b8-aa9b-a80fd78cb1fd" (UID: "c4171d92-9c65-42b8-aa9b-a80fd78cb1fd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:56:27.997439 kubelet[3374]: I0317 17:56:27.997411 3374 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce310019-ee05-46cf-a81e-ca102e7a26aa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ce310019-ee05-46cf-a81e-ca102e7a26aa" (UID: "ce310019-ee05-46cf-a81e-ca102e7a26aa"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:56:28.088607 kubelet[3374]: I0317 17:56:28.088555 3374 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-host-proc-sys-net\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088607 kubelet[3374]: I0317 17:56:28.088590 3374 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-bpf-maps\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088607 kubelet[3374]: I0317 17:56:28.088608 3374 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-cgroup\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088607 kubelet[3374]: I0317 17:56:28.088618 3374 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce310019-ee05-46cf-a81e-ca102e7a26aa-hubble-tls\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088806 kubelet[3374]: I0317 17:56:28.088627 3374 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-config-path\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088806 kubelet[3374]: I0317 17:56:28.088634 3374 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-lib-modules\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088806 kubelet[3374]: I0317 17:56:28.088641 3374 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cni-path\") on node 
\"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088806 kubelet[3374]: I0317 17:56:28.088648 3374 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-cilium-run\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088806 kubelet[3374]: I0317 17:56:28.088656 3374 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce310019-ee05-46cf-a81e-ca102e7a26aa-clustermesh-secrets\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088806 kubelet[3374]: I0317 17:56:28.088663 3374 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dnpln\" (UniqueName: \"kubernetes.io/projected/c4171d92-9c65-42b8-aa9b-a80fd78cb1fd-kube-api-access-dnpln\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088806 kubelet[3374]: I0317 17:56:28.088673 3374 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cg5wv\" (UniqueName: \"kubernetes.io/projected/ce310019-ee05-46cf-a81e-ca102e7a26aa-kube-api-access-cg5wv\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088806 kubelet[3374]: I0317 17:56:28.088682 3374 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-host-proc-sys-kernel\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088961 kubelet[3374]: I0317 17:56:28.088691 3374 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-xtables-lock\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088961 kubelet[3374]: I0317 17:56:28.088700 3374 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c4171d92-9c65-42b8-aa9b-a80fd78cb1fd-cilium-config-path\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088961 kubelet[3374]: I0317 17:56:28.088707 3374 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-hostproc\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.088961 kubelet[3374]: I0317 17:56:28.088715 3374 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce310019-ee05-46cf-a81e-ca102e7a26aa-etc-cni-netd\") on node \"ci-4230.1.0-a-76d88708f5\" DevicePath \"\"" Mar 17 17:56:28.347079 kubelet[3374]: I0317 17:56:28.346976 3374 scope.go:117] "RemoveContainer" containerID="aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675" Mar 17 17:56:28.352515 systemd[1]: Removed slice kubepods-besteffort-podc4171d92_9c65_42b8_aa9b_a80fd78cb1fd.slice - libcontainer container kubepods-besteffort-podc4171d92_9c65_42b8_aa9b_a80fd78cb1fd.slice. Mar 17 17:56:28.353135 containerd[1749]: time="2025-03-17T17:56:28.352678870Z" level=info msg="RemoveContainer for \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\"" Mar 17 17:56:28.360525 systemd[1]: Removed slice kubepods-burstable-podce310019_ee05_46cf_a81e_ca102e7a26aa.slice - libcontainer container kubepods-burstable-podce310019_ee05_46cf_a81e_ca102e7a26aa.slice. Mar 17 17:56:28.360731 systemd[1]: kubepods-burstable-podce310019_ee05_46cf_a81e_ca102e7a26aa.slice: Consumed 6.270s CPU time, 124.8M memory peak, 144K read from disk, 12.9M written to disk. 
Mar 17 17:56:28.363233 containerd[1749]: time="2025-03-17T17:56:28.363104207Z" level=info msg="RemoveContainer for \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\" returns successfully" Mar 17 17:56:28.363365 kubelet[3374]: I0317 17:56:28.363340 3374 scope.go:117] "RemoveContainer" containerID="aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675" Mar 17 17:56:28.363650 containerd[1749]: time="2025-03-17T17:56:28.363577967Z" level=error msg="ContainerStatus for \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\": not found" Mar 17 17:56:28.363716 kubelet[3374]: E0317 17:56:28.363680 3374 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\": not found" containerID="aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675" Mar 17 17:56:28.363782 kubelet[3374]: I0317 17:56:28.363711 3374 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675"} err="failed to get container status \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\": rpc error: code = NotFound desc = an error occurred when try to find container \"aadc2231c7fb1afbfe6ab30e6a252e4d565128e2059fc4911080a5e28ac84675\": not found" Mar 17 17:56:28.363809 kubelet[3374]: I0317 17:56:28.363785 3374 scope.go:117] "RemoveContainer" containerID="2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5" Mar 17 17:56:28.364673 containerd[1749]: time="2025-03-17T17:56:28.364604689Z" level=info msg="RemoveContainer for \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\"" Mar 17 17:56:28.380492 
containerd[1749]: time="2025-03-17T17:56:28.379875594Z" level=info msg="RemoveContainer for \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\" returns successfully" Mar 17 17:56:28.380668 kubelet[3374]: I0317 17:56:28.380643 3374 scope.go:117] "RemoveContainer" containerID="bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd" Mar 17 17:56:28.381574 containerd[1749]: time="2025-03-17T17:56:28.381552676Z" level=info msg="RemoveContainer for \"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd\"" Mar 17 17:56:28.401495 containerd[1749]: time="2025-03-17T17:56:28.401448628Z" level=info msg="RemoveContainer for \"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd\" returns successfully" Mar 17 17:56:28.401841 kubelet[3374]: I0317 17:56:28.401693 3374 scope.go:117] "RemoveContainer" containerID="6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7" Mar 17 17:56:28.403108 containerd[1749]: time="2025-03-17T17:56:28.403071391Z" level=info msg="RemoveContainer for \"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7\"" Mar 17 17:56:28.416189 containerd[1749]: time="2025-03-17T17:56:28.416099972Z" level=info msg="RemoveContainer for \"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7\" returns successfully" Mar 17 17:56:28.416369 kubelet[3374]: I0317 17:56:28.416335 3374 scope.go:117] "RemoveContainer" containerID="f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd" Mar 17 17:56:28.417915 containerd[1749]: time="2025-03-17T17:56:28.417887135Z" level=info msg="RemoveContainer for \"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd\"" Mar 17 17:56:28.428912 containerd[1749]: time="2025-03-17T17:56:28.428842472Z" level=info msg="RemoveContainer for \"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd\" returns successfully" Mar 17 17:56:28.429457 kubelet[3374]: I0317 17:56:28.429049 3374 scope.go:117] "RemoveContainer" 
containerID="4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2" Mar 17 17:56:28.430177 containerd[1749]: time="2025-03-17T17:56:28.430157475Z" level=info msg="RemoveContainer for \"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2\"" Mar 17 17:56:28.448549 containerd[1749]: time="2025-03-17T17:56:28.448499584Z" level=info msg="RemoveContainer for \"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2\" returns successfully" Mar 17 17:56:28.448836 kubelet[3374]: I0317 17:56:28.448804 3374 scope.go:117] "RemoveContainer" containerID="2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5" Mar 17 17:56:28.449126 containerd[1749]: time="2025-03-17T17:56:28.449092665Z" level=error msg="ContainerStatus for \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\": not found" Mar 17 17:56:28.449456 kubelet[3374]: E0317 17:56:28.449336 3374 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\": not found" containerID="2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5" Mar 17 17:56:28.449456 kubelet[3374]: I0317 17:56:28.449364 3374 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5"} err="failed to get container status \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d5e2feee3089d9d8ba492262881b07a07f61429c044eed2a6a4f184a3adf6b5\": not found" Mar 17 17:56:28.449456 kubelet[3374]: I0317 17:56:28.449384 3374 scope.go:117] "RemoveContainer" 
containerID="bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd" Mar 17 17:56:28.449728 containerd[1749]: time="2025-03-17T17:56:28.449678506Z" level=error msg="ContainerStatus for \"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd\": not found" Mar 17 17:56:28.449861 kubelet[3374]: E0317 17:56:28.449822 3374 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd\": not found" containerID="bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd" Mar 17 17:56:28.449893 kubelet[3374]: I0317 17:56:28.449865 3374 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd"} err="failed to get container status \"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf3315a4dbeb7a3ef81d2d2f367766e41b4915e9278888c542fdab1f126d35bd\": not found" Mar 17 17:56:28.449893 kubelet[3374]: I0317 17:56:28.449882 3374 scope.go:117] "RemoveContainer" containerID="6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7" Mar 17 17:56:28.450192 containerd[1749]: time="2025-03-17T17:56:28.450149667Z" level=error msg="ContainerStatus for \"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7\": not found" Mar 17 17:56:28.450313 kubelet[3374]: E0317 17:56:28.450287 3374 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7\": not found" containerID="6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7" Mar 17 17:56:28.450381 kubelet[3374]: I0317 17:56:28.450316 3374 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7"} err="failed to get container status \"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7\": rpc error: code = NotFound desc = an error occurred when try to find container \"6507732d8f7484c381d2f7735a6ee33710f6788159c0ac95e2c35fa4a55f5ff7\": not found" Mar 17 17:56:28.450381 kubelet[3374]: I0317 17:56:28.450331 3374 scope.go:117] "RemoveContainer" containerID="f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd" Mar 17 17:56:28.450632 containerd[1749]: time="2025-03-17T17:56:28.450559947Z" level=error msg="ContainerStatus for \"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd\": not found" Mar 17 17:56:28.450869 kubelet[3374]: E0317 17:56:28.450761 3374 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd\": not found" containerID="f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd" Mar 17 17:56:28.450869 kubelet[3374]: I0317 17:56:28.450788 3374 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd"} err="failed to get container status \"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"f92be7a56d212e85a1daf3cb1401596bbfb94fd43200644bfb208372f1cd9dfd\": not found" Mar 17 17:56:28.450869 kubelet[3374]: I0317 17:56:28.450804 3374 scope.go:117] "RemoveContainer" containerID="4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2" Mar 17 17:56:28.451020 containerd[1749]: time="2025-03-17T17:56:28.450984788Z" level=error msg="ContainerStatus for \"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2\": not found" Mar 17 17:56:28.451252 kubelet[3374]: E0317 17:56:28.451193 3374 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2\": not found" containerID="4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2" Mar 17 17:56:28.451252 kubelet[3374]: I0317 17:56:28.451235 3374 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2"} err="failed to get container status \"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2\": rpc error: code = NotFound desc = an error occurred when try to find container \"4232f07751dd9bba4b9efcb8e3b6cba56d9f3983bfb079a3d8c7a0aaf1222ef2\": not found" Mar 17 17:56:28.581017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaa4b7624fe79fc707b94d6dc243d268664cb605e14b6b7def07ecf8b6142480-rootfs.mount: Deactivated successfully. Mar 17 17:56:28.581102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df2bef1438ad461d004ce64632ab4db91063a0bcc43e2548606132e83b1ac2c9-rootfs.mount: Deactivated successfully. 
Mar 17 17:56:28.581157 systemd[1]: var-lib-kubelet-pods-c4171d92\x2d9c65\x2d42b8\x2daa9b\x2da80fd78cb1fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddnpln.mount: Deactivated successfully. Mar 17 17:56:28.581208 systemd[1]: var-lib-kubelet-pods-ce310019\x2dee05\x2d46cf\x2da81e\x2dca102e7a26aa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcg5wv.mount: Deactivated successfully. Mar 17 17:56:28.581263 systemd[1]: var-lib-kubelet-pods-ce310019\x2dee05\x2d46cf\x2da81e\x2dca102e7a26aa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:56:28.581311 systemd[1]: var-lib-kubelet-pods-ce310019\x2dee05\x2d46cf\x2da81e\x2dca102e7a26aa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:56:28.921536 kubelet[3374]: I0317 17:56:28.921492 3374 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4171d92-9c65-42b8-aa9b-a80fd78cb1fd" path="/var/lib/kubelet/pods/c4171d92-9c65-42b8-aa9b-a80fd78cb1fd/volumes" Mar 17 17:56:28.921922 kubelet[3374]: I0317 17:56:28.921898 3374 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce310019-ee05-46cf-a81e-ca102e7a26aa" path="/var/lib/kubelet/pods/ce310019-ee05-46cf-a81e-ca102e7a26aa/volumes" Mar 17 17:56:29.591514 sshd[5015]: Connection closed by 10.200.16.10 port 36152 Mar 17 17:56:29.592080 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:29.595218 systemd-logind[1714]: Session 26 logged out. Waiting for processes to exit. Mar 17 17:56:29.595745 systemd[1]: sshd@25-10.200.20.19:22-10.200.16.10:36152.service: Deactivated successfully. Mar 17 17:56:29.597704 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:56:29.597961 systemd[1]: session-26.scope: Consumed 1.142s CPU time, 23.5M memory peak. Mar 17 17:56:29.600136 systemd-logind[1714]: Removed session 26. 
Mar 17 17:56:29.672392 systemd[1]: Started sshd@26-10.200.20.19:22-10.200.16.10:39126.service - OpenSSH per-connection server daemon (10.200.16.10:39126). Mar 17 17:56:30.120302 sshd[5178]: Accepted publickey for core from 10.200.16.10 port 39126 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY Mar 17 17:56:30.121660 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:30.126077 systemd-logind[1714]: New session 27 of user core. Mar 17 17:56:30.135640 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 17 17:56:32.028506 kubelet[3374]: E0317 17:56:32.028449 3374 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:56:32.048484 kubelet[3374]: I0317 17:56:32.046299 3374 setters.go:600] "Node became not ready" node="ci-4230.1.0-a-76d88708f5" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:56:32Z","lastTransitionTime":"2025-03-17T17:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 17:56:32.448872 kubelet[3374]: E0317 17:56:32.448828 3374 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce310019-ee05-46cf-a81e-ca102e7a26aa" containerName="mount-bpf-fs" Mar 17 17:56:32.448872 kubelet[3374]: E0317 17:56:32.448860 3374 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce310019-ee05-46cf-a81e-ca102e7a26aa" containerName="mount-cgroup" Mar 17 17:56:32.448872 kubelet[3374]: E0317 17:56:32.448867 3374 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce310019-ee05-46cf-a81e-ca102e7a26aa" containerName="apply-sysctl-overwrites" Mar 17 17:56:32.449038 kubelet[3374]: E0317 17:56:32.448873 3374 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="c4171d92-9c65-42b8-aa9b-a80fd78cb1fd" containerName="cilium-operator" Mar 17 17:56:32.449038 kubelet[3374]: E0317 17:56:32.448984 3374 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce310019-ee05-46cf-a81e-ca102e7a26aa" containerName="clean-cilium-state" Mar 17 17:56:32.449038 kubelet[3374]: E0317 17:56:32.448992 3374 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce310019-ee05-46cf-a81e-ca102e7a26aa" containerName="cilium-agent" Mar 17 17:56:32.449038 kubelet[3374]: I0317 17:56:32.449018 3374 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce310019-ee05-46cf-a81e-ca102e7a26aa" containerName="cilium-agent" Mar 17 17:56:32.449038 kubelet[3374]: I0317 17:56:32.449024 3374 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4171d92-9c65-42b8-aa9b-a80fd78cb1fd" containerName="cilium-operator" Mar 17 17:56:32.458446 systemd[1]: Created slice kubepods-burstable-pod4f3e5762_d8c0_4b1d_979d_6edb5e16f78b.slice - libcontainer container kubepods-burstable-pod4f3e5762_d8c0_4b1d_979d_6edb5e16f78b.slice. Mar 17 17:56:32.497846 sshd[5180]: Connection closed by 10.200.16.10 port 39126 Mar 17 17:56:32.498425 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:32.503108 systemd[1]: sshd@26-10.200.20.19:22-10.200.16.10:39126.service: Deactivated successfully. Mar 17 17:56:32.508011 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 17:56:32.508915 systemd[1]: session-27.scope: Consumed 1.970s CPU time, 25.6M memory peak. Mar 17 17:56:32.510339 systemd-logind[1714]: Session 27 logged out. Waiting for processes to exit. Mar 17 17:56:32.511974 systemd-logind[1714]: Removed session 27. Mar 17 17:56:32.585739 systemd[1]: Started sshd@27-10.200.20.19:22-10.200.16.10:39138.service - OpenSSH per-connection server daemon (10.200.16.10:39138). 
Mar 17 17:56:32.611090 kubelet[3374]: I0317 17:56:32.611047 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-cilium-config-path\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611608 kubelet[3374]: I0317 17:56:32.611276 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-host-proc-sys-kernel\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611608 kubelet[3374]: I0317 17:56:32.611312 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-cilium-run\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611608 kubelet[3374]: I0317 17:56:32.611331 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-hostproc\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611608 kubelet[3374]: I0317 17:56:32.611347 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-xtables-lock\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611608 kubelet[3374]: I0317 17:56:32.611363 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-cilium-cgroup\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611608 kubelet[3374]: I0317 17:56:32.611379 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-cilium-ipsec-secrets\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611768 kubelet[3374]: I0317 17:56:32.611395 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-cni-path\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611768 kubelet[3374]: I0317 17:56:32.611409 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hhb2\" (UniqueName: \"kubernetes.io/projected/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-kube-api-access-8hhb2\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611768 kubelet[3374]: I0317 17:56:32.611427 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-clustermesh-secrets\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611768 kubelet[3374]: I0317 17:56:32.611442 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-bpf-maps\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611768 kubelet[3374]: I0317 17:56:32.611457 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-etc-cni-netd\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611768 kubelet[3374]: I0317 17:56:32.611472 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-hubble-tls\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611884 kubelet[3374]: I0317 17:56:32.611514 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-lib-modules\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.611884 kubelet[3374]: I0317 17:56:32.611537 3374 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f3e5762-d8c0-4b1d-979d-6edb5e16f78b-host-proc-sys-net\") pod \"cilium-jnhxh\" (UID: \"4f3e5762-d8c0-4b1d-979d-6edb5e16f78b\") " pod="kube-system/cilium-jnhxh"
Mar 17 17:56:32.762779 containerd[1749]: time="2025-03-17T17:56:32.762668929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jnhxh,Uid:4f3e5762-d8c0-4b1d-979d-6edb5e16f78b,Namespace:kube-system,Attempt:0,}"
Mar 17 17:56:32.800811 containerd[1749]: time="2025-03-17T17:56:32.800247110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:56:32.800811 containerd[1749]: time="2025-03-17T17:56:32.800354030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:56:32.800811 containerd[1749]: time="2025-03-17T17:56:32.800381270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:32.800811 containerd[1749]: time="2025-03-17T17:56:32.800548510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:32.818681 systemd[1]: Started cri-containerd-0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0.scope - libcontainer container 0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0.
Mar 17 17:56:32.838066 containerd[1749]: time="2025-03-17T17:56:32.837995731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jnhxh,Uid:4f3e5762-d8c0-4b1d-979d-6edb5e16f78b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0\""
Mar 17 17:56:32.841388 containerd[1749]: time="2025-03-17T17:56:32.841335496Z" level=info msg="CreateContainer within sandbox \"0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:56:32.876902 containerd[1749]: time="2025-03-17T17:56:32.876861233Z" level=info msg="CreateContainer within sandbox \"0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7dfea5a38956ecd2d91819661821c91d2a41dc05580a76f89ce3a390ed6656d5\""
Mar 17 17:56:32.877933 containerd[1749]: time="2025-03-17T17:56:32.877859355Z" level=info msg="StartContainer for \"7dfea5a38956ecd2d91819661821c91d2a41dc05580a76f89ce3a390ed6656d5\""
Mar 17 17:56:32.900718 systemd[1]: Started cri-containerd-7dfea5a38956ecd2d91819661821c91d2a41dc05580a76f89ce3a390ed6656d5.scope - libcontainer container 7dfea5a38956ecd2d91819661821c91d2a41dc05580a76f89ce3a390ed6656d5.
Mar 17 17:56:32.931813 containerd[1749]: time="2025-03-17T17:56:32.931760122Z" level=info msg="StartContainer for \"7dfea5a38956ecd2d91819661821c91d2a41dc05580a76f89ce3a390ed6656d5\" returns successfully"
Mar 17 17:56:32.934310 systemd[1]: cri-containerd-7dfea5a38956ecd2d91819661821c91d2a41dc05580a76f89ce3a390ed6656d5.scope: Deactivated successfully.
Mar 17 17:56:33.009746 containerd[1749]: time="2025-03-17T17:56:33.009468047Z" level=info msg="shim disconnected" id=7dfea5a38956ecd2d91819661821c91d2a41dc05580a76f89ce3a390ed6656d5 namespace=k8s.io
Mar 17 17:56:33.009746 containerd[1749]: time="2025-03-17T17:56:33.009637167Z" level=warning msg="cleaning up after shim disconnected" id=7dfea5a38956ecd2d91819661821c91d2a41dc05580a76f89ce3a390ed6656d5 namespace=k8s.io
Mar 17 17:56:33.009746 containerd[1749]: time="2025-03-17T17:56:33.009647967Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:33.020582 containerd[1749]: time="2025-03-17T17:56:33.019742423Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:56:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 17:56:33.031129 sshd[5190]: Accepted publickey for core from 10.200.16.10 port 39138 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY
Mar 17 17:56:33.032439 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:56:33.036777 systemd-logind[1714]: New session 28 of user core.
Mar 17 17:56:33.043647 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 17 17:56:33.350370 sshd[5300]: Connection closed by 10.200.16.10 port 39138
Mar 17 17:56:33.350218 sshd-session[5190]: pam_unix(sshd:session): session closed for user core
Mar 17 17:56:33.352975 systemd[1]: sshd@27-10.200.20.19:22-10.200.16.10:39138.service: Deactivated successfully.
Mar 17 17:56:33.355333 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 17:56:33.357155 systemd-logind[1714]: Session 28 logged out. Waiting for processes to exit.
Mar 17 17:56:33.358244 systemd-logind[1714]: Removed session 28.
Mar 17 17:56:33.370972 containerd[1749]: time="2025-03-17T17:56:33.370926709Z" level=info msg="CreateContainer within sandbox \"0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:56:33.413236 containerd[1749]: time="2025-03-17T17:56:33.413156977Z" level=info msg="CreateContainer within sandbox \"0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1b857f32fd5d99d299212a6ca792c5f2215556d5c60d472694354bef6afed6d0\""
Mar 17 17:56:33.414135 containerd[1749]: time="2025-03-17T17:56:33.413853178Z" level=info msg="StartContainer for \"1b857f32fd5d99d299212a6ca792c5f2215556d5c60d472694354bef6afed6d0\""
Mar 17 17:56:33.440457 systemd[1]: Started sshd@28-10.200.20.19:22-10.200.16.10:39152.service - OpenSSH per-connection server daemon (10.200.16.10:39152).
Mar 17 17:56:33.445645 systemd[1]: Started cri-containerd-1b857f32fd5d99d299212a6ca792c5f2215556d5c60d472694354bef6afed6d0.scope - libcontainer container 1b857f32fd5d99d299212a6ca792c5f2215556d5c60d472694354bef6afed6d0.
Mar 17 17:56:33.484155 systemd[1]: cri-containerd-1b857f32fd5d99d299212a6ca792c5f2215556d5c60d472694354bef6afed6d0.scope: Deactivated successfully.
Mar 17 17:56:33.485419 containerd[1749]: time="2025-03-17T17:56:33.485295773Z" level=info msg="StartContainer for \"1b857f32fd5d99d299212a6ca792c5f2215556d5c60d472694354bef6afed6d0\" returns successfully"
Mar 17 17:56:33.517063 containerd[1749]: time="2025-03-17T17:56:33.516954264Z" level=info msg="shim disconnected" id=1b857f32fd5d99d299212a6ca792c5f2215556d5c60d472694354bef6afed6d0 namespace=k8s.io
Mar 17 17:56:33.517063 containerd[1749]: time="2025-03-17T17:56:33.517010704Z" level=warning msg="cleaning up after shim disconnected" id=1b857f32fd5d99d299212a6ca792c5f2215556d5c60d472694354bef6afed6d0 namespace=k8s.io
Mar 17 17:56:33.517063 containerd[1749]: time="2025-03-17T17:56:33.517018584Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:33.888970 sshd[5323]: Accepted publickey for core from 10.200.16.10 port 39152 ssh2: RSA SHA256:o263vcH4SuOysIKXZsTOtlkNJCrs70lnHQg7wniZ3pY
Mar 17 17:56:33.890219 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:56:33.894267 systemd-logind[1714]: New session 29 of user core.
Mar 17 17:56:33.901630 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 17 17:56:34.375010 containerd[1749]: time="2025-03-17T17:56:34.374965685Z" level=info msg="CreateContainer within sandbox \"0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:56:34.415061 containerd[1749]: time="2025-03-17T17:56:34.415014710Z" level=info msg="CreateContainer within sandbox \"0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"27a1390e1ead2571debd3dbccd15a11676b066caa3ea4cb4f39580403ef5e4f0\""
Mar 17 17:56:34.415704 containerd[1749]: time="2025-03-17T17:56:34.415666751Z" level=info msg="StartContainer for \"27a1390e1ead2571debd3dbccd15a11676b066caa3ea4cb4f39580403ef5e4f0\""
Mar 17 17:56:34.448614 systemd[1]: Started cri-containerd-27a1390e1ead2571debd3dbccd15a11676b066caa3ea4cb4f39580403ef5e4f0.scope - libcontainer container 27a1390e1ead2571debd3dbccd15a11676b066caa3ea4cb4f39580403ef5e4f0.
Mar 17 17:56:34.475182 systemd[1]: cri-containerd-27a1390e1ead2571debd3dbccd15a11676b066caa3ea4cb4f39580403ef5e4f0.scope: Deactivated successfully.
Mar 17 17:56:34.479571 containerd[1749]: time="2025-03-17T17:56:34.479419453Z" level=info msg="StartContainer for \"27a1390e1ead2571debd3dbccd15a11676b066caa3ea4cb4f39580403ef5e4f0\" returns successfully"
Mar 17 17:56:34.516528 containerd[1749]: time="2025-03-17T17:56:34.516427913Z" level=info msg="shim disconnected" id=27a1390e1ead2571debd3dbccd15a11676b066caa3ea4cb4f39580403ef5e4f0 namespace=k8s.io
Mar 17 17:56:34.516528 containerd[1749]: time="2025-03-17T17:56:34.516508913Z" level=warning msg="cleaning up after shim disconnected" id=27a1390e1ead2571debd3dbccd15a11676b066caa3ea4cb4f39580403ef5e4f0 namespace=k8s.io
Mar 17 17:56:34.516528 containerd[1749]: time="2025-03-17T17:56:34.516520553Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:34.717738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27a1390e1ead2571debd3dbccd15a11676b066caa3ea4cb4f39580403ef5e4f0-rootfs.mount: Deactivated successfully.
Mar 17 17:56:35.377756 containerd[1749]: time="2025-03-17T17:56:35.377624019Z" level=info msg="CreateContainer within sandbox \"0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:56:35.417030 containerd[1749]: time="2025-03-17T17:56:35.416852322Z" level=info msg="CreateContainer within sandbox \"0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"30a1efe0a91d55c54759bbda517b5b87425d595ac420867438708adb837e1184\""
Mar 17 17:56:35.417558 containerd[1749]: time="2025-03-17T17:56:35.417322883Z" level=info msg="StartContainer for \"30a1efe0a91d55c54759bbda517b5b87425d595ac420867438708adb837e1184\""
Mar 17 17:56:35.442717 systemd[1]: Started cri-containerd-30a1efe0a91d55c54759bbda517b5b87425d595ac420867438708adb837e1184.scope - libcontainer container 30a1efe0a91d55c54759bbda517b5b87425d595ac420867438708adb837e1184.
Mar 17 17:56:35.461239 systemd[1]: cri-containerd-30a1efe0a91d55c54759bbda517b5b87425d595ac420867438708adb837e1184.scope: Deactivated successfully.
Mar 17 17:56:35.467811 containerd[1749]: time="2025-03-17T17:56:35.467765404Z" level=info msg="StartContainer for \"30a1efe0a91d55c54759bbda517b5b87425d595ac420867438708adb837e1184\" returns successfully"
Mar 17 17:56:35.494570 containerd[1749]: time="2025-03-17T17:56:35.494508687Z" level=info msg="shim disconnected" id=30a1efe0a91d55c54759bbda517b5b87425d595ac420867438708adb837e1184 namespace=k8s.io
Mar 17 17:56:35.494570 containerd[1749]: time="2025-03-17T17:56:35.494564208Z" level=warning msg="cleaning up after shim disconnected" id=30a1efe0a91d55c54759bbda517b5b87425d595ac420867438708adb837e1184 namespace=k8s.io
Mar 17 17:56:35.494570 containerd[1749]: time="2025-03-17T17:56:35.494572888Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:35.717750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30a1efe0a91d55c54759bbda517b5b87425d595ac420867438708adb837e1184-rootfs.mount: Deactivated successfully.
Mar 17 17:56:36.384276 containerd[1749]: time="2025-03-17T17:56:36.383903062Z" level=info msg="CreateContainer within sandbox \"0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:56:36.418792 containerd[1749]: time="2025-03-17T17:56:36.418701448Z" level=info msg="CreateContainer within sandbox \"0acac2712e01526422d0b7c3b2cab9ddb44baa7adf1a78ff9f38a9ce6336dcf0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c75db5654a659c69caefa8a6862551bdb039d476ac31cbb6671278cdfc0cecfe\""
Mar 17 17:56:36.419973 containerd[1749]: time="2025-03-17T17:56:36.419599167Z" level=info msg="StartContainer for \"c75db5654a659c69caefa8a6862551bdb039d476ac31cbb6671278cdfc0cecfe\""
Mar 17 17:56:36.447624 systemd[1]: Started cri-containerd-c75db5654a659c69caefa8a6862551bdb039d476ac31cbb6671278cdfc0cecfe.scope - libcontainer container c75db5654a659c69caefa8a6862551bdb039d476ac31cbb6671278cdfc0cecfe.
Mar 17 17:56:36.475251 containerd[1749]: time="2025-03-17T17:56:36.475203265Z" level=info msg="StartContainer for \"c75db5654a659c69caefa8a6862551bdb039d476ac31cbb6671278cdfc0cecfe\" returns successfully"
Mar 17 17:56:37.008612 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 17 17:56:39.721019 systemd-networkd[1432]: lxc_health: Link UP
Mar 17 17:56:39.721560 systemd-networkd[1432]: lxc_health: Gained carrier
Mar 17 17:56:40.455566 systemd[1]: run-containerd-runc-k8s.io-c75db5654a659c69caefa8a6862551bdb039d476ac31cbb6671278cdfc0cecfe-runc.3Fy9nA.mount: Deactivated successfully.
Mar 17 17:56:40.787408 kubelet[3374]: I0317 17:56:40.786799 3374 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jnhxh" podStartSLOduration=8.786786538 podStartE2EDuration="8.786786538s" podCreationTimestamp="2025-03-17 17:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:37.404566016 +0000 UTC m=+220.591049541" watchObservedRunningTime="2025-03-17 17:56:40.786786538 +0000 UTC m=+223.973270023"
Mar 17 17:56:41.119647 systemd-networkd[1432]: lxc_health: Gained IPv6LL
Mar 17 17:56:46.936855 sshd[5371]: Connection closed by 10.200.16.10 port 39152
Mar 17 17:56:46.937471 sshd-session[5323]: pam_unix(sshd:session): session closed for user core
Mar 17 17:56:46.940240 systemd[1]: sshd@28-10.200.20.19:22-10.200.16.10:39152.service: Deactivated successfully.
Mar 17 17:56:46.942543 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 17:56:46.944740 systemd-logind[1714]: Session 29 logged out. Waiting for processes to exit.
Mar 17 17:56:46.946125 systemd-logind[1714]: Removed session 29.