Feb 13 15:31:57.383068 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:31:57.383095 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025
Feb 13 15:31:57.383103 kernel: KASLR enabled
Feb 13 15:31:57.383110 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 15:31:57.383117 kernel: printk: bootconsole [pl11] enabled
Feb 13 15:31:57.383123 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:31:57.383130 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Feb 13 15:31:57.383136 kernel: random: crng init done
Feb 13 15:31:57.383142 kernel: secureboot: Secure boot disabled
Feb 13 15:31:57.383148 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:31:57.383154 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 15:31:57.383160 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:31:57.383166 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:31:57.383174 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Feb 13 15:31:57.383181 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:31:57.383187 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:31:57.383194 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:31:57.383202 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:31:57.383208 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:31:57.383214 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:31:57.383220 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 15:31:57.383227 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:31:57.383233 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 15:31:57.383239 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 15:31:57.383245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 15:31:57.383251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 15:31:57.383257 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 15:31:57.383264 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 15:31:57.383271 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 15:31:57.383278 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 15:31:57.383284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 15:31:57.383290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 15:31:57.383296 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 15:31:57.383302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 15:31:57.383308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 15:31:57.383315 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Feb 13 15:31:57.383321 kernel: Zone ranges:
Feb 13 15:31:57.383327 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 15:31:57.383333 kernel: DMA32 empty
Feb 13 15:31:57.383340 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 15:31:57.383351 kernel: Movable zone start for each node
Feb 13 15:31:57.383358 kernel: Early memory node ranges
Feb 13 15:31:57.383364 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 15:31:57.383371 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Feb 13 15:31:57.383378 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Feb 13 15:31:57.385465 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Feb 13 15:31:57.385475 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 15:31:57.385482 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 15:31:57.385489 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 15:31:57.385496 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 15:31:57.385503 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 15:31:57.385511 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 15:31:57.385518 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 15:31:57.385525 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:31:57.385531 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:31:57.385538 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:31:57.385545 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 15:31:57.385553 kernel: psci: SMC Calling Convention v1.4
Feb 13 15:31:57.385560 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 15:31:57.385566 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 15:31:57.385573 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:31:57.385580 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:31:57.385587 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:31:57.385594 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:31:57.385600 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:31:57.385607 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:31:57.385614 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:31:57.385620 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:31:57.385629 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:31:57.385635 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:31:57.385642 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 15:31:57.385649 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:31:57.385655 kernel: alternatives: applying boot alternatives
Feb 13 15:31:57.385663 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:31:57.385671 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:31:57.385678 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:31:57.385684 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:31:57.385691 kernel: Fallback order for Node 0: 0
Feb 13 15:31:57.385697 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 13 15:31:57.385706 kernel: Policy zone: Normal
Feb 13 15:31:57.385718 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:31:57.385726 kernel: software IO TLB: area num 2.
Feb 13 15:31:57.385732 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB)
Feb 13 15:31:57.385740 kernel: Memory: 3983656K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 15:31:57.385746 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:31:57.385754 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:31:57.385761 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:31:57.385768 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:31:57.385775 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:31:57.385782 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:31:57.385791 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:31:57.385798 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:31:57.385804 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:31:57.385811 kernel: GICv3: 960 SPIs implemented
Feb 13 15:31:57.385818 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:31:57.385824 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:31:57.385831 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:31:57.385837 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 15:31:57.385844 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 15:31:57.385851 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:31:57.385857 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:31:57.385867 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:31:57.385875 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:31:57.385882 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:31:57.385892 kernel: Console: colour dummy device 80x25
Feb 13 15:31:57.385900 kernel: printk: console [tty1] enabled
Feb 13 15:31:57.385906 kernel: ACPI: Core revision 20230628
Feb 13 15:31:57.385913 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:31:57.385921 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:31:57.385927 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:31:57.385934 kernel: landlock: Up and running.
Feb 13 15:31:57.385943 kernel: SELinux: Initializing.
Feb 13 15:31:57.385950 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:31:57.385957 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:31:57.385964 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:31:57.385971 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:31:57.385978 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 15:31:57.385985 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 15:31:57.385999 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 15:31:57.386006 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:31:57.386014 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:31:57.386021 kernel: Remapping and enabling EFI services.
Feb 13 15:31:57.386028 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:31:57.386037 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:31:57.386045 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 15:31:57.386052 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:31:57.386059 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:31:57.386066 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:31:57.386075 kernel: SMP: Total of 2 processors activated.
Feb 13 15:31:57.386082 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:31:57.386089 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 15:31:57.386097 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:31:57.386104 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:31:57.386112 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:31:57.386119 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:31:57.386126 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:31:57.386133 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:31:57.386142 kernel: alternatives: applying system-wide alternatives
Feb 13 15:31:57.386149 kernel: devtmpfs: initialized
Feb 13 15:31:57.386157 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:31:57.386164 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:31:57.386171 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:31:57.386178 kernel: SMBIOS 3.1.0 present.
Feb 13 15:31:57.386186 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 15:31:57.386193 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:31:57.386200 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:31:57.386210 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:31:57.386218 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:31:57.386225 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:31:57.386233 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 15:31:57.386240 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:31:57.386247 kernel: cpuidle: using governor menu
Feb 13 15:31:57.386255 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:31:57.386262 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:31:57.386269 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:31:57.386278 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:31:57.386285 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:31:57.386292 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:31:57.386299 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 15:31:57.386306 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:31:57.386314 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:31:57.386321 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:31:57.386328 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:31:57.386335 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:31:57.386344 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:31:57.386351 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:31:57.386359 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:31:57.386366 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:31:57.386373 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:31:57.386380 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:31:57.386400 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:31:57.386408 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:31:57.386415 kernel: ACPI: Interpreter enabled
Feb 13 15:31:57.386426 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:31:57.386433 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:31:57.386440 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:31:57.386448 kernel: printk: bootconsole [pl11] disabled
Feb 13 15:31:57.386455 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 15:31:57.386462 kernel: iommu: Default domain type: Translated
Feb 13 15:31:57.386469 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:31:57.386476 kernel: efivars: Registered efivars operations
Feb 13 15:31:57.386483 kernel: vgaarb: loaded
Feb 13 15:31:57.386492 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:31:57.386499 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:31:57.386507 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:31:57.386514 kernel: pnp: PnP ACPI init
Feb 13 15:31:57.386528 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 15:31:57.386536 kernel: NET: Registered PF_INET protocol family
Feb 13 15:31:57.386543 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:31:57.386550 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:31:57.386557 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:31:57.386567 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:31:57.386574 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:31:57.386582 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:31:57.386589 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:31:57.386596 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:31:57.386604 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:31:57.386611 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:31:57.386618 kernel: kvm [1]: HYP mode not available
Feb 13 15:31:57.386625 kernel: Initialise system trusted keyrings
Feb 13 15:31:57.386637 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:31:57.386644 kernel: Key type asymmetric registered
Feb 13 15:31:57.386651 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:31:57.386658 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:31:57.386668 kernel: io scheduler mq-deadline registered
Feb 13 15:31:57.386675 kernel: io scheduler kyber registered
Feb 13 15:31:57.386682 kernel: io scheduler bfq registered
Feb 13 15:31:57.386690 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:31:57.386700 kernel: thunder_xcv, ver 1.0
Feb 13 15:31:57.386709 kernel: thunder_bgx, ver 1.0
Feb 13 15:31:57.386716 kernel: nicpf, ver 1.0
Feb 13 15:31:57.386723 kernel: nicvf, ver 1.0
Feb 13 15:31:57.386916 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:31:57.387007 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:31:56 UTC (1739460716)
Feb 13 15:31:57.387017 kernel: efifb: probing for efifb
Feb 13 15:31:57.387025 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 15:31:57.387032 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 15:31:57.387042 kernel: efifb: scrolling: redraw
Feb 13 15:31:57.387050 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:31:57.387057 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 15:31:57.387064 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:31:57.387072 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 15:31:57.387079 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:31:57.387086 kernel: No ACPI PMU IRQ for CPU0
Feb 13 15:31:57.387093 kernel: No ACPI PMU IRQ for CPU1
Feb 13 15:31:57.387101 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 15:31:57.387110 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:31:57.387117 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:31:57.387124 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:31:57.387131 kernel: Segment Routing with IPv6
Feb 13 15:31:57.387138 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:31:57.387146 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:31:57.387153 kernel: Key type dns_resolver registered
Feb 13 15:31:57.387160 kernel: registered taskstats version 1
Feb 13 15:31:57.387167 kernel: Loading compiled-in X.509 certificates
Feb 13 15:31:57.387176 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4'
Feb 13 15:31:57.387183 kernel: Key type .fscrypt registered
Feb 13 15:31:57.387190 kernel: Key type fscrypt-provisioning registered
Feb 13 15:31:57.387198 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:31:57.387205 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:31:57.387212 kernel: ima: No architecture policies found
Feb 13 15:31:57.387219 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:31:57.387226 kernel: clk: Disabling unused clocks
Feb 13 15:31:57.387233 kernel: Freeing unused kernel memory: 38336K
Feb 13 15:31:57.387242 kernel: Run /init as init process
Feb 13 15:31:57.387249 kernel: with arguments:
Feb 13 15:31:57.387256 kernel: /init
Feb 13 15:31:57.387263 kernel: with environment:
Feb 13 15:31:57.387270 kernel: HOME=/
Feb 13 15:31:57.387277 kernel: TERM=linux
Feb 13 15:31:57.387284 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:31:57.387293 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:31:57.387306 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:31:57.387314 systemd[1]: Detected virtualization microsoft.
Feb 13 15:31:57.387322 systemd[1]: Detected architecture arm64.
Feb 13 15:31:57.387329 systemd[1]: Running in initrd.
Feb 13 15:31:57.387337 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:31:57.387345 systemd[1]: Hostname set to .
Feb 13 15:31:57.387353 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:31:57.387360 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:31:57.387370 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:31:57.387378 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:31:57.389449 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:31:57.389469 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:31:57.389478 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:31:57.389488 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:31:57.389498 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:31:57.389513 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:31:57.389521 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:31:57.389529 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:31:57.389537 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:31:57.389545 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:31:57.389552 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:31:57.389560 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:31:57.389568 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:31:57.389578 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:31:57.389586 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:31:57.389593 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:31:57.389601 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:31:57.389609 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:31:57.389617 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:31:57.389624 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:31:57.389632 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:31:57.389640 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:31:57.389649 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:31:57.389657 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:31:57.389665 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:31:57.389673 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:31:57.389716 systemd-journald[218]: Collecting audit messages is disabled.
Feb 13 15:31:57.389738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:31:57.389748 systemd-journald[218]: Journal started
Feb 13 15:31:57.389767 systemd-journald[218]: Runtime Journal (/run/log/journal/ed7866f8562146daaf69a411254030f9) is 8M, max 78.5M, 70.5M free.
Feb 13 15:31:57.402581 systemd-modules-load[220]: Inserted module 'overlay'
Feb 13 15:31:57.417843 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:31:57.424816 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:31:57.468732 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:31:57.468763 kernel: Bridge firewalling registered
Feb 13 15:31:57.460466 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:31:57.465602 systemd-modules-load[220]: Inserted module 'br_netfilter'
Feb 13 15:31:57.477014 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:31:57.491094 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:31:57.505252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:31:57.538790 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:31:57.553317 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:31:57.565851 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:31:57.593638 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:31:57.615874 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:31:57.625102 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:31:57.641437 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:31:57.663252 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:31:57.685564 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:31:57.699669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:31:57.716755 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:31:57.735112 dracut-cmdline[251]: dracut-dracut-053
Feb 13 15:31:57.735112 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:31:57.756134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:31:57.820654 systemd-resolved[256]: Positive Trust Anchors:
Feb 13 15:31:57.820676 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:31:57.820707 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:31:57.823651 systemd-resolved[256]: Defaulting to hostname 'linux'.
Feb 13 15:31:57.826138 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:31:57.839616 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:31:57.933419 kernel: SCSI subsystem initialized
Feb 13 15:31:57.941412 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:31:57.951412 kernel: iscsi: registered transport (tcp)
Feb 13 15:31:57.971323 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:31:57.971375 kernel: QLogic iSCSI HBA Driver
Feb 13 15:31:58.012354 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:31:58.035582 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:31:58.067011 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:31:58.067072 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:31:58.073449 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:31:58.124410 kernel: raid6: neonx8 gen() 15755 MB/s
Feb 13 15:31:58.144400 kernel: raid6: neonx4 gen() 15820 MB/s
Feb 13 15:31:58.164395 kernel: raid6: neonx2 gen() 13287 MB/s
Feb 13 15:31:58.185401 kernel: raid6: neonx1 gen() 10535 MB/s
Feb 13 15:31:58.205397 kernel: raid6: int64x8 gen() 6792 MB/s
Feb 13 15:31:58.225395 kernel: raid6: int64x4 gen() 7347 MB/s
Feb 13 15:31:58.246394 kernel: raid6: int64x2 gen() 6115 MB/s
Feb 13 15:31:58.271671 kernel: raid6: int64x1 gen() 5062 MB/s
Feb 13 15:31:58.271682 kernel: raid6: using algorithm neonx4 gen() 15820 MB/s
Feb 13 15:31:58.295747 kernel: raid6: .... xor() 12440 MB/s, rmw enabled
Feb 13 15:31:58.295828 kernel: raid6: using neon recovery algorithm
Feb 13 15:31:58.308107 kernel: xor: measuring software checksum speed
Feb 13 15:31:58.308171 kernel: 8regs : 21647 MB/sec
Feb 13 15:31:58.316387 kernel: 32regs : 20277 MB/sec
Feb 13 15:31:58.316414 kernel: arm64_neon : 27823 MB/sec
Feb 13 15:31:58.320898 kernel: xor: using function: arm64_neon (27823 MB/sec)
Feb 13 15:31:58.372411 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:31:58.384319 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:31:58.400534 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:31:58.427777 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Feb 13 15:31:58.433868 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:31:58.454535 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:31:58.482296 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Feb 13 15:31:58.512658 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:31:58.531730 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:31:58.574243 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:31:58.596583 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:31:58.620667 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:31:58.638557 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:31:58.649640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:31:58.668802 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:31:58.692474 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:31:58.717139 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:31:58.747429 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 15:31:58.752127 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:31:58.752294 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:31:58.789517 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 15:31:58.789541 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 15:31:58.789551 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Feb 13 15:31:58.770748 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:31:58.833603 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 15:31:58.833932 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Feb 13 15:31:58.833947 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 15:31:58.810946 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:31:58.857277 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 15:31:58.857306 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 15:31:58.811187 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:31:58.885578 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 15:31:58.885604 kernel: scsi host1: storvsc_host_t
Feb 13 15:31:58.885794 kernel: scsi host0: storvsc_host_t
Feb 13 15:31:58.885888 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 15:31:58.871668 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:31:58.907506 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 15:31:58.910023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:31:58.947926 kernel: PTP clock support registered
Feb 13 15:31:58.947951 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 15:31:58.947960 kernel: hv_vmbus: registering driver hv_utils
Feb 13 15:31:58.947969 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 15:31:58.947979 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 15:31:58.934852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:31:59.131422 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 15:31:59.120417 systemd-resolved[256]: Clock change detected. Flushing caches.
Feb 13 15:31:59.158089 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 15:31:59.171372 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:31:59.171393 kernel: hv_netvsc 002248b5-c392-0022-48b5-c392002248b5 eth0: VF slot 1 added
Feb 13 15:31:59.171540 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 15:31:59.130128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:31:59.189766 kernel: hv_vmbus: registering driver hv_pci
Feb 13 15:31:59.131075 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:31:59.206118 kernel: hv_pci 10571b25-2b8f-4404-a74d-bdc52038b8da: PCI VMBus probing: Using version 0x10004
Feb 13 15:31:59.315689 kernel: hv_pci 10571b25-2b8f-4404-a74d-bdc52038b8da: PCI host bridge to bus 2b8f:00
Feb 13 15:31:59.315813 kernel: pci_bus 2b8f:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 13 15:31:59.315927 kernel: pci_bus 2b8f:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 15:31:59.316006 kernel: pci 2b8f:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 13 15:31:59.316106 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 15:31:59.326660 kernel: pci 2b8f:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 15:31:59.326797 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 15:31:59.326891 kernel: pci 2b8f:00:02.0: enabling Extended Tags
Feb 13 15:31:59.326977 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 15:31:59.327067 kernel: pci 2b8f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2b8f:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 15:31:59.327150 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 15:31:59.327232 kernel: pci_bus 2b8f:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 15:31:59.327356 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 15:31:59.327448 kernel: pci 2b8f:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 15:31:59.327537 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:31:59.327546 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 15:31:59.158373 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:31:59.182573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:31:59.220900 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:31:59.253195 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:31:59.344545 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:31:59.395212 kernel: mlx5_core 2b8f:00:02.0: enabling device (0000 -> 0002)
Feb 13 15:31:59.615702 kernel: mlx5_core 2b8f:00:02.0: firmware version: 16.30.1284
Feb 13 15:31:59.615831 kernel: hv_netvsc 002248b5-c392-0022-48b5-c392002248b5 eth0: VF registering: eth1
Feb 13 15:31:59.615925 kernel: mlx5_core 2b8f:00:02.0 eth1: joined to eth0
Feb 13 15:31:59.616020 kernel: mlx5_core 2b8f:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Feb 13 15:31:59.623328 kernel: mlx5_core 2b8f:00:02.0 enP11151s1: renamed from eth1
Feb 13 15:31:59.917451 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 15:32:00.004290 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (486)
Feb 13 15:32:00.021433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 15:32:00.085937 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 15:32:00.118818 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (502)
Feb 13 15:32:00.135475 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 15:32:00.142839 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 15:32:00.171524 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:32:00.198319 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:32:00.206328 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:32:01.215314 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:32:01.215813 disk-uuid[604]: The operation has completed successfully.
Feb 13 15:32:01.284603 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:32:01.289577 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:32:01.340453 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:32:01.356869 sh[690]: Success
Feb 13 15:32:01.412354 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:32:01.654915 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:32:01.681461 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:32:01.687036 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:32:01.722291 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855
Feb 13 15:32:01.722348 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:32:01.722360 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:32:01.735147 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:32:01.739461 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:32:02.234060 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:32:02.239576 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:32:02.264566 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:32:02.273558 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:32:02.311293 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:32:02.311338 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:32:02.316030 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:32:02.343299 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:32:02.353515 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:32:02.365636 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:32:02.371665 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:32:02.386873 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:32:02.443564 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:32:02.463433 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:32:02.496104 systemd-networkd[875]: lo: Link UP
Feb 13 15:32:02.496114 systemd-networkd[875]: lo: Gained carrier
Feb 13 15:32:02.499180 systemd-networkd[875]: Enumeration completed
Feb 13 15:32:02.499653 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:32:02.506255 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:02.506259 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:32:02.506793 systemd[1]: Reached target network.target - Network.
Feb 13 15:32:02.596300 kernel: mlx5_core 2b8f:00:02.0 enP11151s1: Link up
Feb 13 15:32:02.635524 kernel: hv_netvsc 002248b5-c392-0022-48b5-c392002248b5 eth0: Data path switched to VF: enP11151s1
Feb 13 15:32:02.635151 systemd-networkd[875]: enP11151s1: Link UP
Feb 13 15:32:02.635237 systemd-networkd[875]: eth0: Link UP
Feb 13 15:32:02.635364 systemd-networkd[875]: eth0: Gained carrier
Feb 13 15:32:02.635374 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:32:02.643434 systemd-networkd[875]: enP11151s1: Gained carrier
Feb 13 15:32:02.668336 systemd-networkd[875]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 15:32:03.781422 systemd-networkd[875]: enP11151s1: Gained IPv6LL
Feb 13 15:32:03.842602 ignition[810]: Ignition 2.20.0
Feb 13 15:32:03.842616 ignition[810]: Stage: fetch-offline
Feb 13 15:32:03.847801 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:32:03.842656 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:03.842665 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:32:03.842765 ignition[810]: parsed url from cmdline: ""
Feb 13 15:32:03.842768 ignition[810]: no config URL provided
Feb 13 15:32:03.842773 ignition[810]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:32:03.875566 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:32:03.842780 ignition[810]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:32:03.842785 ignition[810]: failed to fetch config: resource requires networking
Feb 13 15:32:03.842970 ignition[810]: Ignition finished successfully
Feb 13 15:32:03.899392 ignition[885]: Ignition 2.20.0
Feb 13 15:32:03.899401 ignition[885]: Stage: fetch
Feb 13 15:32:03.899587 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:03.899598 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:32:03.899690 ignition[885]: parsed url from cmdline: ""
Feb 13 15:32:03.899693 ignition[885]: no config URL provided
Feb 13 15:32:03.899699 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:32:03.899708 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:32:03.899736 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 15:32:03.992861 ignition[885]: GET result: OK
Feb 13 15:32:03.992884 ignition[885]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty)
Feb 13 15:32:04.017330 ignition[885]: opening config device: "/dev/sr0"
Feb 13 15:32:04.017702 ignition[885]: getting drive status for "/dev/sr0"
Feb 13 15:32:04.017759 ignition[885]: drive status: OK
Feb 13 15:32:04.017795 ignition[885]: mounting config device
Feb 13 15:32:04.017802 ignition[885]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure899094743"
Feb 13 15:32:04.038290 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2025/02/14 00:00 (1000)
Feb 13 15:32:04.038335 ignition[885]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure899094743"
Feb 13 15:32:04.038341 ignition[885]: checking for config drive
Feb 13 15:32:04.045099 ignition[885]: reading config
Feb 13 15:32:04.045463 ignition[885]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure899094743"
Feb 13 15:32:04.046261 ignition[885]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure899094743"
Feb 13 15:32:04.045953 systemd[1]: tmp-ignition\x2dazure899094743.mount: Deactivated successfully.
Feb 13 15:32:04.046334 ignition[885]: config has been read from custom data
Feb 13 15:32:04.050690 unknown[885]: fetched base config from "system"
Feb 13 15:32:04.046375 ignition[885]: parsing config with SHA512: 26234fcbfe4556af717ea2a6cf674b2b9db039bd0059c64186a6c1aec99ea651eb8cd32ee2bae3e9d9a3aa7c32c79eacdc67f8f96babf94facfd55fec4aef178
Feb 13 15:32:04.050697 unknown[885]: fetched base config from "system"
Feb 13 15:32:04.051069 ignition[885]: fetch: fetch complete
Feb 13 15:32:04.050703 unknown[885]: fetched user config from "azure"
Feb 13 15:32:04.051074 ignition[885]: fetch: fetch passed
Feb 13 15:32:04.053949 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:32:04.051113 ignition[885]: Ignition finished successfully
Feb 13 15:32:04.086546 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:32:04.103147 systemd-networkd[875]: eth0: Gained IPv6LL
Feb 13 15:32:04.126670 ignition[893]: Ignition 2.20.0
Feb 13 15:32:04.126677 ignition[893]: Stage: kargs
Feb 13 15:32:04.126865 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:04.133830 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:32:04.126874 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:32:04.127765 ignition[893]: kargs: kargs passed
Feb 13 15:32:04.127813 ignition[893]: Ignition finished successfully
Feb 13 15:32:04.158542 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:32:04.177840 ignition[900]: Ignition 2.20.0
Feb 13 15:32:04.177856 ignition[900]: Stage: disks
Feb 13 15:32:04.184391 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:32:04.178045 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:04.191327 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:32:04.178055 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:32:04.202536 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:32:04.179135 ignition[900]: disks: disks passed
Feb 13 15:32:04.214628 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:32:04.179183 ignition[900]: Ignition finished successfully
Feb 13 15:32:04.226551 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:32:04.240885 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:32:04.266595 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:32:04.355963 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 15:32:04.363845 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:32:04.382452 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:32:04.441322 kernel: EXT4-fs (sda9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none.
Feb 13 15:32:04.441850 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:32:04.447101 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:32:04.488358 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:32:04.496458 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:32:04.524313 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919)
Feb 13 15:32:04.524340 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:32:04.514522 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:32:04.548119 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:32:04.548177 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:32:04.541155 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:32:04.541203 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:32:04.578559 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:32:04.581088 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:32:04.590862 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:32:04.607601 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:32:05.740432 coreos-metadata[921]: Feb 13 15:32:05.740 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 15:32:05.748738 coreos-metadata[921]: Feb 13 15:32:05.748 INFO Fetch successful
Feb 13 15:32:05.748738 coreos-metadata[921]: Feb 13 15:32:05.748 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 15:32:05.768669 coreos-metadata[921]: Feb 13 15:32:05.767 INFO Fetch successful
Feb 13 15:32:05.768669 coreos-metadata[921]: Feb 13 15:32:05.767 INFO wrote hostname ci-4230.0.1-a-cf53dd3440 to /sysroot/etc/hostname
Feb 13 15:32:05.770663 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:32:06.388852 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:32:06.442536 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:32:06.451548 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:32:06.460893 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:32:07.784763 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:32:07.798768 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:32:07.808503 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:32:07.832285 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:32:07.819978 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:32:07.862674 ignition[1038]: INFO : Ignition 2.20.0
Feb 13 15:32:07.862674 ignition[1038]: INFO : Stage: mount
Feb 13 15:32:07.862674 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:32:07.862674 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:32:07.897217 ignition[1038]: INFO : mount: mount passed
Feb 13 15:32:07.897217 ignition[1038]: INFO : Ignition finished successfully
Feb 13 15:32:07.862685 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:32:07.876307 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:32:07.912483 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:32:07.935770 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:32:07.984471 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1051)
Feb 13 15:32:07.984539 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:32:07.990956 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:32:07.995661 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:32:08.002278 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:32:08.003972 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:32:08.032946 ignition[1068]: INFO : Ignition 2.20.0 Feb 13 15:32:08.032946 ignition[1068]: INFO : Stage: files Feb 13 15:32:08.032946 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:32:08.032946 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:32:08.054691 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:32:08.054691 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:32:08.054691 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:32:08.190259 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:32:08.198235 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:32:08.198235 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:32:08.191789 unknown[1068]: wrote ssh authorized keys file for user: core Feb 13 15:32:08.228916 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 15:32:08.241225 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Feb 13 15:32:08.430881 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:32:08.565260 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 15:32:08.576587 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:32:08.576587 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 13 15:32:09.036567 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:32:09.158329 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 15:32:09.168394 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 15:32:09.600991 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:32:09.880934 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 15:32:09.880934 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:32:09.902416 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:32:09.902416 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:32:09.902416 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:32:09.902416 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:32:09.902416 ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:32:09.902416 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:32:09.902416 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:32:09.902416 ignition[1068]: INFO : files: files passed Feb 13 15:32:09.902416 ignition[1068]: INFO : Ignition finished successfully Feb 13 15:32:09.896062 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:32:09.933568 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:32:09.949517 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:32:09.973246 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:32:10.079996 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:32:10.079996 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:32:09.973411 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
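The files stage above is driven entirely by a single Ignition config: a user and SSH key (ops 1-2), fetched and written files (ops 3-b), a symlink (op a), a unit (ops c-d), and a preset (op e). A sketch of what such a config could look like, written as Python that emits Ignition spec-3.x JSON; the helm URL and the kubernetes.raw link target are copied from the log, while the SSH key and the unit body are illustrative placeholders, not values recovered from this machine:

    #!/usr/bin/env python3
    # Assemble a minimal Ignition v3 config covering the operations logged
    # above: a user with an SSH key, a fetched file, a symlink, and a unit.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [
                # Placeholder key; the log only says a key was written for "core".
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"]}
            ]
        },
        "storage": {
            "files": [
                {
                    # URL taken verbatim from the GET in op(3) above; Ignition
                    # writes it under the /sysroot prefix during the initrd.
                    "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"},
                }
            ],
            "links": [
                {
                    # The link written by op(a) above.
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                {
                    # Unit body is a placeholder; the log records only that
                    # prepare-helm.service was written and preset to enabled.
                    "name": "prepare-helm.service",
                    "enabled": True,
                    "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n\n"
                                "[Service]\nType=oneshot\nExecStart=/usr/bin/tar"
                                " -C /opt/bin -xzf /opt/helm-v3.17.0-linux-arm64.tar.gz\n\n"
                                "[Install]\nWantedBy=multi-user.target\n",
                }
            ]
        },
    }

    print(json.dumps(config, indent=2))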
Feb 13 15:32:10.105556 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:32:10.007892 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:32:10.017190 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:32:10.041550 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:32:10.091933 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:32:10.092029 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:32:10.114093 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:32:10.130772 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:32:10.145399 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:32:10.176438 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:32:10.201149 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:32:10.232568 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:32:10.253447 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:32:10.261008 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:32:10.275962 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:32:10.288625 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:32:10.288774 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:32:10.306506 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:32:10.313539 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:32:10.328185 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:32:10.340584 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:32:10.352708 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:32:10.365327 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:32:10.377449 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:32:10.393185 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:32:10.405221 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:32:10.417883 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:32:10.429222 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:32:10.429366 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:32:10.446501 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:32:10.453511 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:32:10.465737 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:32:10.471085 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:32:10.480318 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:32:10.480469 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Feb 13 15:32:10.499068 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:32:10.499218 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:32:10.507573 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:32:10.507684 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:32:10.532856 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:32:10.532995 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:32:10.635532 ignition[1121]: INFO : Ignition 2.20.0 Feb 13 15:32:10.635532 ignition[1121]: INFO : Stage: umount Feb 13 15:32:10.635532 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:32:10.635532 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:32:10.635532 ignition[1121]: INFO : umount: umount passed Feb 13 15:32:10.635532 ignition[1121]: INFO : Ignition finished successfully Feb 13 15:32:10.559570 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:32:10.577580 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:32:10.588647 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:32:10.588872 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:32:10.608805 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:32:10.609052 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:32:10.629156 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:32:10.629313 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:32:10.642637 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:32:10.642770 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:32:10.653352 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:32:10.653461 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:32:10.664245 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:32:10.664327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:32:10.676626 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:32:10.676690 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:32:10.690528 systemd[1]: Stopped target network.target - Network. Feb 13 15:32:10.701306 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:32:10.701398 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:32:10.714143 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:32:10.724097 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:32:10.732314 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:32:10.743004 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:32:10.756132 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:32:10.767995 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:32:10.768063 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:32:10.782773 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:32:10.782826 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Feb 13 15:32:10.796439 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:32:10.796522 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:32:10.809639 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:32:10.809692 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:32:10.825166 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:32:10.840232 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:32:11.091145 kernel: hv_netvsc 002248b5-c392-0022-48b5-c392002248b5 eth0: Data path switched from VF: enP11151s1 Feb 13 15:32:10.858026 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:32:10.858140 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:32:10.877989 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 15:32:10.878340 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:32:10.878473 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:32:10.895925 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 15:32:10.896933 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:32:10.897005 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:32:10.926464 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:32:10.938387 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:32:10.938486 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:32:10.953692 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:32:10.953751 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:32:10.968900 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:32:10.968971 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:32:10.975224 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:32:10.975294 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:32:10.994143 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:32:11.002206 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 15:32:11.002305 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:32:11.022313 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:32:11.026524 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:32:11.026934 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:32:11.036924 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:32:11.037014 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:32:11.043609 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:32:11.043693 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:32:11.054756 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:32:11.054794 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:32:11.067283 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Feb 13 15:32:11.067390 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:32:11.102329 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:32:11.102399 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:32:11.121804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:32:11.121878 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:32:11.142008 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:32:11.142085 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:32:11.177552 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:32:11.194574 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:32:11.194670 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:32:11.219685 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:32:11.219757 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:32:11.228063 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:32:11.228128 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:32:11.241640 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:32:11.241690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:32:11.524117 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Feb 13 15:32:11.264728 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 15:32:11.264804 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:32:11.265180 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:32:11.265320 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:32:11.277357 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:32:11.279292 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:32:11.295657 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:32:11.339604 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:32:11.367315 systemd[1]: Switching root. Feb 13 15:32:11.586223 systemd-journald[218]: Journal stopped Feb 13 15:32:24.287869 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:32:24.287895 kernel: SELinux: policy capability open_perms=1 Feb 13 15:32:24.287905 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:32:24.287913 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:32:24.287923 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:32:24.287930 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:32:24.287939 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:32:24.287947 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:32:24.287954 kernel: audit: type=1403 audit(1739460732.817:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:32:24.287964 systemd[1]: Successfully loaded SELinux policy in 189.535ms. Feb 13 15:32:24.287975 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.345ms. 
Feb 13 15:32:24.287985 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:32:24.287993 systemd[1]: Detected virtualization microsoft. Feb 13 15:32:24.288002 systemd[1]: Detected architecture arm64. Feb 13 15:32:24.288010 systemd[1]: Detected first boot. Feb 13 15:32:24.288021 systemd[1]: Hostname set to <ci-4230.0.1-a-cf53dd3440>. Feb 13 15:32:24.288031 systemd[1]: Initializing machine ID from random generator. Feb 13 15:32:24.288039 zram_generator::config[1165]: No configuration found. Feb 13 15:32:24.288049 kernel: NET: Registered PF_VSOCK protocol family Feb 13 15:32:24.288057 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:32:24.288067 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 15:32:24.288077 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:32:24.288087 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:32:24.288095 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:32:24.288105 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:32:24.288114 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:32:24.288123 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:32:24.288131 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:32:24.288141 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:32:24.288154 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:32:24.288163 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:32:24.288172 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:32:24.288182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:32:24.288191 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:32:24.288200 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:32:24.288209 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:32:24.288218 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:32:24.288228 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:32:24.288237 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:32:24.288246 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:32:24.288258 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:32:24.288284 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:32:24.288295 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:32:24.288305 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:32:24.288315 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:32:24.288327 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:32:24.288337 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:32:24.288346 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:32:24.288355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:32:24.288364 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:32:24.288373 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 15:32:24.288385 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:32:24.288394 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:32:24.288404 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:32:24.288413 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:32:24.288422 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:32:24.288431 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:32:24.288441 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:32:24.288451 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:32:24.288461 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:32:24.288470 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:32:24.288480 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:32:24.288489 systemd[1]: Reached target machines.target - Containers. Feb 13 15:32:24.288499 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:32:24.288508 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:32:24.288519 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:32:24.288530 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:32:24.288539 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:32:24.288548 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:32:24.288558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:32:24.288567 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:32:24.288576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:32:24.288586 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:32:24.288595 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:32:24.288606 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:32:24.288616 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:32:24.288626 systemd[1]: Stopped systemd-fsck-usr.service. 
Feb 13 15:32:24.288636 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:32:24.288645 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:32:24.288654 kernel: loop: module loaded Feb 13 15:32:24.288662 kernel: fuse: init (API version 7.39) Feb 13 15:32:24.288671 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:32:24.288681 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:32:24.288692 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:32:24.288701 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:32:24.288711 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:32:24.288719 kernel: ACPI: bus type drm_connector registered Feb 13 15:32:24.288728 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:32:24.288737 systemd[1]: Stopped verity-setup.service. Feb 13 15:32:24.288747 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:32:24.288756 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:32:24.288767 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:32:24.288776 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:32:24.288786 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:32:24.288799 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:32:24.288808 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:32:24.288817 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:32:24.288826 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:32:24.288859 systemd-journald[1245]: Collecting audit messages is disabled. Feb 13 15:32:24.288881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:32:24.288892 systemd-journald[1245]: Journal started Feb 13 15:32:24.288912 systemd-journald[1245]: Runtime Journal (/run/log/journal/d7dce1c09c2b4e1faadfb201411827a0) is 8M, max 78.5M, 70.5M free. Feb 13 15:32:22.660826 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:32:22.669215 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:32:22.669621 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:32:22.669982 systemd[1]: systemd-journald.service: Consumed 3.624s CPU time. Feb 13 15:32:24.299054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:32:24.319117 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:32:24.320227 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:32:24.320464 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:32:24.327212 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:32:24.327428 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:32:24.335330 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:32:24.335517 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Feb 13 15:32:24.345821 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:32:24.346030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:32:24.354221 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:32:24.361693 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:32:24.376765 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:32:24.391436 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:32:24.401479 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:32:24.410870 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:32:24.415445 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:32:24.423556 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:32:24.431003 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:32:24.439030 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:32:24.456524 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:32:24.462964 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:32:24.472340 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:32:24.472409 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:32:24.480229 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:32:24.493486 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:32:24.501466 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:32:24.507717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:32:24.721524 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:32:24.742583 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:32:24.759383 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:32:24.768559 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:32:24.779557 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:32:24.793992 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:32:24.802146 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:32:24.809535 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:32:24.824505 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:32:24.841082 udevadm[1313]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:32:25.080069 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Feb 13 15:32:25.088164 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:32:25.099058 systemd-journald[1245]: Time spent on flushing to /var/log/journal/d7dce1c09c2b4e1faadfb201411827a0 is 16.985ms for 938 entries. Feb 13 15:32:25.099058 systemd-journald[1245]: System Journal (/var/log/journal/d7dce1c09c2b4e1faadfb201411827a0) is 8M, max 2.6G, 2.6G free. Feb 13 15:32:25.629238 systemd-journald[1245]: Received client request to flush runtime journal. Feb 13 15:32:25.629360 kernel: loop0: detected capacity change from 0 to 113512 Feb 13 15:32:25.108595 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:32:25.127918 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:32:25.198156 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Feb 13 15:32:25.198169 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Feb 13 15:32:25.203156 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:32:25.217491 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:32:25.631371 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:32:25.717009 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:32:25.736683 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:32:25.754589 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Feb 13 15:32:25.755044 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Feb 13 15:32:25.760509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:32:26.032737 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:32:26.037342 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:32:28.311317 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:32:28.352294 kernel: loop1: detected capacity change from 0 to 201592 Feb 13 15:32:28.721323 kernel: loop2: detected capacity change from 0 to 123192 Feb 13 15:32:29.888000 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:32:29.909503 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:32:29.934381 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Feb 13 15:32:30.449303 kernel: loop3: detected capacity change from 0 to 28720 Feb 13 15:32:30.494795 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:32:30.513876 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:32:30.566942 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:32:30.662848 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:32:30.736526 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:32:30.750458 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:32:30.781745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:32:30.794675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:32:30.794901 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:32:30.816472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:32:30.852294 kernel: hv_vmbus: registering driver hv_balloon Feb 13 15:32:30.862472 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 15:32:30.862593 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 13 15:32:30.894710 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 15:32:30.895146 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 15:32:30.895179 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 15:32:30.908470 kernel: Console: switching to colour dummy device 80x25 Feb 13 15:32:30.910947 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 15:32:30.919573 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:32:30.919788 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:32:30.927816 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:32:30.939559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:32:30.989051 systemd-networkd[1348]: lo: Link UP Feb 13 15:32:30.989060 systemd-networkd[1348]: lo: Gained carrier Feb 13 15:32:30.991591 systemd-networkd[1348]: Enumeration completed Feb 13 15:32:30.991832 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:32:30.992706 systemd-networkd[1348]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:32:30.992711 systemd-networkd[1348]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:32:31.006597 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:32:31.026601 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:32:31.047321 kernel: mlx5_core 2b8f:00:02.0 enP11151s1: Link up Feb 13 15:32:31.063330 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1340) Feb 13 15:32:31.084360 kernel: hv_netvsc 002248b5-c392-0022-48b5-c392002248b5 eth0: Data path switched to VF: enP11151s1 Feb 13 15:32:31.086569 systemd-networkd[1348]: enP11151s1: Link UP Feb 13 15:32:31.086726 systemd-networkd[1348]: eth0: Link UP Feb 13 15:32:31.086731 systemd-networkd[1348]: eth0: Gained carrier Feb 13 15:32:31.087625 systemd-networkd[1348]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:32:31.093682 systemd-networkd[1348]: enP11151s1: Gained carrier Feb 13 15:32:31.111352 systemd-networkd[1348]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 15:32:31.119019 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:32:31.164336 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 15:32:31.180880 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:32:31.189358 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:32:31.204964 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Feb 13 15:32:31.273344 kernel: loop4: detected capacity change from 0 to 113512 Feb 13 15:32:31.284302 kernel: loop5: detected capacity change from 0 to 201592 Feb 13 15:32:31.294291 kernel: loop6: detected capacity change from 0 to 123192 Feb 13 15:32:31.303298 kernel: loop7: detected capacity change from 0 to 28720 Feb 13 15:32:31.306883 (sd-merge)[1457]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 15:32:31.307389 (sd-merge)[1457]: Merged extensions into '/usr'. Feb 13 15:32:31.311360 systemd[1]: Reload requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:32:31.311381 systemd[1]: Reloading... Feb 13 15:32:31.400323 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:32:31.411309 zram_generator::config[1491]: No configuration found. Feb 13 15:32:31.574967 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:32:31.688383 systemd[1]: Reloading finished in 376 ms. Feb 13 15:32:31.707668 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:32:31.718326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:32:31.726481 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:32:31.734717 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:32:31.749413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:32:31.761811 systemd[1]: Starting ensure-sysext.service... Feb 13 15:32:31.770497 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:32:31.784306 lvm[1551]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:32:31.784552 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:32:31.810323 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:32:31.822692 systemd[1]: Reload requested from client PID 1550 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:32:31.822713 systemd[1]: Reloading... Feb 13 15:32:31.828353 systemd-tmpfiles[1552]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:32:31.829348 systemd-tmpfiles[1552]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:32:31.830080 systemd-tmpfiles[1552]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:32:31.830664 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. Feb 13 15:32:31.830872 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. Feb 13 15:32:31.835605 systemd-tmpfiles[1552]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:32:31.835800 systemd-tmpfiles[1552]: Skipping /boot Feb 13 15:32:31.849802 systemd-tmpfiles[1552]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:32:31.849817 systemd-tmpfiles[1552]: Skipping /boot Feb 13 15:32:31.924367 zram_generator::config[1585]: No configuration found. 
Feb 13 15:32:32.052150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:32:32.133404 systemd-networkd[1348]: enP11151s1: Gained IPv6LL Feb 13 15:32:32.167745 systemd[1]: Reloading finished in 344 ms. Feb 13 15:32:32.197345 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:32:32.220607 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:32:32.228130 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:32:32.237530 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:32:32.253453 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:32:32.263460 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:32:32.278369 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:32:32.280072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:32:32.303667 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:32:32.325756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:32:32.334091 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:32:32.334281 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:32:32.337285 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:32:32.337512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:32:32.346126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:32:32.346398 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:32:32.355183 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:32:32.355589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:32:32.368804 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:32:32.376666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:32:32.387467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:32:32.396768 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:32:32.403915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:32:32.404149 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:32:32.406303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:32:32.408799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Feb 13 15:32:32.417504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:32:32.417892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:32:32.427707 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:32:32.428100 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:32:32.436306 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:32:32.447770 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:32:32.465381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:32:32.471636 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:32:32.481044 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:32:32.490416 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:32:32.501632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:32:32.508149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:32:32.508516 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:32:32.508796 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:32:32.517530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:32:32.517891 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:32:32.526152 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:32:32.526555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:32:32.534059 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:32:32.534450 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:32:32.542840 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:32:32.543153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:32:32.545566 systemd-resolved[1645]: Positive Trust Anchors: Feb 13 15:32:32.545582 systemd-resolved[1645]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:32:32.545614 systemd-resolved[1645]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:32:32.554953 systemd[1]: Finished ensure-sysext.service. Feb 13 15:32:32.563864 systemd-resolved[1645]: Using system hostname 'ci-4230.0.1-a-cf53dd3440'. Feb 13 15:32:32.565936 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:32:32.573969 systemd[1]: Reached target network.target - Network. 
Feb 13 15:32:32.581239 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:32:32.593455 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:32:32.593547 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:32:32.635706 augenrules[1690]: No rules Feb 13 15:32:32.636427 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:32:32.636860 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:32:33.093438 systemd-networkd[1348]: eth0: Gained IPv6LL Feb 13 15:32:33.095655 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:32:33.103985 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:32:33.274969 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:32:33.283493 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:32:38.380940 ldconfig[1304]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:32:38.396483 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:32:38.413519 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:32:38.422192 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:32:38.430310 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:32:38.436368 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:32:38.443215 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:32:38.451096 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:32:38.457998 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:32:38.466169 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:32:38.473500 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:32:38.473543 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:32:38.479358 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:32:38.516400 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:32:38.524910 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:32:38.533042 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:32:38.540547 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:32:38.547818 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:32:38.563199 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:32:38.569574 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:32:38.577161 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
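The run of "Listening on ..." entries above reflects systemd socket activation: systemd binds sshd.socket, docker.socket and the rest itself, and starts the owning service only on first use, handing it the already-bound socket. A generic Python sketch of the receiving side of that protocol, following sd_listen_fds(3) (inherited descriptors start at fd 3, announced via LISTEN_PID and LISTEN_FDS); this illustrates the handoff mechanism, not how sshd or docker actually consume it:

    #!/usr/bin/env python3
    # Receiving side of systemd socket activation.
    import os
    import socket

    SD_LISTEN_FDS_START = 3  # first inherited fd, per sd_listen_fds(3)

    def activated_sockets() -> list:
        # systemd sets LISTEN_PID to our PID and LISTEN_FDS to the fd count;
        # anything else means we were not socket-activated.
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        count = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

    if __name__ == "__main__":
        for sock in activated_sockets():
            conn, _peer = sock.accept()  # assumes a listening SOCK_STREAM socket
            conn.sendall(b"hello from a socket-activated service\n")
            conn.close()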
Feb 13 15:32:38.583523 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:32:38.589551 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:32:38.595803 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:32:38.595838 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:32:38.605394 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 15:32:38.613522 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:32:38.623576 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:32:38.631572 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:32:38.647748 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:32:38.655551 (chronyd)[1703]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 15:32:38.658815 jq[1710]: false Feb 13 15:32:38.659463 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:32:38.667396 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:32:38.667457 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 15:32:38.669303 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 15:32:38.676128 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 15:32:38.680930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:32:38.689994 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:32:38.701139 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:32:38.709447 KVP[1712]: KVP starting; pid is:1712 Feb 13 15:32:38.713149 chronyd[1719]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 15:32:38.713629 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:32:38.723615 KVP[1712]: KVP LIC Version: 3.1 Feb 13 15:32:38.725012 kernel: hv_utils: KVP IC version 4.0 Feb 13 15:32:38.726508 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:32:38.738802 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:32:38.751108 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:32:38.759610 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:32:38.759925 chronyd[1719]: Timezone right/UTC failed leap second check, ignoring Feb 13 15:32:38.760904 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 13 15:32:38.760140 chronyd[1719]: Loaded seccomp filter (level 2) Feb 13 15:32:38.772178 extend-filesystems[1711]: Found loop4 Feb 13 15:32:38.772178 extend-filesystems[1711]: Found loop5 Feb 13 15:32:38.772178 extend-filesystems[1711]: Found loop6 Feb 13 15:32:38.772178 extend-filesystems[1711]: Found loop7 Feb 13 15:32:38.772178 extend-filesystems[1711]: Found sda Feb 13 15:32:38.772178 extend-filesystems[1711]: Found sda1 Feb 13 15:32:38.772178 extend-filesystems[1711]: Found sda2 Feb 13 15:32:38.772178 extend-filesystems[1711]: Found sda3 Feb 13 15:32:38.772178 extend-filesystems[1711]: Found usr Feb 13 15:32:38.772178 extend-filesystems[1711]: Found sda4 Feb 13 15:32:38.772178 extend-filesystems[1711]: Found sda6 Feb 13 15:32:38.772178 extend-filesystems[1711]: Found sda7 Feb 13 15:32:38.772178 extend-filesystems[1711]: Found sda9 Feb 13 15:32:38.772178 extend-filesystems[1711]: Checking size of /dev/sda9 Feb 13 15:32:38.769512 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:32:38.794322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:32:38.805947 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 15:32:38.878355 jq[1732]: true Feb 13 15:32:38.849658 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:32:38.849874 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:32:38.855658 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:32:38.857317 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:32:38.881992 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:32:38.882202 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:32:38.900050 (ntainerd)[1745]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:32:38.911290 extend-filesystems[1711]: Old size kept for /dev/sda9 Feb 13 15:32:38.911290 extend-filesystems[1711]: Found sr0 Feb 13 15:32:38.927625 dbus-daemon[1706]: [system] SELinux support is enabled Feb 13 15:32:38.921933 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:32:38.922869 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:32:38.954061 jq[1742]: true Feb 13 15:32:38.935070 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:32:38.946699 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:32:38.969994 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:32:38.970060 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:32:38.982321 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:32:38.990142 update_engine[1730]: I20250213 15:32:38.989398 1730 main.cc:92] Flatcar Update Engine starting Feb 13 15:32:38.982923 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 15:32:39.000475 update_engine[1730]: I20250213 15:32:38.994014 1730 update_check_scheduler.cc:74] Next update check in 7m10s Feb 13 15:32:38.994679 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:32:39.001187 coreos-metadata[1705]: Feb 13 15:32:39.001 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:32:39.011955 systemd-logind[1726]: New seat seat0. Feb 13 15:32:39.014695 systemd-logind[1726]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 13 15:32:39.024908 coreos-metadata[1705]: Feb 13 15:32:39.017 INFO Fetch successful Feb 13 15:32:39.024908 coreos-metadata[1705]: Feb 13 15:32:39.017 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 15:32:39.024908 coreos-metadata[1705]: Feb 13 15:32:39.024 INFO Fetch successful Feb 13 15:32:39.017598 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:32:39.025122 coreos-metadata[1705]: Feb 13 15:32:39.025 INFO Fetching http://168.63.129.16/machine/5819e7c3-1d9a-4e0d-898d-544ebbfc5d27/a122df97%2D69a4%2D4e0b%2Da449%2D534a688f9a09.%5Fci%2D4230.0.1%2Da%2Dcf53dd3440?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 15:32:39.029036 coreos-metadata[1705]: Feb 13 15:32:39.028 INFO Fetch successful Feb 13 15:32:39.029705 coreos-metadata[1705]: Feb 13 15:32:39.029 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:32:39.030165 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:32:39.042924 coreos-metadata[1705]: Feb 13 15:32:39.041 INFO Fetch successful Feb 13 15:32:39.053835 tar[1739]: linux-arm64/LICENSE Feb 13 15:32:39.053835 tar[1739]: linux-arm64/helm Feb 13 15:32:39.114303 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1775) Feb 13 15:32:39.124852 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:32:39.134866 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:32:39.223765 bash[1807]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:32:39.224354 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:32:39.240840 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:32:39.543418 locksmithd[1773]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:32:39.807609 tar[1739]: linux-arm64/README.md Feb 13 15:32:39.836878 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:32:39.852864 containerd[1745]: time="2025-02-13T15:32:39.852610480Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:32:39.910747 containerd[1745]: time="2025-02-13T15:32:39.910678320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:39.916485 containerd[1745]: time="2025-02-13T15:32:39.916418840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:32:39.916485 containerd[1745]: time="2025-02-13T15:32:39.916474080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:32:39.916485 containerd[1745]: time="2025-02-13T15:32:39.916497240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:32:39.916888 containerd[1745]: time="2025-02-13T15:32:39.916690440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:32:39.916888 containerd[1745]: time="2025-02-13T15:32:39.916718600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:39.916888 containerd[1745]: time="2025-02-13T15:32:39.916804000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:32:39.916888 containerd[1745]: time="2025-02-13T15:32:39.916816800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:39.917301 containerd[1745]: time="2025-02-13T15:32:39.917088760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:32:39.917301 containerd[1745]: time="2025-02-13T15:32:39.917114000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:39.917301 containerd[1745]: time="2025-02-13T15:32:39.917129400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:32:39.917301 containerd[1745]: time="2025-02-13T15:32:39.917139520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:39.917301 containerd[1745]: time="2025-02-13T15:32:39.917225440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:39.917497 containerd[1745]: time="2025-02-13T15:32:39.917470480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:32:39.917647 containerd[1745]: time="2025-02-13T15:32:39.917624640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:32:39.917647 containerd[1745]: time="2025-02-13T15:32:39.917645000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:32:39.918329 containerd[1745]: time="2025-02-13T15:32:39.917721600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 15:32:39.918329 containerd[1745]: time="2025-02-13T15:32:39.917773360Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:32:39.932613 containerd[1745]: time="2025-02-13T15:32:39.932560400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:32:39.932743 containerd[1745]: time="2025-02-13T15:32:39.932650120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:32:39.932743 containerd[1745]: time="2025-02-13T15:32:39.932668040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:32:39.932743 containerd[1745]: time="2025-02-13T15:32:39.932685640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:32:39.932743 containerd[1745]: time="2025-02-13T15:32:39.932703160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.933577840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934061240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934242880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934294000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934319600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934340840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934359200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934377840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934398040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934419400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934437760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934455280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934472080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 15:32:39.935827 containerd[1745]: time="2025-02-13T15:32:39.934499520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.936221 containerd[1745]: time="2025-02-13T15:32:39.934527320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.936221 containerd[1745]: time="2025-02-13T15:32:39.934547120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.936221 containerd[1745]: time="2025-02-13T15:32:39.934574680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.936221 containerd[1745]: time="2025-02-13T15:32:39.934587840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.937163 containerd[1745]: time="2025-02-13T15:32:39.934606200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.937365 containerd[1745]: time="2025-02-13T15:32:39.937342720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.938436 containerd[1745]: time="2025-02-13T15:32:39.938253720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.938436 containerd[1745]: time="2025-02-13T15:32:39.938300360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.938436 containerd[1745]: time="2025-02-13T15:32:39.938323960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.938436 containerd[1745]: time="2025-02-13T15:32:39.938338000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.938436 containerd[1745]: time="2025-02-13T15:32:39.938361880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.938436 containerd[1745]: time="2025-02-13T15:32:39.938377400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.938436 containerd[1745]: time="2025-02-13T15:32:39.938393040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:32:39.938436 containerd[1745]: time="2025-02-13T15:32:39.938420240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.938819 containerd[1745]: time="2025-02-13T15:32:39.938671760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.938819 containerd[1745]: time="2025-02-13T15:32:39.938693080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:32:39.938819 containerd[1745]: time="2025-02-13T15:32:39.938770480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:32:39.938819 containerd[1745]: time="2025-02-13T15:32:39.938790880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:32:39.938930 containerd[1745]: time="2025-02-13T15:32:39.938802840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:32:39.938989 containerd[1745]: time="2025-02-13T15:32:39.938974160Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:32:39.939051 containerd[1745]: time="2025-02-13T15:32:39.939039600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.939172 containerd[1745]: time="2025-02-13T15:32:39.939157560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:32:39.939244 containerd[1745]: time="2025-02-13T15:32:39.939232000Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:32:39.939321 containerd[1745]: time="2025-02-13T15:32:39.939308400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:32:39.940857 containerd[1745]: time="2025-02-13T15:32:39.940707480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:32:39.941447 containerd[1745]: time="2025-02-13T15:32:39.941028840Z" level=info msg="Connect containerd service" Feb 13 15:32:39.941447 containerd[1745]: time="2025-02-13T15:32:39.941384840Z" level=info msg="using legacy CRI server" Feb 13 15:32:39.941447 containerd[1745]: time="2025-02-13T15:32:39.941401520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:32:39.942842 containerd[1745]: time="2025-02-13T15:32:39.942423840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:32:39.944577 containerd[1745]: time="2025-02-13T15:32:39.944196200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:32:39.944577 containerd[1745]: time="2025-02-13T15:32:39.944296720Z" level=info msg="Start subscribing containerd event" Feb 13 15:32:39.944577 containerd[1745]: time="2025-02-13T15:32:39.944359880Z" level=info msg="Start recovering state" Feb 13 15:32:39.944577 containerd[1745]: time="2025-02-13T15:32:39.944431680Z" level=info msg="Start event monitor" Feb 13 15:32:39.944577 containerd[1745]: time="2025-02-13T15:32:39.944443080Z" level=info msg="Start snapshots syncer" Feb 13 15:32:39.944577 containerd[1745]: time="2025-02-13T15:32:39.944454960Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:32:39.944577 containerd[1745]: time="2025-02-13T15:32:39.944463840Z" level=info msg="Start streaming server" Feb 13 15:32:39.947888 containerd[1745]: time="2025-02-13T15:32:39.947853640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:32:39.956309 containerd[1745]: time="2025-02-13T15:32:39.949103280Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:32:39.956309 containerd[1745]: time="2025-02-13T15:32:39.955479560Z" level=info msg="containerd successfully booted in 0.103784s" Feb 13 15:32:39.949318 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:32:40.044577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:32:40.051459 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:32:40.450000 kubelet[1875]: E0213 15:32:40.449944 1875 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:32:40.453501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:32:40.453787 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:32:40.455548 systemd[1]: kubelet.service: Consumed 731ms CPU time, 249.3M memory peak. Feb 13 15:32:40.928430 sshd_keygen[1737]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:32:40.948253 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
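The kubelet exit above is expected at this point in boot: kubelet.service starts before any kubeadm run has written /var/lib/kubelet/config.yaml, fails on the missing file, and systemd keeps restarting it (restart counters 1 through 5 appear later in this log) until kubeadm init or kubeadm join creates the config. Confirming that diagnosis on the node:

    # The error names the exact file; once kubeadm writes it, the restarts stop.
    test -f /var/lib/kubelet/config.yaml || echo "no kubeadm-generated kubelet config yet"
    systemctl status kubelet --no-pager
    journalctl -u kubelet -n 20 --no-pager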
Feb 13 15:32:40.968794 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:32:40.975560 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 15:32:40.983090 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:32:40.983498 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:32:40.999973 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:32:41.007852 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 15:32:41.090221 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:32:41.102679 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:32:41.109823 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:32:41.117918 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:32:41.124081 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:32:41.130792 systemd[1]: Startup finished in 707ms (kernel) + 15.724s (initrd) + 28.500s (userspace) = 44.932s. Feb 13 15:32:41.696160 login[1904]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Feb 13 15:32:41.697763 login[1905]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:41.711338 systemd-logind[1726]: New session 2 of user core. Feb 13 15:32:41.712717 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:32:41.722573 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:32:41.779649 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:32:41.793949 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:32:41.798126 (systemd)[1912]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:32:41.801793 systemd-logind[1726]: New session c1 of user core. Feb 13 15:32:42.016200 systemd[1912]: Queued start job for default target default.target. Feb 13 15:32:42.021327 systemd[1912]: Created slice app.slice - User Application Slice. Feb 13 15:32:42.021368 systemd[1912]: Reached target paths.target - Paths. Feb 13 15:32:42.021416 systemd[1912]: Reached target timers.target - Timers. Feb 13 15:32:42.022924 systemd[1912]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:32:42.034212 systemd[1912]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:32:42.034574 systemd[1912]: Reached target sockets.target - Sockets. Feb 13 15:32:42.034639 systemd[1912]: Reached target basic.target - Basic System. Feb 13 15:32:42.034675 systemd[1912]: Reached target default.target - Main User Target. Feb 13 15:32:42.034706 systemd[1912]: Startup finished in 224ms. Feb 13 15:32:42.035140 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:32:42.042450 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:32:42.697020 login[1904]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:42.701662 systemd-logind[1726]: New session 1 of user core. Feb 13 15:32:42.707455 systemd[1]: Started session-1.scope - Session 1 of User core. 
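The pam_lastlog "locked/write, retrying" line is the two autologin consoles (tty1 and the ttyAMA0 serial getty) racing to update /var/log/lastlog; one login retries and both sessions open a moment apart. The resulting sessions can be inspected with loginctl (the session IDs below match the log; the property output shape may vary):

    loginctl list-sessions
    loginctl show-session 1 -p Name -p TTY -p State
    loginctl show-session 2 -p Name -p TTY -p State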
Feb 13 15:32:44.924640 waagent[1901]: 2025-02-13T15:32:44.924532Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 15:32:44.931896 waagent[1901]: 2025-02-13T15:32:44.931805Z INFO Daemon Daemon OS: flatcar 4230.0.1 Feb 13 15:32:44.938406 waagent[1901]: 2025-02-13T15:32:44.938315Z INFO Daemon Daemon Python: 3.11.11 Feb 13 15:32:44.943217 waagent[1901]: 2025-02-13T15:32:44.943136Z INFO Daemon Daemon Run daemon Feb 13 15:32:44.947542 waagent[1901]: 2025-02-13T15:32:44.947474Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.0.1' Feb 13 15:32:44.956704 waagent[1901]: 2025-02-13T15:32:44.956620Z INFO Daemon Daemon Using waagent for provisioning Feb 13 15:32:44.962031 waagent[1901]: 2025-02-13T15:32:44.961971Z INFO Daemon Daemon Activate resource disk Feb 13 15:32:44.968179 waagent[1901]: 2025-02-13T15:32:44.968109Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 15:32:44.982090 waagent[1901]: 2025-02-13T15:32:44.982006Z INFO Daemon Daemon Found device: None Feb 13 15:32:44.986890 waagent[1901]: 2025-02-13T15:32:44.986824Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 15:32:44.995397 waagent[1901]: 2025-02-13T15:32:44.995331Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 15:32:45.007137 waagent[1901]: 2025-02-13T15:32:45.007036Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:32:45.013096 waagent[1901]: 2025-02-13T15:32:45.013031Z INFO Daemon Daemon Running default provisioning handler Feb 13 15:32:45.025929 waagent[1901]: 2025-02-13T15:32:45.025810Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 15:32:45.041935 waagent[1901]: 2025-02-13T15:32:45.041845Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 15:32:45.051826 waagent[1901]: 2025-02-13T15:32:45.051744Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 15:32:45.057110 waagent[1901]: 2025-02-13T15:32:45.057043Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 15:32:45.076074 waagent[1901]: 2025-02-13T15:32:45.074761Z INFO Daemon Daemon Successfully mounted dvd Feb 13 15:32:45.121558 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 13 15:32:45.124398 waagent[1901]: 2025-02-13T15:32:45.124234Z INFO Daemon Daemon Detect protocol endpoint Feb 13 15:32:45.129830 waagent[1901]: 2025-02-13T15:32:45.129749Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:32:45.136194 waagent[1901]: 2025-02-13T15:32:45.136123Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 13 15:32:45.143109 waagent[1901]: 2025-02-13T15:32:45.143036Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 15:32:45.148694 waagent[1901]: 2025-02-13T15:32:45.148628Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 15:32:45.154252 waagent[1901]: 2025-02-13T15:32:45.154185Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 15:32:45.226184 waagent[1901]: 2025-02-13T15:32:45.226057Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 15:32:45.232890 waagent[1901]: 2025-02-13T15:32:45.232837Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 15:32:45.238425 waagent[1901]: 2025-02-13T15:32:45.238358Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 15:32:46.050783 waagent[1901]: 2025-02-13T15:32:46.050663Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 15:32:46.058288 waagent[1901]: 2025-02-13T15:32:46.057875Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 15:32:46.074349 waagent[1901]: 2025-02-13T15:32:46.074254Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:32:46.121471 waagent[1901]: 2025-02-13T15:32:46.121421Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 15:32:46.127572 waagent[1901]: 2025-02-13T15:32:46.127518Z INFO Daemon Feb 13 15:32:46.130467 waagent[1901]: 2025-02-13T15:32:46.130403Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9154ad26-e153-47fa-9248-b61e3768d9ab eTag: 13969390169361743863 source: Fabric] Feb 13 15:32:46.142108 waagent[1901]: 2025-02-13T15:32:46.142051Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 15:32:46.149364 waagent[1901]: 2025-02-13T15:32:46.149307Z INFO Daemon Feb 13 15:32:46.152195 waagent[1901]: 2025-02-13T15:32:46.152132Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:32:46.163251 waagent[1901]: 2025-02-13T15:32:46.163205Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 15:32:46.264798 waagent[1901]: 2025-02-13T15:32:46.264685Z INFO Daemon Downloaded certificate {'thumbprint': 'F9515E45F13CAD1B7479DBD91010DC6C08ECE7A6', 'hasPrivateKey': True} Feb 13 15:32:46.274937 waagent[1901]: 2025-02-13T15:32:46.274880Z INFO Daemon Downloaded certificate {'thumbprint': 'F73F6058D08E43125699A5F9ACB729549BB3768A', 'hasPrivateKey': False} Feb 13 15:32:46.285133 waagent[1901]: 2025-02-13T15:32:46.285069Z INFO Daemon Fetch goal state completed Feb 13 15:32:46.297116 waagent[1901]: 2025-02-13T15:32:46.297053Z INFO Daemon Daemon Starting provisioning Feb 13 15:32:46.302225 waagent[1901]: 2025-02-13T15:32:46.302110Z INFO Daemon Daemon Handle ovf-env.xml. Feb 13 15:32:46.307141 waagent[1901]: 2025-02-13T15:32:46.307080Z INFO Daemon Daemon Set hostname [ci-4230.0.1-a-cf53dd3440] Feb 13 15:32:46.341088 waagent[1901]: 2025-02-13T15:32:46.340993Z INFO Daemon Daemon Publish hostname [ci-4230.0.1-a-cf53dd3440] Feb 13 15:32:46.348501 waagent[1901]: 2025-02-13T15:32:46.348407Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 15:32:46.355131 waagent[1901]: 2025-02-13T15:32:46.355059Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 15:32:46.368830 systemd-networkd[1348]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:32:46.369450 systemd-networkd[1348]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
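Both agents in this log talk to the same two Azure endpoints: coreos-metadata and waagent use the WireServer at 168.63.129.16 (versions probe, goal state, shared config), and coreos-metadata also queries the Instance Metadata Service at 169.254.169.254 for the VM size. The probes can be replayed by hand; the URLs are taken verbatim from the log, and IMDS requires the Metadata header plus a proxy bypass:

    ip route get 168.63.129.16                              # the route test waagent performs above
    curl -s "http://168.63.129.16/?comp=versions" | head    # the unauthenticated versions probe
    curl -s -H "Metadata: true" --noproxy "*" \
      "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"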
Feb 13 15:32:46.369493 systemd-networkd[1348]: eth0: DHCP lease lost Feb 13 15:32:46.372329 waagent[1901]: 2025-02-13T15:32:46.370004Z INFO Daemon Daemon Create user account if not exists Feb 13 15:32:46.376226 waagent[1901]: 2025-02-13T15:32:46.376154Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 15:32:46.382500 waagent[1901]: 2025-02-13T15:32:46.382414Z INFO Daemon Daemon Configure sudoer Feb 13 15:32:46.387804 waagent[1901]: 2025-02-13T15:32:46.387717Z INFO Daemon Daemon Configure sshd Feb 13 15:32:46.392670 waagent[1901]: 2025-02-13T15:32:46.392594Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 15:32:46.405920 waagent[1901]: 2025-02-13T15:32:46.405777Z INFO Daemon Daemon Deploy ssh public key. Feb 13 15:32:46.422419 systemd-networkd[1348]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 13 15:32:46.507251 waagent[1901]: 2025-02-13T15:32:46.507132Z INFO Daemon Daemon Decode custom data Feb 13 15:32:46.512082 waagent[1901]: 2025-02-13T15:32:46.512010Z INFO Daemon Daemon Save custom data Feb 13 15:32:47.576341 waagent[1901]: 2025-02-13T15:32:47.576234Z INFO Daemon Daemon Provisioning complete Feb 13 15:32:47.593430 waagent[1901]: 2025-02-13T15:32:47.593370Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 15:32:47.600002 waagent[1901]: 2025-02-13T15:32:47.599925Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 13 15:32:47.609521 waagent[1901]: 2025-02-13T15:32:47.609437Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 15:32:47.763323 waagent[1965]: 2025-02-13T15:32:47.763197Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 15:32:47.763639 waagent[1965]: 2025-02-13T15:32:47.763423Z INFO ExtHandler ExtHandler OS: flatcar 4230.0.1 Feb 13 15:32:47.763639 waagent[1965]: 2025-02-13T15:32:47.763482Z INFO ExtHandler ExtHandler Python: 3.11.11 Feb 13 15:32:47.860932 waagent[1965]: 2025-02-13T15:32:47.860765Z INFO ExtHandler ExtHandler Distro: flatcar-4230.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 15:32:47.861084 waagent[1965]: 2025-02-13T15:32:47.861038Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:32:47.861150 waagent[1965]: 2025-02-13T15:32:47.861118Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:32:47.870186 waagent[1965]: 2025-02-13T15:32:47.870092Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:32:47.876717 waagent[1965]: 2025-02-13T15:32:47.876665Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 15:32:47.877401 waagent[1965]: 2025-02-13T15:32:47.877260Z INFO ExtHandler Feb 13 15:32:47.877497 waagent[1965]: 2025-02-13T15:32:47.877461Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 08f2bbb5-2f20-4a67-9006-7a22925262b1 eTag: 13969390169361743863 source: Fabric] Feb 13 15:32:47.877850 waagent[1965]: 2025-02-13T15:32:47.877805Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
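After "Configure sshd" the agent drops a configuration snippet that, per the log, disables password authentication and enables keep-alive probing. The effective values can be read back from sshd's dump of its merged configuration:

    sudo sshd -T | grep -Ei 'passwordauthentication|clientalive'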
Feb 13 15:32:47.878528 waagent[1965]: 2025-02-13T15:32:47.878474Z INFO ExtHandler Feb 13 15:32:47.878608 waagent[1965]: 2025-02-13T15:32:47.878573Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:32:47.882770 waagent[1965]: 2025-02-13T15:32:47.882723Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 15:32:47.973734 waagent[1965]: 2025-02-13T15:32:47.973620Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F9515E45F13CAD1B7479DBD91010DC6C08ECE7A6', 'hasPrivateKey': True} Feb 13 15:32:47.974201 waagent[1965]: 2025-02-13T15:32:47.974149Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F73F6058D08E43125699A5F9ACB729549BB3768A', 'hasPrivateKey': False} Feb 13 15:32:47.974693 waagent[1965]: 2025-02-13T15:32:47.974641Z INFO ExtHandler Fetch goal state completed Feb 13 15:32:47.989325 waagent[1965]: 2025-02-13T15:32:47.989223Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1965 Feb 13 15:32:47.989570 waagent[1965]: 2025-02-13T15:32:47.989512Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 15:32:47.991441 waagent[1965]: 2025-02-13T15:32:47.991381Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.0.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 15:32:47.991863 waagent[1965]: 2025-02-13T15:32:47.991820Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 15:32:48.037973 waagent[1965]: 2025-02-13T15:32:48.037923Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 15:32:48.038183 waagent[1965]: 2025-02-13T15:32:48.038141Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 15:32:48.044791 waagent[1965]: 2025-02-13T15:32:48.044199Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 15:32:48.051252 systemd[1]: Reload requested from client PID 1980 ('systemctl') (unit waagent.service)... Feb 13 15:32:48.051287 systemd[1]: Reloading... Feb 13 15:32:48.171480 zram_generator::config[2025]: No configuration found. Feb 13 15:32:48.272460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:32:48.392026 systemd[1]: Reloading finished in 340 ms. Feb 13 15:32:48.420300 waagent[1965]: 2025-02-13T15:32:48.414489Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 15:32:48.421569 systemd[1]: Reload requested from client PID 2073 ('systemctl') (unit waagent.service)... Feb 13 15:32:48.421689 systemd[1]: Reloading... Feb 13 15:32:48.536298 zram_generator::config[2127]: No configuration found. Feb 13 15:32:48.640125 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:32:48.761325 systemd[1]: Reloading finished in 339 ms. 
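The docker.socket notice is systemd rewriting the legacy /var/run path at unit load time; it is harmless, but the update it asks for is a one-line change to ListenStream. A sketch as a drop-in (the empty ListenStream= clears the inherited list first, as systemd list options require):

    sudo mkdir -p /etc/systemd/system/docker.socket.d
    printf '[Socket]\nListenStream=\nListenStream=/run/docker.sock\n' |
      sudo tee /etc/systemd/system/docker.socket.d/10-runpath.conf
    sudo systemctl daemon-reload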
Feb 13 15:32:48.777168 waagent[1965]: 2025-02-13T15:32:48.777035Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 15:32:48.777562 waagent[1965]: 2025-02-13T15:32:48.777304Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 15:32:50.173749 waagent[1965]: 2025-02-13T15:32:50.173646Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 15:32:50.176550 waagent[1965]: 2025-02-13T15:32:50.176471Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 15:32:50.177587 waagent[1965]: 2025-02-13T15:32:50.177480Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 15:32:50.178030 waagent[1965]: 2025-02-13T15:32:50.177909Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 15:32:50.178408 waagent[1965]: 2025-02-13T15:32:50.178351Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:32:50.179438 waagent[1965]: 2025-02-13T15:32:50.178504Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:32:50.179438 waagent[1965]: 2025-02-13T15:32:50.178600Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:32:50.179438 waagent[1965]: 2025-02-13T15:32:50.178750Z INFO EnvHandler ExtHandler Configure routes Feb 13 15:32:50.179438 waagent[1965]: 2025-02-13T15:32:50.178817Z INFO EnvHandler ExtHandler Gateway:None Feb 13 15:32:50.179438 waagent[1965]: 2025-02-13T15:32:50.178864Z INFO EnvHandler ExtHandler Routes:None Feb 13 15:32:50.179752 waagent[1965]: 2025-02-13T15:32:50.179703Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:32:50.180074 waagent[1965]: 2025-02-13T15:32:50.180025Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 13 15:32:50.180321 waagent[1965]: 2025-02-13T15:32:50.180232Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 15:32:50.180709 waagent[1965]: 2025-02-13T15:32:50.180658Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 15:32:50.180709 waagent[1965]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 15:32:50.180709 waagent[1965]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 15:32:50.180709 waagent[1965]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 15:32:50.180709 waagent[1965]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:32:50.180709 waagent[1965]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:32:50.180709 waagent[1965]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:32:50.181420 waagent[1965]: 2025-02-13T15:32:50.181338Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 15:32:50.181778 waagent[1965]: 2025-02-13T15:32:50.181731Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
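The /proc/net/route dump above encodes addresses as little-endian hex: byte-reversing 0114C80A gives 0A.C8.14.01, i.e. 10.200.20.1, the DHCP gateway acquired earlier. Any Destination/Gateway/Mask field can be decoded the same way:

    h=0114C80A    # any hex address field from /proc/net/route
    printf '%d.%d.%d.%d\n' 0x${h:6:2} 0x${h:4:2} 0x${h:2:2} 0x${h:0:2}    # -> 10.200.20.1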
Feb 13 15:32:50.182875 waagent[1965]: 2025-02-13T15:32:50.181685Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 15:32:50.182875 waagent[1965]: 2025-02-13T15:32:50.182444Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 15:32:50.190383 waagent[1965]: 2025-02-13T15:32:50.190030Z INFO ExtHandler ExtHandler Feb 13 15:32:50.190383 waagent[1965]: 2025-02-13T15:32:50.190152Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a8a4ff8c-bbf6-4a7f-aa09-ad7119545c4b correlation 938fc81f-a448-4562-8134-9af9ef019679 created: 2025-02-13T15:31:10.706383Z] Feb 13 15:32:50.190612 waagent[1965]: 2025-02-13T15:32:50.190561Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 13 15:32:50.191696 waagent[1965]: 2025-02-13T15:32:50.191643Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Feb 13 15:32:50.224125 waagent[1965]: 2025-02-13T15:32:50.224052Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F72724DA-7C8E-4CCE-A61E-F76B4FC3F3B1;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 15:32:50.294848 waagent[1965]: 2025-02-13T15:32:50.293789Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 13 15:32:50.294848 waagent[1965]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:32:50.294848 waagent[1965]: pkts bytes target prot opt in out source destination Feb 13 15:32:50.294848 waagent[1965]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:32:50.294848 waagent[1965]: pkts bytes target prot opt in out source destination Feb 13 15:32:50.294848 waagent[1965]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:32:50.294848 waagent[1965]: pkts bytes target prot opt in out source destination Feb 13 15:32:50.294848 waagent[1965]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:32:50.294848 waagent[1965]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:32:50.294848 waagent[1965]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:32:50.297413 waagent[1965]: 2025-02-13T15:32:50.297329Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 15:32:50.297413 waagent[1965]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:32:50.297413 waagent[1965]: pkts bytes target prot opt in out source destination Feb 13 15:32:50.297413 waagent[1965]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:32:50.297413 waagent[1965]: pkts bytes target prot opt in out source destination Feb 13 15:32:50.297413 waagent[1965]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:32:50.297413 waagent[1965]: pkts bytes target prot opt in out source destination Feb 13 15:32:50.297413 waagent[1965]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:32:50.297413 waagent[1965]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:32:50.297413 waagent[1965]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:32:50.297992 waagent[1965]: 2025-02-13T15:32:50.297958Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 15:32:50.329082 waagent[1965]: 2025-02-13T15:32:50.329007Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 15:32:50.329082 waagent[1965]: 
Executing ['ip', '-a', '-o', 'link']: Feb 13 15:32:50.329082 waagent[1965]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 15:32:50.329082 waagent[1965]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:c3:92 brd ff:ff:ff:ff:ff:ff Feb 13 15:32:50.329082 waagent[1965]: 3: enP11151s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b5:c3:92 brd ff:ff:ff:ff:ff:ff\ altname enP11151p0s2 Feb 13 15:32:50.329082 waagent[1965]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 15:32:50.329082 waagent[1965]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 15:32:50.329082 waagent[1965]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 15:32:50.329082 waagent[1965]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 15:32:50.329082 waagent[1965]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 15:32:50.329082 waagent[1965]: 2: eth0 inet6 fe80::222:48ff:feb5:c392/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:32:50.329082 waagent[1965]: 3: enP11151s1 inet6 fe80::222:48ff:feb5:c392/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:32:50.473953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:32:50.481477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:32:50.595179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:32:50.600221 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:32:50.665744 kubelet[2207]: E0213 15:32:50.665684 2207 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:32:50.668440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:32:50.668578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:32:50.668968 systemd[1]: kubelet.service: Consumed 139ms CPU time, 102.1M memory peak. Feb 13 15:33:00.724011 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:33:00.732525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:01.117274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
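The firewall dump a few lines up amounts to three OUTPUT rules guarding the WireServer: permit DNS to it, permit traffic from root-owned processes, and drop new or invalid connections from everything else. A reconstruction as iptables invocations (waagent maintains these rules itself, so this is for reading rather than for running on such a VM):

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP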
Feb 13 15:33:01.131568 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:01.171409 kubelet[2223]: E0213 15:33:01.171318 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:01.173611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:01.173764 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:01.174244 systemd[1]: kubelet.service: Consumed 130ms CPU time, 99.4M memory peak. Feb 13 15:33:02.554108 chronyd[1719]: Selected source PHC0 Feb 13 15:33:11.223975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:33:11.231527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:11.565198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:11.569939 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:11.606926 kubelet[2238]: E0213 15:33:11.606790 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:11.609121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:11.609303 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:11.609747 systemd[1]: kubelet.service: Consumed 135ms CPU time, 102.1M memory peak. Feb 13 15:33:18.235049 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:33:18.242545 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:43426.service - OpenSSH per-connection server daemon (10.200.16.10:43426). Feb 13 15:33:18.911001 sshd[2246]: Accepted publickey for core from 10.200.16.10 port 43426 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:33:18.912412 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:18.916746 systemd-logind[1726]: New session 3 of user core. Feb 13 15:33:18.925509 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:33:18.957583 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 13 15:33:19.327036 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:37408.service - OpenSSH per-connection server daemon (10.200.16.10:37408). Feb 13 15:33:19.739751 sshd[2251]: Accepted publickey for core from 10.200.16.10 port 37408 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:33:19.741158 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:19.745616 systemd-logind[1726]: New session 4 of user core. Feb 13 15:33:19.756437 systemd[1]: Started session-4.scope - Session 4 of User core. 
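"Selected source PHC0" means chrony settled on the PTP hardware clock that Hyper-V exposes to the guest rather than a network NTP server. The selection and current offset can be checked with:

    chronyc sources -v
    chronyc tracking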
Feb 13 15:33:20.040577 sshd[2253]: Connection closed by 10.200.16.10 port 37408 Feb 13 15:33:20.039994 sshd-session[2251]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:20.043346 systemd-logind[1726]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:33:20.043638 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:37408.service: Deactivated successfully. Feb 13 15:33:20.045519 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:33:20.047127 systemd-logind[1726]: Removed session 4. Feb 13 15:33:20.123164 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:37412.service - OpenSSH per-connection server daemon (10.200.16.10:37412). Feb 13 15:33:20.578136 sshd[2259]: Accepted publickey for core from 10.200.16.10 port 37412 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:33:20.579526 sshd-session[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:20.583946 systemd-logind[1726]: New session 5 of user core. Feb 13 15:33:20.594651 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:33:20.914019 sshd[2261]: Connection closed by 10.200.16.10 port 37412 Feb 13 15:33:20.914579 sshd-session[2259]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:20.918491 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:37412.service: Deactivated successfully. Feb 13 15:33:20.920213 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:33:20.920972 systemd-logind[1726]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:33:20.922036 systemd-logind[1726]: Removed session 5. Feb 13 15:33:20.995076 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:37426.service - OpenSSH per-connection server daemon (10.200.16.10:37426). Feb 13 15:33:21.441593 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 37426 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:33:21.442974 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:21.449012 systemd-logind[1726]: New session 6 of user core. Feb 13 15:33:21.455519 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:33:21.694358 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:33:21.705648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:21.766305 sshd[2269]: Connection closed by 10.200.16.10 port 37426 Feb 13 15:33:21.766323 sshd-session[2267]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:21.769306 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:37426.service: Deactivated successfully. Feb 13 15:33:21.771201 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:33:21.773257 systemd-logind[1726]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:33:21.774381 systemd-logind[1726]: Removed session 6. Feb 13 15:33:21.862157 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:37436.service - OpenSSH per-connection server daemon (10.200.16.10:37436). Feb 13 15:33:21.929913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:33:21.934873 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:21.978661 kubelet[2285]: E0213 15:33:21.978575 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:21.980977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:21.981132 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:21.981637 systemd[1]: kubelet.service: Consumed 137ms CPU time, 102M memory peak. Feb 13 15:33:22.319007 sshd[2278]: Accepted publickey for core from 10.200.16.10 port 37436 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:33:22.320397 sshd-session[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:22.325148 systemd-logind[1726]: New session 7 of user core. Feb 13 15:33:22.331512 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:33:22.739150 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:33:22.739447 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:22.800460 sudo[2293]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:22.884378 sshd[2292]: Connection closed by 10.200.16.10 port 37436 Feb 13 15:33:22.883541 sshd-session[2278]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:22.887767 systemd-logind[1726]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:33:22.888451 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:37436.service: Deactivated successfully. Feb 13 15:33:22.890198 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:33:22.892198 systemd-logind[1726]: Removed session 7. Feb 13 15:33:22.963350 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:37450.service - OpenSSH per-connection server daemon (10.200.16.10:37450). Feb 13 15:33:23.384366 sshd[2299]: Accepted publickey for core from 10.200.16.10 port 37450 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:33:23.385733 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:23.391466 systemd-logind[1726]: New session 8 of user core. Feb 13 15:33:23.397465 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:33:23.623851 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:33:23.624126 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:23.628002 sudo[2303]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:23.633216 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:33:23.633517 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:23.648522 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:33:23.669915 augenrules[2325]: No rules Feb 13 15:33:23.671583 systemd[1]: audit-rules.service: Deactivated successfully. 
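The two sudo invocations above amount to deleting the shipped audit rule files and reloading, which is why augenrules reports "No rules" and the service restart completes with an empty ruleset. The same sequence by hand, with a verification step:

    sudo rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    sudo auditctl -l    # prints "No rules" once the set is empty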
Feb 13 15:33:23.671947 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:33:23.673260 sudo[2302]: pam_unix(sudo:session): session closed for user root Feb 13 15:33:23.738794 sshd[2301]: Connection closed by 10.200.16.10 port 37450 Feb 13 15:33:23.739384 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:23.743579 systemd-logind[1726]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:33:23.744692 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:37450.service: Deactivated successfully. Feb 13 15:33:23.747942 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:33:23.749651 systemd-logind[1726]: Removed session 8. Feb 13 15:33:23.822129 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:37466.service - OpenSSH per-connection server daemon (10.200.16.10:37466). Feb 13 15:33:24.199516 update_engine[1730]: I20250213 15:33:24.198819 1730 update_attempter.cc:509] Updating boot flags... Feb 13 15:33:24.266449 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2351) Feb 13 15:33:24.281946 sshd[2334]: Accepted publickey for core from 10.200.16.10 port 37466 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:33:24.281792 sshd-session[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:24.290004 systemd-logind[1726]: New session 9 of user core. Feb 13 15:33:24.303599 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:33:24.536890 sudo[2400]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:33:24.537179 sudo[2400]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:33:26.697568 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:33:26.697713 (dockerd)[2418]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:33:28.257549 dockerd[2418]: time="2025-02-13T15:33:28.257480871Z" level=info msg="Starting up" Feb 13 15:33:28.676457 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3385576983-merged.mount: Deactivated successfully. Feb 13 15:33:28.752160 dockerd[2418]: time="2025-02-13T15:33:28.751933387Z" level=info msg="Loading containers: start." Feb 13 15:33:29.027302 kernel: Initializing XFRM netlink socket Feb 13 15:33:29.337494 systemd-networkd[1348]: docker0: Link UP Feb 13 15:33:29.383422 dockerd[2418]: time="2025-02-13T15:33:29.382737263Z" level=info msg="Loading containers: done." Feb 13 15:33:29.421763 dockerd[2418]: time="2025-02-13T15:33:29.421715297Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:33:29.422102 dockerd[2418]: time="2025-02-13T15:33:29.422083018Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:33:29.422304 dockerd[2418]: time="2025-02-13T15:33:29.422287858Z" level=info msg="Daemon has completed initialization" Feb 13 15:33:29.505981 dockerd[2418]: time="2025-02-13T15:33:29.505901892Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:33:29.506259 systemd[1]: Started docker.service - Docker Application Container Engine. 
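Docker comes up on the overlay2 storage driver and warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; that is a build-performance note, not an error. Confirming the driver after startup:

    docker info --format '{{.Driver}}'                   # expect: overlay2
    docker info 2>/dev/null | grep -iA3 'storage driver'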
Feb 13 15:33:29.672741 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1576813368-merged.mount: Deactivated successfully. Feb 13 15:33:30.153787 containerd[1745]: time="2025-02-13T15:33:30.153735982Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 15:33:31.167216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount490900688.mount: Deactivated successfully. Feb 13 15:33:32.223860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:33:32.233553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:32.344102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:32.354828 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:32.399805 kubelet[2660]: E0213 15:33:32.399745 2660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:32.402462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:32.402757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:32.403376 systemd[1]: kubelet.service: Consumed 135ms CPU time, 101.2M memory peak. Feb 13 15:33:33.431824 containerd[1745]: time="2025-02-13T15:33:33.431333991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:33.435353 containerd[1745]: time="2025-02-13T15:33:33.434683514Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218236" Feb 13 15:33:33.440914 containerd[1745]: time="2025-02-13T15:33:33.440856239Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:33.447860 containerd[1745]: time="2025-02-13T15:33:33.447792085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:33.449113 containerd[1745]: time="2025-02-13T15:33:33.448873406Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 3.295090064s" Feb 13 15:33:33.449113 containerd[1745]: time="2025-02-13T15:33:33.448919566Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 15:33:33.449729 containerd[1745]: time="2025-02-13T15:33:33.449686727Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 15:33:35.477811 containerd[1745]: time="2025-02-13T15:33:35.477751735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:35.480441 containerd[1745]: time="2025-02-13T15:33:35.480364657Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528145" Feb 13 15:33:35.483400 containerd[1745]: time="2025-02-13T15:33:35.483346340Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:35.494044 containerd[1745]: time="2025-02-13T15:33:35.493980110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:35.494903 containerd[1745]: time="2025-02-13T15:33:35.494762550Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 2.044840663s" Feb 13 15:33:35.494903 containerd[1745]: time="2025-02-13T15:33:35.494793310Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 15:33:35.495951 containerd[1745]: time="2025-02-13T15:33:35.495397751Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 15:33:37.687309 containerd[1745]: time="2025-02-13T15:33:37.686788814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:37.690210 containerd[1745]: time="2025-02-13T15:33:37.689954857Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480800" Feb 13 15:33:37.694963 containerd[1745]: time="2025-02-13T15:33:37.694910421Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:37.702086 containerd[1745]: time="2025-02-13T15:33:37.702006428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:37.703354 containerd[1745]: time="2025-02-13T15:33:37.703180269Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 2.207742838s" Feb 13 15:33:37.703354 containerd[1745]: time="2025-02-13T15:33:37.703224469Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 15:33:37.704053 containerd[1745]: time="2025-02-13T15:33:37.704006790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 15:33:39.121400 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3928673588.mount: Deactivated successfully. Feb 13 15:33:39.508272 containerd[1745]: time="2025-02-13T15:33:39.508205062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:39.512851 containerd[1745]: time="2025-02-13T15:33:39.512796267Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363382" Feb 13 15:33:39.521708 containerd[1745]: time="2025-02-13T15:33:39.521672475Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:39.535055 containerd[1745]: time="2025-02-13T15:33:39.534983807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:39.535893 containerd[1745]: time="2025-02-13T15:33:39.535559767Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.831401977s" Feb 13 15:33:39.535893 containerd[1745]: time="2025-02-13T15:33:39.535598967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 15:33:39.536179 containerd[1745]: time="2025-02-13T15:33:39.536137008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 15:33:40.317985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3867251737.mount: Deactivated successfully. 
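Each control-plane pull above logs both the mutable repo tag and the content-addressed repo digest; the digest is what containerd actually stores and trusts. The same pull can be replayed by hand through the CRI socket, for example with crictl (an illustration, not a command from this log; assumes crictl is pointed at containerd's socket):

    crictl pull registry.k8s.io/kube-proxy:v1.32.2
    crictl images --digests | grep kube-proxy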
Feb 13 15:33:42.355312 containerd[1745]: time="2025-02-13T15:33:42.354387838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:42.357691 containerd[1745]: time="2025-02-13T15:33:42.357409441Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Feb 13 15:33:42.363021 containerd[1745]: time="2025-02-13T15:33:42.362963846Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:42.369154 containerd[1745]: time="2025-02-13T15:33:42.369095332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:42.373614 containerd[1745]: time="2025-02-13T15:33:42.372738575Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.836562287s" Feb 13 15:33:42.373614 containerd[1745]: time="2025-02-13T15:33:42.372796735Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 15:33:42.374389 containerd[1745]: time="2025-02-13T15:33:42.374351056Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:33:42.473828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:33:42.484469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:42.606658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:42.617627 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:42.656924 kubelet[2747]: E0213 15:33:42.656789 2747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:42.659247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:42.659430 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:42.659962 systemd[1]: kubelet.service: Consumed 140ms CPU time, 100.2M memory peak. Feb 13 15:33:43.749540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95415748.mount: Deactivated successfully. 
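The kubelet failure here is identical to the earlier one; only the restart counter has advanced. The pacing comes from the unit's restart policy rather than anything kubelet-specific: the stock kubeadm-style kubelet unit carries roughly

    [Service]
    Restart=always
    RestartSec=10

which matches the counters in this log advancing at 15:33:32, 15:33:42 and 15:33:52, exactly ten seconds apart. (The [Service] excerpt is a sketch of the upstream default, not read from this host.)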
Feb 13 15:33:43.789298 containerd[1745]: time="2025-02-13T15:33:43.789144389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:43.795221 containerd[1745]: time="2025-02-13T15:33:43.795156115Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 15:33:43.802383 containerd[1745]: time="2025-02-13T15:33:43.802330921Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:43.808854 containerd[1745]: time="2025-02-13T15:33:43.808778607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:43.809720 containerd[1745]: time="2025-02-13T15:33:43.809588288Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.435112432s" Feb 13 15:33:43.809720 containerd[1745]: time="2025-02-13T15:33:43.809625088Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 15:33:43.810580 containerd[1745]: time="2025-02-13T15:33:43.810286728Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 15:33:44.597758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4155828096.mount: Deactivated successfully. Feb 13 15:33:48.664595 containerd[1745]: time="2025-02-13T15:33:48.663362571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:48.667188 containerd[1745]: time="2025-02-13T15:33:48.667126135Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Feb 13 15:33:48.671461 containerd[1745]: time="2025-02-13T15:33:48.671391979Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:48.677557 containerd[1745]: time="2025-02-13T15:33:48.677476704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:48.679331 containerd[1745]: time="2025-02-13T15:33:48.678805545Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.868482417s" Feb 13 15:33:48.679331 containerd[1745]: time="2025-02-13T15:33:48.678853985Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 15:33:52.723837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
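With registry.k8s.io/etcd:3.5.16-0 pulled, the node now holds the full v1.32 control-plane image set (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd). The same set can be listed or pre-fetched explicitly, which takes the pulls out of the critical path of kubeadm init (illustrative commands, not from this log):

    kubeadm config images list --kubernetes-version v1.32.2
    kubeadm config images pull --kubernetes-version v1.32.2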
Feb 13 15:33:52.733783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:52.844450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:52.850409 (kubelet)[2840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:33:52.895700 kubelet[2840]: E0213 15:33:52.895654 2840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:33:52.898124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:33:52.898262 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:33:52.899035 systemd[1]: kubelet.service: Consumed 133ms CPU time, 101.7M memory peak. Feb 13 15:33:53.859216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:53.859944 systemd[1]: kubelet.service: Consumed 133ms CPU time, 101.7M memory peak. Feb 13 15:33:53.870548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:53.901466 systemd[1]: Reload requested from client PID 2855 ('systemctl') (unit session-9.scope)... Feb 13 15:33:53.901485 systemd[1]: Reloading... Feb 13 15:33:54.032297 zram_generator::config[2902]: No configuration found. Feb 13 15:33:54.151414 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:33:54.270823 systemd[1]: Reloading finished in 368 ms. Feb 13 15:33:54.315312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:54.327776 (kubelet)[2959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:33:54.328361 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:54.329786 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:33:54.330033 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:54.330104 systemd[1]: kubelet.service: Consumed 93ms CPU time, 90.1M memory peak. Feb 13 15:33:54.333491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:33:54.456378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:33:54.461110 (kubelet)[2972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:33:54.503428 kubelet[2972]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:33:54.503428 kubelet[2972]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 15:33:54.503428 kubelet[2972]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:33:54.503804 kubelet[2972]: I0213 15:33:54.503494 2972 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:33:55.023419 kubelet[2972]: I0213 15:33:55.023374 2972 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 15:33:55.023419 kubelet[2972]: I0213 15:33:55.023409 2972 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:33:55.023927 kubelet[2972]: I0213 15:33:55.023902 2972 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 15:33:55.045797 kubelet[2972]: E0213 15:33:55.045745 2972 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:33:55.048290 kubelet[2972]: I0213 15:33:55.048196 2972 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:33:55.055063 kubelet[2972]: E0213 15:33:55.054854 2972 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:33:55.055063 kubelet[2972]: I0213 15:33:55.054905 2972 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:33:55.058327 kubelet[2972]: I0213 15:33:55.058300 2972 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:33:55.059148 kubelet[2972]: I0213 15:33:55.059099 2972 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:33:55.059381 kubelet[2972]: I0213 15:33:55.059152 2972 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.1-a-cf53dd3440","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:33:55.059501 kubelet[2972]: I0213 15:33:55.059393 2972 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:33:55.059501 kubelet[2972]: I0213 15:33:55.059402 2972 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 15:33:55.059577 kubelet[2972]: I0213 15:33:55.059552 2972 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:55.062248 kubelet[2972]: I0213 15:33:55.062217 2972 kubelet.go:446] "Attempting to sync node with API server" Feb 13 15:33:55.062361 kubelet[2972]: I0213 15:33:55.062342 2972 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:33:55.062391 kubelet[2972]: I0213 15:33:55.062372 2972 kubelet.go:352] "Adding apiserver pod source" Feb 13 15:33:55.062391 kubelet[2972]: I0213 15:33:55.062384 2972 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:33:55.068019 kubelet[2972]: I0213 15:33:55.067645 2972 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:33:55.068239 kubelet[2972]: I0213 15:33:55.068214 2972 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:33:55.069319 kubelet[2972]: W0213 15:33:55.068308 2972 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
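This attempt is different from the crash loop above: /var/lib/kubelet/config.yaml now exists and kubelet parses it, and the deprecation warnings concern flags that should migrate into that file. Two of them map directly onto KubeletConfiguration fields; a sketch of the equivalents (the socket path is an assumption, the plugin dir is the one this log itself reports):

    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

--pod-infra-container-image has no config-file twin; per the warning it is scheduled for removal in 1.35, after which the sandbox image comes from the CRI runtime's own configuration.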
Feb 13 15:33:55.069319 kubelet[2972]: I0213 15:33:55.068936 2972 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 15:33:55.069319 kubelet[2972]: I0213 15:33:55.068966 2972 server.go:1287] "Started kubelet" Feb 13 15:33:55.069319 kubelet[2972]: W0213 15:33:55.069106 2972 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-cf53dd3440&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Feb 13 15:33:55.069319 kubelet[2972]: E0213 15:33:55.069152 2972 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-cf53dd3440&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:33:55.072994 kubelet[2972]: I0213 15:33:55.072934 2972 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:33:55.074366 kubelet[2972]: I0213 15:33:55.074174 2972 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:33:55.076543 kubelet[2972]: I0213 15:33:55.076511 2972 server.go:490] "Adding debug handlers to kubelet server" Feb 13 15:33:55.079284 kubelet[2972]: I0213 15:33:55.079205 2972 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:33:55.079678 kubelet[2972]: I0213 15:33:55.079660 2972 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:33:55.081190 kubelet[2972]: E0213 15:33:55.081051 2972 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.1-a-cf53dd3440.1823ce69d6fd2b20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.1-a-cf53dd3440,UID:ci-4230.0.1-a-cf53dd3440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.1-a-cf53dd3440,},FirstTimestamp:2025-02-13 15:33:55.068947232 +0000 UTC m=+0.604386340,LastTimestamp:2025-02-13 15:33:55.068947232 +0000 UTC m=+0.604386340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.1-a-cf53dd3440,}" Feb 13 15:33:55.083533 kubelet[2972]: W0213 15:33:55.083481 2972 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Feb 13 15:33:55.085372 kubelet[2972]: E0213 15:33:55.085326 2972 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:33:55.085372 kubelet[2972]: I0213 15:33:55.084773 2972 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 15:33:55.085596 kubelet[2972]: I0213 
15:33:55.083725 2972 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:33:55.085636 kubelet[2972]: I0213 15:33:55.084785 2972 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:33:55.085668 kubelet[2972]: I0213 15:33:55.085651 2972 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:33:55.085668 kubelet[2972]: E0213 15:33:55.084929 2972 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" Feb 13 15:33:55.086858 kubelet[2972]: E0213 15:33:55.086364 2972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-cf53dd3440?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms" Feb 13 15:33:55.086858 kubelet[2972]: W0213 15:33:55.086777 2972 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Feb 13 15:33:55.086858 kubelet[2972]: E0213 15:33:55.086835 2972 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:33:55.088088 kubelet[2972]: I0213 15:33:55.088002 2972 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:33:55.088170 kubelet[2972]: I0213 15:33:55.088107 2972 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:33:55.089257 kubelet[2972]: E0213 15:33:55.089226 2972 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:33:55.089832 kubelet[2972]: I0213 15:33:55.089801 2972 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:33:55.098981 kubelet[2972]: I0213 15:33:55.098807 2972 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:33:55.099965 kubelet[2972]: I0213 15:33:55.099942 2972 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:33:55.100402 kubelet[2972]: I0213 15:33:55.100037 2972 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 15:33:55.100402 kubelet[2972]: I0213 15:33:55.100068 2972 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 15:33:55.100402 kubelet[2972]: I0213 15:33:55.100075 2972 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 15:33:55.100402 kubelet[2972]: E0213 15:33:55.100119 2972 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:33:55.107072 kubelet[2972]: W0213 15:33:55.107023 2972 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Feb 13 15:33:55.107255 kubelet[2972]: E0213 15:33:55.107081 2972 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:33:55.113653 kubelet[2972]: I0213 15:33:55.113368 2972 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 15:33:55.113653 kubelet[2972]: I0213 15:33:55.113388 2972 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 15:33:55.113653 kubelet[2972]: I0213 15:33:55.113410 2972 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:33:55.119618 kubelet[2972]: I0213 15:33:55.119594 2972 policy_none.go:49] "None policy: Start" Feb 13 15:33:55.119763 kubelet[2972]: I0213 15:33:55.119753 2972 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 15:33:55.120069 kubelet[2972]: I0213 15:33:55.119822 2972 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:33:55.132362 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:33:55.147715 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:33:55.151247 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:33:55.160326 kubelet[2972]: I0213 15:33:55.160290 2972 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:33:55.160534 kubelet[2972]: I0213 15:33:55.160511 2972 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:33:55.160968 kubelet[2972]: I0213 15:33:55.160529 2972 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:33:55.160968 kubelet[2972]: I0213 15:33:55.160897 2972 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:33:55.162638 kubelet[2972]: E0213 15:33:55.162512 2972 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 15:33:55.162734 kubelet[2972]: E0213 15:33:55.162672 2972 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.0.1-a-cf53dd3440\" not found" Feb 13 15:33:55.212487 systemd[1]: Created slice kubepods-burstable-pode047278cfc6f97d5ad4de9940897c98f.slice - libcontainer container kubepods-burstable-pode047278cfc6f97d5ad4de9940897c98f.slice. 
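Before any pod starts, the kubelet lays down its cgroup hierarchy: kubepods.slice, then the QoS slices (kubepods-burstable.slice, kubepods-besteffort.slice), then one slice per static pod found under /etc/kubernetes/manifests; the pod UIDs embedded in the slice names here (e047..., c88b..., 7152...) are the same UIDs used for the sandboxes further down. On a live node the hierarchy can be inspected with, for example (hypothetical, not run on this host):

    systemd-cgls --no-pager /kubepods.slice

Mirror-pod creation is still failing at this point because the API server at 10.200.20.11:6443 is not yet reachable.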
Feb 13 15:33:55.230912 kubelet[2972]: E0213 15:33:55.230852 2972 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.235051 systemd[1]: Created slice kubepods-burstable-podc88be74d5b2a9346761621102c71d463.slice - libcontainer container kubepods-burstable-podc88be74d5b2a9346761621102c71d463.slice. Feb 13 15:33:55.238311 kubelet[2972]: E0213 15:33:55.238233 2972 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.239967 systemd[1]: Created slice kubepods-burstable-pod715297f21ed8e2b16f1ee19aba25d303.slice - libcontainer container kubepods-burstable-pod715297f21ed8e2b16f1ee19aba25d303.slice. Feb 13 15:33:55.241829 kubelet[2972]: E0213 15:33:55.241798 2972 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.262908 kubelet[2972]: I0213 15:33:55.262414 2972 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.262908 kubelet[2972]: E0213 15:33:55.262877 2972 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.286826 kubelet[2972]: I0213 15:33:55.286711 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e047278cfc6f97d5ad4de9940897c98f-ca-certs\") pod \"kube-apiserver-ci-4230.0.1-a-cf53dd3440\" (UID: \"e047278cfc6f97d5ad4de9940897c98f\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.287256 kubelet[2972]: I0213 15:33:55.286967 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e047278cfc6f97d5ad4de9940897c98f-k8s-certs\") pod \"kube-apiserver-ci-4230.0.1-a-cf53dd3440\" (UID: \"e047278cfc6f97d5ad4de9940897c98f\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.287256 kubelet[2972]: I0213 15:33:55.286993 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e047278cfc6f97d5ad4de9940897c98f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.1-a-cf53dd3440\" (UID: \"e047278cfc6f97d5ad4de9940897c98f\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.287256 kubelet[2972]: I0213 15:33:55.287032 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c88be74d5b2a9346761621102c71d463-ca-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-cf53dd3440\" (UID: \"c88be74d5b2a9346761621102c71d463\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.287256 kubelet[2972]: E0213 15:33:55.286758 2972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-cf53dd3440?timeout=10s\": dial tcp 10.200.20.11:6443: connect: 
connection refused" interval="400ms" Feb 13 15:33:55.387556 kubelet[2972]: I0213 15:33:55.387321 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c88be74d5b2a9346761621102c71d463-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.1-a-cf53dd3440\" (UID: \"c88be74d5b2a9346761621102c71d463\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.387556 kubelet[2972]: I0213 15:33:55.387374 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c88be74d5b2a9346761621102c71d463-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.1-a-cf53dd3440\" (UID: \"c88be74d5b2a9346761621102c71d463\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.387556 kubelet[2972]: I0213 15:33:55.387393 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/715297f21ed8e2b16f1ee19aba25d303-kubeconfig\") pod \"kube-scheduler-ci-4230.0.1-a-cf53dd3440\" (UID: \"715297f21ed8e2b16f1ee19aba25d303\") " pod="kube-system/kube-scheduler-ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.387556 kubelet[2972]: I0213 15:33:55.387442 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c88be74d5b2a9346761621102c71d463-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-cf53dd3440\" (UID: \"c88be74d5b2a9346761621102c71d463\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.387556 kubelet[2972]: I0213 15:33:55.387460 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c88be74d5b2a9346761621102c71d463-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.1-a-cf53dd3440\" (UID: \"c88be74d5b2a9346761621102c71d463\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.465290 kubelet[2972]: I0213 15:33:55.465207 2972 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.465807 kubelet[2972]: E0213 15:33:55.465775 2972 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.532534 containerd[1745]: time="2025-02-13T15:33:55.532420974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.1-a-cf53dd3440,Uid:e047278cfc6f97d5ad4de9940897c98f,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:55.540242 containerd[1745]: time="2025-02-13T15:33:55.540135140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.1-a-cf53dd3440,Uid:c88be74d5b2a9346761621102c71d463,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:55.543165 containerd[1745]: time="2025-02-13T15:33:55.543097103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.1-a-cf53dd3440,Uid:715297f21ed8e2b16f1ee19aba25d303,Namespace:kube-system,Attempt:0,}" Feb 13 15:33:55.687863 kubelet[2972]: E0213 15:33:55.687804 2972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-cf53dd3440?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Feb 13 15:33:55.867821 kubelet[2972]: I0213 15:33:55.867359 2972 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:55.867821 kubelet[2972]: E0213 15:33:55.867728 2972 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:56.182622 kubelet[2972]: W0213 15:33:56.182500 2972 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Feb 13 15:33:56.182622 kubelet[2972]: E0213 15:33:56.182546 2972 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:33:56.235591 kubelet[2972]: W0213 15:33:56.235540 2972 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-cf53dd3440&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Feb 13 15:33:56.235678 kubelet[2972]: E0213 15:33:56.235608 2972 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.1-a-cf53dd3440&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:33:56.275872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820491507.mount: Deactivated successfully. 
Feb 13 15:33:56.302296 containerd[1745]: time="2025-02-13T15:33:56.301799248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:56.316237 containerd[1745]: time="2025-02-13T15:33:56.316176380Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 15:33:56.325068 containerd[1745]: time="2025-02-13T15:33:56.325015747Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:56.336701 containerd[1745]: time="2025-02-13T15:33:56.335940036Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:56.339104 containerd[1745]: time="2025-02-13T15:33:56.339029679Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:33:56.341702 containerd[1745]: time="2025-02-13T15:33:56.341561521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:56.342761 containerd[1745]: time="2025-02-13T15:33:56.342718202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 810.217748ms" Feb 13 15:33:56.347605 containerd[1745]: time="2025-02-13T15:33:56.346840285Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:33:56.360291 containerd[1745]: time="2025-02-13T15:33:56.359550536Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:33:56.362352 containerd[1745]: time="2025-02-13T15:33:56.362309578Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 818.946995ms" Feb 13 15:33:56.364419 containerd[1745]: time="2025-02-13T15:33:56.364257620Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 824.0382ms" Feb 13 15:33:56.454799 kubelet[2972]: W0213 15:33:56.454367 2972 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Feb 13 15:33:56.454799 
kubelet[2972]: E0213 15:33:56.454413 2972 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:33:56.488766 kubelet[2972]: E0213 15:33:56.488720 2972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-cf53dd3440?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s" Feb 13 15:33:56.647693 kubelet[2972]: W0213 15:33:56.647603 2972 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Feb 13 15:33:56.647693 kubelet[2972]: E0213 15:33:56.647657 2972 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:33:56.669735 kubelet[2972]: I0213 15:33:56.669444 2972 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:56.669888 kubelet[2972]: E0213 15:33:56.669789 2972 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:56.787107 kubelet[2972]: E0213 15:33:56.787001 2972 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.1-a-cf53dd3440.1823ce69d6fd2b20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.1-a-cf53dd3440,UID:ci-4230.0.1-a-cf53dd3440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.1-a-cf53dd3440,},FirstTimestamp:2025-02-13 15:33:55.068947232 +0000 UTC m=+0.604386340,LastTimestamp:2025-02-13 15:33:55.068947232 +0000 UTC m=+0.604386340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.1-a-cf53dd3440,}" Feb 13 15:33:57.064922 kubelet[2972]: E0213 15:33:57.064570 2972 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:33:57.971407 containerd[1745]: time="2025-02-13T15:33:57.971128825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:57.971407 containerd[1745]: time="2025-02-13T15:33:57.971197265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:57.971407 containerd[1745]: time="2025-02-13T15:33:57.971212785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:57.971407 containerd[1745]: time="2025-02-13T15:33:57.971304305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:57.975822 containerd[1745]: time="2025-02-13T15:33:57.975460548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:57.975822 containerd[1745]: time="2025-02-13T15:33:57.975520268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:57.975822 containerd[1745]: time="2025-02-13T15:33:57.975531268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:57.975822 containerd[1745]: time="2025-02-13T15:33:57.975614109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:57.978139 containerd[1745]: time="2025-02-13T15:33:57.977934150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:57.978473 containerd[1745]: time="2025-02-13T15:33:57.978173351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:57.978473 containerd[1745]: time="2025-02-13T15:33:57.978196111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:57.988527 containerd[1745]: time="2025-02-13T15:33:57.982066634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:58.019569 systemd[1]: Started cri-containerd-4bbccf2fb6311d45add27b9d04572d5bf781c3d6806c7152c301e5372baceb52.scope - libcontainer container 4bbccf2fb6311d45add27b9d04572d5bf781c3d6806c7152c301e5372baceb52. Feb 13 15:33:58.021360 systemd[1]: Started cri-containerd-6dd1ff34c9c58d047c8209bdb143782cfc57185155c990a8c2f1cfe734fa62ce.scope - libcontainer container 6dd1ff34c9c58d047c8209bdb143782cfc57185155c990a8c2f1cfe734fa62ce. Feb 13 15:33:58.023798 systemd[1]: Started cri-containerd-8db62d0ee083736d9f028ae3a85ee84b1f7b5aa61f878f473905441e4c077bdf.scope - libcontainer container 8db62d0ee083736d9f028ae3a85ee84b1f7b5aa61f878f473905441e4c077bdf. 
Feb 13 15:33:58.074089 containerd[1745]: time="2025-02-13T15:33:58.073366189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.1-a-cf53dd3440,Uid:715297f21ed8e2b16f1ee19aba25d303,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dd1ff34c9c58d047c8209bdb143782cfc57185155c990a8c2f1cfe734fa62ce\"" Feb 13 15:33:58.083110 containerd[1745]: time="2025-02-13T15:33:58.082418077Z" level=info msg="CreateContainer within sandbox \"6dd1ff34c9c58d047c8209bdb143782cfc57185155c990a8c2f1cfe734fa62ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:33:58.087416 containerd[1745]: time="2025-02-13T15:33:58.087365881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.1-a-cf53dd3440,Uid:c88be74d5b2a9346761621102c71d463,Namespace:kube-system,Attempt:0,} returns sandbox id \"8db62d0ee083736d9f028ae3a85ee84b1f7b5aa61f878f473905441e4c077bdf\"" Feb 13 15:33:58.089620 kubelet[2972]: E0213 15:33:58.089578 2972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.1-a-cf53dd3440?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="3.2s" Feb 13 15:33:58.097846 containerd[1745]: time="2025-02-13T15:33:58.097811609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.1-a-cf53dd3440,Uid:e047278cfc6f97d5ad4de9940897c98f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bbccf2fb6311d45add27b9d04572d5bf781c3d6806c7152c301e5372baceb52\"" Feb 13 15:33:58.098293 containerd[1745]: time="2025-02-13T15:33:58.098063449Z" level=info msg="CreateContainer within sandbox \"8db62d0ee083736d9f028ae3a85ee84b1f7b5aa61f878f473905441e4c077bdf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:33:58.101723 containerd[1745]: time="2025-02-13T15:33:58.101655212Z" level=info msg="CreateContainer within sandbox \"4bbccf2fb6311d45add27b9d04572d5bf781c3d6806c7152c301e5372baceb52\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:33:58.189654 containerd[1745]: time="2025-02-13T15:33:58.189484645Z" level=info msg="CreateContainer within sandbox \"6dd1ff34c9c58d047c8209bdb143782cfc57185155c990a8c2f1cfe734fa62ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"34d89106a4a217ccaed55053695804f4414ee529e52fada72196fe7fb2715e8c\"" Feb 13 15:33:58.190582 containerd[1745]: time="2025-02-13T15:33:58.190109805Z" level=info msg="StartContainer for \"34d89106a4a217ccaed55053695804f4414ee529e52fada72196fe7fb2715e8c\"" Feb 13 15:33:58.196640 containerd[1745]: time="2025-02-13T15:33:58.196497451Z" level=info msg="CreateContainer within sandbox \"8db62d0ee083736d9f028ae3a85ee84b1f7b5aa61f878f473905441e4c077bdf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"51daacf007508f06e732a1f398fd92fc6f2653730feabdc4c3c4f9a8f9460bcf\"" Feb 13 15:33:58.199284 containerd[1745]: time="2025-02-13T15:33:58.199194053Z" level=info msg="StartContainer for \"51daacf007508f06e732a1f398fd92fc6f2653730feabdc4c3c4f9a8f9460bcf\"" Feb 13 15:33:58.206675 containerd[1745]: time="2025-02-13T15:33:58.206559819Z" level=info msg="CreateContainer within sandbox \"4bbccf2fb6311d45add27b9d04572d5bf781c3d6806c7152c301e5372baceb52\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"00df58865272c7d2f3cc25d3009b9928badf63b7e09c7407ae0920b95a819a22\"" 
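The sequence above is the CRI pod-start flow in miniature: each RunPodSandbox first needs a sandbox (pause) image, which is why registry.k8s.io/pause:3.8 appears here even though the kubelet fetched pause:3.10 earlier; 3.8 is containerd 1.7's default sandbox image, and that mismatch is exactly what the --pod-infra-container-image deprecation warning alluded to. Each sandbox and container then runs in its own cri-containerd-<id>.scope unit. Once the node is up, the same objects are visible through the CRI (illustrative commands):

    crictl pods
    crictl ps -a | grep kube-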
Feb 13 15:33:58.208043 containerd[1745]: time="2025-02-13T15:33:58.207785820Z" level=info msg="StartContainer for \"00df58865272c7d2f3cc25d3009b9928badf63b7e09c7407ae0920b95a819a22\"" Feb 13 15:33:58.219874 systemd[1]: Started cri-containerd-34d89106a4a217ccaed55053695804f4414ee529e52fada72196fe7fb2715e8c.scope - libcontainer container 34d89106a4a217ccaed55053695804f4414ee529e52fada72196fe7fb2715e8c. Feb 13 15:33:58.238501 systemd[1]: Started cri-containerd-51daacf007508f06e732a1f398fd92fc6f2653730feabdc4c3c4f9a8f9460bcf.scope - libcontainer container 51daacf007508f06e732a1f398fd92fc6f2653730feabdc4c3c4f9a8f9460bcf. Feb 13 15:33:58.251047 systemd[1]: Started cri-containerd-00df58865272c7d2f3cc25d3009b9928badf63b7e09c7407ae0920b95a819a22.scope - libcontainer container 00df58865272c7d2f3cc25d3009b9928badf63b7e09c7407ae0920b95a819a22. Feb 13 15:33:58.276345 kubelet[2972]: I0213 15:33:58.276257 2972 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:58.278214 kubelet[2972]: E0213 15:33:58.278077 2972 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:58.295296 containerd[1745]: time="2025-02-13T15:33:58.293717611Z" level=info msg="StartContainer for \"34d89106a4a217ccaed55053695804f4414ee529e52fada72196fe7fb2715e8c\" returns successfully" Feb 13 15:33:58.312380 containerd[1745]: time="2025-02-13T15:33:58.311797786Z" level=info msg="StartContainer for \"51daacf007508f06e732a1f398fd92fc6f2653730feabdc4c3c4f9a8f9460bcf\" returns successfully" Feb 13 15:33:58.317046 containerd[1745]: time="2025-02-13T15:33:58.316984310Z" level=info msg="StartContainer for \"00df58865272c7d2f3cc25d3009b9928badf63b7e09c7407ae0920b95a819a22\" returns successfully" Feb 13 15:33:59.121529 kubelet[2972]: E0213 15:33:59.119679 2972 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:59.125647 kubelet[2972]: E0213 15:33:59.125474 2972 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:33:59.125647 kubelet[2972]: E0213 15:33:59.125575 2972 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:00.129646 kubelet[2972]: E0213 15:34:00.129615 2972 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:00.130989 kubelet[2972]: E0213 15:34:00.130592 2972 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:00.130989 kubelet[2972]: E0213 15:34:00.130843 2972 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:01.075383 kubelet[2972]: I0213 15:34:01.075334 2972 apiserver.go:52] "Watching apiserver" Feb 13 15:34:01.086445 kubelet[2972]: I0213 15:34:01.086400 2972 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 15:34:01.130730 kubelet[2972]: E0213 15:34:01.130696 2972 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:01.131062 kubelet[2972]: E0213 15:34:01.131025 2972 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:01.215975 kubelet[2972]: E0213 15:34:01.215935 2972 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.0.1-a-cf53dd3440" not found Feb 13 15:34:01.301952 kubelet[2972]: E0213 15:34:01.301873 2972 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.0.1-a-cf53dd3440\" not found" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:01.480361 kubelet[2972]: I0213 15:34:01.480150 2972 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:01.487766 kubelet[2972]: I0213 15:34:01.487484 2972 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:01.586164 kubelet[2972]: I0213 15:34:01.585868 2972 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:01.601379 kubelet[2972]: W0213 15:34:01.601343 2972 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:34:01.601525 kubelet[2972]: I0213 15:34:01.601435 2972 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:01.609749 kubelet[2972]: W0213 15:34:01.609427 2972 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:34:01.609749 kubelet[2972]: I0213 15:34:01.609533 2972 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:01.617998 kubelet[2972]: W0213 15:34:01.617958 2972 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:34:02.907391 systemd[1]: Reload requested from client PID 3243 ('systemctl') (unit session-9.scope)... Feb 13 15:34:02.907716 systemd[1]: Reloading... Feb 13 15:34:03.017505 zram_generator::config[3288]: No configuration found. Feb 13 15:34:03.144471 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:34:03.278651 systemd[1]: Reloading finished in 370 ms. Feb 13 15:34:03.304222 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:34:03.311531 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:34:03.311781 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:34:03.311850 systemd[1]: kubelet.service: Consumed 982ms CPU time, 121.4M memory peak.
Feb 13 15:34:03.319647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:34:03.433371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:34:03.443679 (kubelet)[3354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:34:03.512689 kubelet[3354]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:34:03.512689 kubelet[3354]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 15:34:03.512689 kubelet[3354]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:34:03.513091 kubelet[3354]: I0213 15:34:03.512749 3354 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:34:03.521894 kubelet[3354]: I0213 15:34:03.521849 3354 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 15:34:03.521894 kubelet[3354]: I0213 15:34:03.521884 3354 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:34:03.522230 kubelet[3354]: I0213 15:34:03.522208 3354 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 15:34:03.523946 kubelet[3354]: I0213 15:34:03.523899 3354 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:34:03.526792 kubelet[3354]: I0213 15:34:03.526631 3354 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:34:03.530569 kubelet[3354]: E0213 15:34:03.530434 3354 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:34:03.530569 kubelet[3354]: I0213 15:34:03.530479 3354 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:34:03.536241 kubelet[3354]: I0213 15:34:03.535994 3354 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:34:03.536241 kubelet[3354]: I0213 15:34:03.536227 3354 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:34:03.536468 kubelet[3354]: I0213 15:34:03.536253 3354 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.1-a-cf53dd3440","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:34:03.536563 kubelet[3354]: I0213 15:34:03.536476 3354 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:34:03.536563 kubelet[3354]: I0213 15:34:03.536486 3354 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 15:34:03.536563 kubelet[3354]: I0213 15:34:03.536535 3354 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:34:03.536692 kubelet[3354]: I0213 15:34:03.536669 3354 kubelet.go:446] "Attempting to sync node with API server" Feb 13 15:34:03.536726 kubelet[3354]: I0213 15:34:03.536696 3354 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:34:03.537210 kubelet[3354]: I0213 15:34:03.537188 3354 kubelet.go:352] "Adding apiserver pod source" Feb 13 15:34:03.537259 kubelet[3354]: I0213 15:34:03.537213 3354 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:34:03.544295 kubelet[3354]: I0213 15:34:03.543865 3354 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:34:03.544427 kubelet[3354]: I0213 15:34:03.544411 3354 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:34:03.544903 kubelet[3354]: I0213 15:34:03.544882 3354 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 15:34:03.544945 kubelet[3354]: I0213 15:34:03.544923 3354 server.go:1287] "Started kubelet" Feb 13 15:34:03.549048 kubelet[3354]: I0213 15:34:03.549011 3354 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:34:03.559338 kubelet[3354]: I0213 15:34:03.559260 3354 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:34:03.560222 kubelet[3354]: I0213 15:34:03.560189 3354 server.go:490] "Adding debug handlers to kubelet server" Feb 13 15:34:03.561133 kubelet[3354]: I0213 15:34:03.561080 3354 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:34:03.564417 kubelet[3354]: I0213 15:34:03.564379 3354 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:34:03.564773 kubelet[3354]: I0213 15:34:03.564746 3354 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:34:03.567969 kubelet[3354]: I0213 15:34:03.567936 3354 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 15:34:03.568239 kubelet[3354]: E0213 15:34:03.568211 3354 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.0.1-a-cf53dd3440\" not found" Feb 13 15:34:03.572333 kubelet[3354]: I0213 15:34:03.572259 3354 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:34:03.572476 kubelet[3354]: I0213 15:34:03.572434 3354 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:34:03.575202 kubelet[3354]: I0213 15:34:03.574289 3354 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:34:03.576219 kubelet[3354]: I0213 15:34:03.576185 3354 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:34:03.576219 kubelet[3354]: I0213 15:34:03.576220 3354 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 15:34:03.576378 kubelet[3354]: I0213 15:34:03.576239 3354 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 15:34:03.576378 kubelet[3354]: I0213 15:34:03.576245 3354 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 15:34:03.576378 kubelet[3354]: E0213 15:34:03.576318 3354 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:34:03.589433 kubelet[3354]: E0213 15:34:03.588291 3354 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:34:03.590214 kubelet[3354]: I0213 15:34:03.589894 3354 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:34:03.590214 kubelet[3354]: I0213 15:34:03.589928 3354 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:34:03.590214 kubelet[3354]: I0213 15:34:03.590074 3354 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:34:03.634771 kubelet[3354]: I0213 15:34:03.634722 3354 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 15:34:03.634771 kubelet[3354]: I0213 15:34:03.634747 3354 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 15:34:03.634771 kubelet[3354]: I0213 15:34:03.634774 3354 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:34:03.635015 kubelet[3354]: I0213 15:34:03.634971 3354 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:34:03.635015 kubelet[3354]: I0213 15:34:03.634985 3354 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:34:03.635015 kubelet[3354]: I0213 15:34:03.635005 3354 policy_none.go:49] "None policy: Start" Feb 13 15:34:03.635015 kubelet[3354]: I0213 15:34:03.635017 3354 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 15:34:03.635143 kubelet[3354]: I0213 15:34:03.635028 3354 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:34:03.635143 kubelet[3354]: I0213 15:34:03.635132 3354 state_mem.go:75] "Updated machine memory state" Feb 13 15:34:03.639963 kubelet[3354]: I0213 15:34:03.639925 3354 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:34:03.640748 kubelet[3354]: I0213 15:34:03.640128 3354 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:34:03.640748 kubelet[3354]: I0213 15:34:03.640150 3354 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:34:03.640748 kubelet[3354]: I0213 15:34:03.640619 3354 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:34:03.643927 kubelet[3354]: E0213 15:34:03.643075 3354 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 15:34:03.677742 kubelet[3354]: I0213 15:34:03.677703 3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.678026 kubelet[3354]: I0213 15:34:03.677703 3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.678933 kubelet[3354]: I0213 15:34:03.678894 3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.694690 kubelet[3354]: W0213 15:34:03.694653 3354 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:34:03.694906 kubelet[3354]: E0213 15:34:03.694733 3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.0.1-a-cf53dd3440\" already exists" pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.696808 kubelet[3354]: W0213 15:34:03.696484 3354 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:34:03.696808 kubelet[3354]: E0213 15:34:03.696569 3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.0.1-a-cf53dd3440\" already exists" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.696808 kubelet[3354]: W0213 15:34:03.696670 3354 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:34:03.696808 kubelet[3354]: E0213 15:34:03.696694 3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.0.1-a-cf53dd3440\" already exists" pod="kube-system/kube-scheduler-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.743437 kubelet[3354]: I0213 15:34:03.743403 3354 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.758095 kubelet[3354]: I0213 15:34:03.758015 3354 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.758342 kubelet[3354]: I0213 15:34:03.758114 3354 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.773478 kubelet[3354]: I0213 15:34:03.773379 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c88be74d5b2a9346761621102c71d463-ca-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-cf53dd3440\" (UID: \"c88be74d5b2a9346761621102c71d463\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.773478 kubelet[3354]: I0213 15:34:03.773418 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c88be74d5b2a9346761621102c71d463-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.1-a-cf53dd3440\" (UID: \"c88be74d5b2a9346761621102c71d463\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.773478 kubelet[3354]: I0213 15:34:03.773438 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e047278cfc6f97d5ad4de9940897c98f-k8s-certs\") pod \"kube-apiserver-ci-4230.0.1-a-cf53dd3440\" (UID: \"e047278cfc6f97d5ad4de9940897c98f\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.773478 kubelet[3354]: I0213 15:34:03.773453 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e047278cfc6f97d5ad4de9940897c98f-ca-certs\") pod \"kube-apiserver-ci-4230.0.1-a-cf53dd3440\" (UID: \"e047278cfc6f97d5ad4de9940897c98f\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.773478 kubelet[3354]: I0213 15:34:03.773470 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e047278cfc6f97d5ad4de9940897c98f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.1-a-cf53dd3440\" (UID: \"e047278cfc6f97d5ad4de9940897c98f\") " pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.773729 kubelet[3354]: I0213 15:34:03.773486 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c88be74d5b2a9346761621102c71d463-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.1-a-cf53dd3440\" (UID: \"c88be74d5b2a9346761621102c71d463\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.773729 kubelet[3354]: I0213 15:34:03.773529 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c88be74d5b2a9346761621102c71d463-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.1-a-cf53dd3440\" (UID: \"c88be74d5b2a9346761621102c71d463\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.773729 kubelet[3354]: I0213 15:34:03.773552 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c88be74d5b2a9346761621102c71d463-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.1-a-cf53dd3440\" (UID: \"c88be74d5b2a9346761621102c71d463\") " pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.773729 kubelet[3354]: I0213 15:34:03.773631 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/715297f21ed8e2b16f1ee19aba25d303-kubeconfig\") pod \"kube-scheduler-ci-4230.0.1-a-cf53dd3440\" (UID: \"715297f21ed8e2b16f1ee19aba25d303\") " pod="kube-system/kube-scheduler-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:03.961729 sudo[3388]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:34:03.962026 sudo[3388]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:34:04.395484 sudo[3388]: pam_unix(sudo:session): session closed for user root Feb 13 15:34:04.539220 kubelet[3354]: I0213 15:34:04.538934 3354 apiserver.go:52] "Watching apiserver" Feb 13 15:34:04.572532 kubelet[3354]: I0213 15:34:04.572458 3354 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:34:04.609106 kubelet[3354]: I0213 15:34:04.609075 3354 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:04.618733 kubelet[3354]: W0213 15:34:04.618697 3354 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:34:04.618887 kubelet[3354]: E0213 15:34:04.618774 3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.0.1-a-cf53dd3440\" already exists" pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" Feb 13 15:34:04.635723 kubelet[3354]: I0213 15:34:04.634689 3354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.0.1-a-cf53dd3440" podStartSLOduration=3.634642426 podStartE2EDuration="3.634642426s" podCreationTimestamp="2025-02-13 15:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:04.634604586 +0000 UTC m=+1.187863523" watchObservedRunningTime="2025-02-13 15:34:04.634642426 +0000 UTC m=+1.187901363" Feb 13 15:34:04.648327 kubelet[3354]: I0213 15:34:04.648142 3354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.0.1-a-cf53dd3440" podStartSLOduration=3.648122995 podStartE2EDuration="3.648122995s" podCreationTimestamp="2025-02-13 15:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:04.647955115 +0000 UTC m=+1.201214052" watchObservedRunningTime="2025-02-13 15:34:04.648122995 +0000 UTC m=+1.201381932" Feb 13 15:34:04.676078 kubelet[3354]: I0213 15:34:04.675642 3354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.0.1-a-cf53dd3440" podStartSLOduration=3.675615975 podStartE2EDuration="3.675615975s" podCreationTimestamp="2025-02-13 15:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:04.662340165 +0000 UTC m=+1.215599102" watchObservedRunningTime="2025-02-13 15:34:04.675615975 +0000 UTC m=+1.228874872" Feb 13 15:34:06.495446 sudo[2400]: pam_unix(sudo:session): session closed for user root Feb 13 15:34:06.577676 sshd[2397]: Connection closed by 10.200.16.10 port 37466 Feb 13 15:34:06.578326 sshd-session[2334]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:06.583412 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:37466.service: Deactivated successfully. Feb 13 15:34:06.586507 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:34:06.586866 systemd[1]: session-9.scope: Consumed 7.146s CPU time, 260.5M memory peak. Feb 13 15:34:06.588731 systemd-logind[1726]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:34:06.589808 systemd-logind[1726]: Removed session 9. Feb 13 15:34:09.455433 kubelet[3354]: I0213 15:34:09.455395 3354 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:34:09.455917 containerd[1745]: time="2025-02-13T15:34:09.455758193Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:34:09.456103 kubelet[3354]: I0213 15:34:09.455979 3354 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:34:10.340315 systemd[1]: Created slice kubepods-besteffort-pod55abb47f_24ad_4c8c_8fc8_d5b1807b48a3.slice - libcontainer container kubepods-besteffort-pod55abb47f_24ad_4c8c_8fc8_d5b1807b48a3.slice. Feb 13 15:34:10.360126 systemd[1]: Created slice kubepods-burstable-podc1bdbe86_7278_4117_9aaa_ed59ed5c356b.slice - libcontainer container kubepods-burstable-podc1bdbe86_7278_4117_9aaa_ed59ed5c356b.slice. Feb 13 15:34:10.418574 kubelet[3354]: I0213 15:34:10.418539 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55abb47f-24ad-4c8c-8fc8-d5b1807b48a3-xtables-lock\") pod \"kube-proxy-4phvs\" (UID: \"55abb47f-24ad-4c8c-8fc8-d5b1807b48a3\") " pod="kube-system/kube-proxy-4phvs" Feb 13 15:34:10.418778 kubelet[3354]: I0213 15:34:10.418761 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-xtables-lock\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.418886 kubelet[3354]: I0213 15:34:10.418865 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vngq\" (UniqueName: \"kubernetes.io/projected/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-kube-api-access-8vngq\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.418933 kubelet[3354]: I0213 15:34:10.418903 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-clustermesh-secrets\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.418933 kubelet[3354]: I0213 15:34:10.418923 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-etc-cni-netd\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.418995 kubelet[3354]: I0213 15:34:10.418940 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-config-path\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.418995 kubelet[3354]: I0213 15:34:10.418959 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55abb47f-24ad-4c8c-8fc8-d5b1807b48a3-lib-modules\") pod \"kube-proxy-4phvs\" (UID: \"55abb47f-24ad-4c8c-8fc8-d5b1807b48a3\") " pod="kube-system/kube-proxy-4phvs" Feb 13 15:34:10.418995 kubelet[3354]: I0213 15:34:10.418974 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-hubble-tls\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " 
pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.419057 kubelet[3354]: I0213 15:34:10.418992 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kw49\" (UniqueName: \"kubernetes.io/projected/55abb47f-24ad-4c8c-8fc8-d5b1807b48a3-kube-api-access-6kw49\") pod \"kube-proxy-4phvs\" (UID: \"55abb47f-24ad-4c8c-8fc8-d5b1807b48a3\") " pod="kube-system/kube-proxy-4phvs" Feb 13 15:34:10.419057 kubelet[3354]: I0213 15:34:10.419011 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-bpf-maps\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.419057 kubelet[3354]: I0213 15:34:10.419026 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cni-path\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.419057 kubelet[3354]: I0213 15:34:10.419044 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-hostproc\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.419139 kubelet[3354]: I0213 15:34:10.419058 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-cgroup\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.419139 kubelet[3354]: I0213 15:34:10.419076 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55abb47f-24ad-4c8c-8fc8-d5b1807b48a3-kube-proxy\") pod \"kube-proxy-4phvs\" (UID: \"55abb47f-24ad-4c8c-8fc8-d5b1807b48a3\") " pod="kube-system/kube-proxy-4phvs" Feb 13 15:34:10.419139 kubelet[3354]: I0213 15:34:10.419102 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-run\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.419139 kubelet[3354]: I0213 15:34:10.419118 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-lib-modules\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.419139 kubelet[3354]: I0213 15:34:10.419134 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-host-proc-sys-net\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.419237 kubelet[3354]: I0213 15:34:10.419149 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-host-proc-sys-kernel\") pod \"cilium-dsmb4\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " pod="kube-system/cilium-dsmb4" Feb 13 15:34:10.561503 systemd[1]: Created slice kubepods-besteffort-poddf370ecf_b3e8_4ee5_9f47_ce946fe78ceb.slice - libcontainer container kubepods-besteffort-poddf370ecf_b3e8_4ee5_9f47_ce946fe78ceb.slice. Feb 13 15:34:10.620716 kubelet[3354]: I0213 15:34:10.620140 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df370ecf-b3e8-4ee5-9f47-ce946fe78ceb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8wlb4\" (UID: \"df370ecf-b3e8-4ee5-9f47-ce946fe78ceb\") " pod="kube-system/cilium-operator-6c4d7847fc-8wlb4" Feb 13 15:34:10.620716 kubelet[3354]: I0213 15:34:10.620185 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksgg6\" (UniqueName: \"kubernetes.io/projected/df370ecf-b3e8-4ee5-9f47-ce946fe78ceb-kube-api-access-ksgg6\") pod \"cilium-operator-6c4d7847fc-8wlb4\" (UID: \"df370ecf-b3e8-4ee5-9f47-ce946fe78ceb\") " pod="kube-system/cilium-operator-6c4d7847fc-8wlb4" Feb 13 15:34:10.656127 containerd[1745]: time="2025-02-13T15:34:10.656074457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4phvs,Uid:55abb47f-24ad-4c8c-8fc8-d5b1807b48a3,Namespace:kube-system,Attempt:0,}" Feb 13 15:34:10.664519 containerd[1745]: time="2025-02-13T15:34:10.664444904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dsmb4,Uid:c1bdbe86-7278-4117-9aaa-ed59ed5c356b,Namespace:kube-system,Attempt:0,}" Feb 13 15:34:10.740252 containerd[1745]: time="2025-02-13T15:34:10.740039526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:10.740252 containerd[1745]: time="2025-02-13T15:34:10.740097766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:10.740252 containerd[1745]: time="2025-02-13T15:34:10.740113926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:10.740576 containerd[1745]: time="2025-02-13T15:34:10.740328446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:10.747855 containerd[1745]: time="2025-02-13T15:34:10.747734852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:10.748110 containerd[1745]: time="2025-02-13T15:34:10.747871492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:10.748110 containerd[1745]: time="2025-02-13T15:34:10.747908732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:10.748110 containerd[1745]: time="2025-02-13T15:34:10.748074612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:10.762491 systemd[1]: Started cri-containerd-1b02553b2a9326772248695d0d21d13e293ffc7af2d67d44d2498a0283fd8511.scope - libcontainer container 1b02553b2a9326772248695d0d21d13e293ffc7af2d67d44d2498a0283fd8511. Feb 13 15:34:10.766961 systemd[1]: Started cri-containerd-d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702.scope - libcontainer container d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702. Feb 13 15:34:10.798024 containerd[1745]: time="2025-02-13T15:34:10.796856572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4phvs,Uid:55abb47f-24ad-4c8c-8fc8-d5b1807b48a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b02553b2a9326772248695d0d21d13e293ffc7af2d67d44d2498a0283fd8511\"" Feb 13 15:34:10.807215 containerd[1745]: time="2025-02-13T15:34:10.807158701Z" level=info msg="CreateContainer within sandbox \"1b02553b2a9326772248695d0d21d13e293ffc7af2d67d44d2498a0283fd8511\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:34:10.812523 containerd[1745]: time="2025-02-13T15:34:10.812149305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dsmb4,Uid:c1bdbe86-7278-4117-9aaa-ed59ed5c356b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\"" Feb 13 15:34:10.815424 containerd[1745]: time="2025-02-13T15:34:10.814677147Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:34:10.867282 containerd[1745]: time="2025-02-13T15:34:10.867221270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8wlb4,Uid:df370ecf-b3e8-4ee5-9f47-ce946fe78ceb,Namespace:kube-system,Attempt:0,}" Feb 13 15:34:10.874033 containerd[1745]: time="2025-02-13T15:34:10.873480395Z" level=info msg="CreateContainer within sandbox \"1b02553b2a9326772248695d0d21d13e293ffc7af2d67d44d2498a0283fd8511\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c182b44ae76487b3fc34d81cf83ae9195c53517c94060d2371e5f2fd5718379e\"" Feb 13 15:34:10.874820 containerd[1745]: time="2025-02-13T15:34:10.874746556Z" level=info msg="StartContainer for \"c182b44ae76487b3fc34d81cf83ae9195c53517c94060d2371e5f2fd5718379e\"" Feb 13 15:34:10.899504 systemd[1]: Started cri-containerd-c182b44ae76487b3fc34d81cf83ae9195c53517c94060d2371e5f2fd5718379e.scope - libcontainer container c182b44ae76487b3fc34d81cf83ae9195c53517c94060d2371e5f2fd5718379e. Feb 13 15:34:10.939410 containerd[1745]: time="2025-02-13T15:34:10.937087087Z" level=info msg="StartContainer for \"c182b44ae76487b3fc34d81cf83ae9195c53517c94060d2371e5f2fd5718379e\" returns successfully" Feb 13 15:34:10.945699 containerd[1745]: time="2025-02-13T15:34:10.945586134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:10.946596 containerd[1745]: time="2025-02-13T15:34:10.946479855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:10.946792 containerd[1745]: time="2025-02-13T15:34:10.946580855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:10.946792 containerd[1745]: time="2025-02-13T15:34:10.946688335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:10.969598 systemd[1]: Started cri-containerd-98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4.scope - libcontainer container 98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4. Feb 13 15:34:11.012009 containerd[1745]: time="2025-02-13T15:34:11.011939949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8wlb4,Uid:df370ecf-b3e8-4ee5-9f47-ce946fe78ceb,Namespace:kube-system,Attempt:0,} returns sandbox id \"98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4\"" Feb 13 15:34:14.973539 kubelet[3354]: I0213 15:34:14.973382 3354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4phvs" podStartSLOduration=4.973340912 podStartE2EDuration="4.973340912s" podCreationTimestamp="2025-02-13 15:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:11.634253458 +0000 UTC m=+8.187512395" watchObservedRunningTime="2025-02-13 15:34:14.973340912 +0000 UTC m=+11.526599849" Feb 13 15:34:15.522612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2674275618.mount: Deactivated successfully. Feb 13 15:34:17.773315 containerd[1745]: time="2025-02-13T15:34:17.772876139Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:17.779815 containerd[1745]: time="2025-02-13T15:34:17.779594625Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:34:17.783945 containerd[1745]: time="2025-02-13T15:34:17.783885188Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:17.785672 containerd[1745]: time="2025-02-13T15:34:17.785537270Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.970816443s" Feb 13 15:34:17.785672 containerd[1745]: time="2025-02-13T15:34:17.785580670Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:34:17.787402 containerd[1745]: time="2025-02-13T15:34:17.787353031Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:34:17.789432 containerd[1745]: time="2025-02-13T15:34:17.789188153Z" level=info msg="CreateContainer within sandbox \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 
15:34:17.826480 containerd[1745]: time="2025-02-13T15:34:17.826431103Z" level=info msg="CreateContainer within sandbox \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250\"" Feb 13 15:34:17.827459 containerd[1745]: time="2025-02-13T15:34:17.827368463Z" level=info msg="StartContainer for \"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250\"" Feb 13 15:34:17.859521 systemd[1]: Started cri-containerd-ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250.scope - libcontainer container ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250. Feb 13 15:34:17.890297 containerd[1745]: time="2025-02-13T15:34:17.890228434Z" level=info msg="StartContainer for \"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250\" returns successfully" Feb 13 15:34:17.900193 systemd[1]: cri-containerd-ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250.scope: Deactivated successfully. Feb 13 15:34:17.918924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250-rootfs.mount: Deactivated successfully. Feb 13 15:34:19.054788 containerd[1745]: time="2025-02-13T15:34:19.054697178Z" level=info msg="shim disconnected" id=ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250 namespace=k8s.io Feb 13 15:34:19.054788 containerd[1745]: time="2025-02-13T15:34:19.054778298Z" level=warning msg="cleaning up after shim disconnected" id=ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250 namespace=k8s.io Feb 13 15:34:19.054788 containerd[1745]: time="2025-02-13T15:34:19.054786738Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:19.644376 containerd[1745]: time="2025-02-13T15:34:19.644132775Z" level=info msg="CreateContainer within sandbox \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:34:19.695367 containerd[1745]: time="2025-02-13T15:34:19.695106256Z" level=info msg="CreateContainer within sandbox \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\"" Feb 13 15:34:19.696249 containerd[1745]: time="2025-02-13T15:34:19.695809057Z" level=info msg="StartContainer for \"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\"" Feb 13 15:34:19.740545 systemd[1]: Started cri-containerd-3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e.scope - libcontainer container 3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e. Feb 13 15:34:19.772760 containerd[1745]: time="2025-02-13T15:34:19.772697399Z" level=info msg="StartContainer for \"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\" returns successfully" Feb 13 15:34:19.782383 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:34:19.782738 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:34:19.782929 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:34:19.792019 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:34:19.795845 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Feb 13 15:34:19.796440 systemd[1]: cri-containerd-3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e.scope: Deactivated successfully. Feb 13 15:34:19.798452 containerd[1745]: time="2025-02-13T15:34:19.798371940Z" level=error msg="failed to handle container TaskExit event container_id:\"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\" id:\"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\" pid:3822 exited_at:{seconds:1739460859 nanos:786779091}" error="failed to stop container: unknown error after kill: runc did not terminate successfully: exit status 1: read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1bdbe86_7278_4117_9aaa_ed59ed5c356b.slice/cri-containerd-3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e.scope/cgroup.freeze: no such device\n: unknown" Feb 13 15:34:19.813194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:34:20.945028 containerd[1745]: time="2025-02-13T15:34:20.944969949Z" level=info msg="TaskExit event container_id:\"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\" id:\"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\" pid:3822 exited_at:{seconds:1739460859 nanos:786779091}" Feb 13 15:34:20.964712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e-rootfs.mount: Deactivated successfully. Feb 13 15:34:20.980518 containerd[1745]: time="2025-02-13T15:34:20.980287617Z" level=info msg="shim disconnected" id=3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e namespace=k8s.io Feb 13 15:34:20.980518 containerd[1745]: time="2025-02-13T15:34:20.980346777Z" level=warning msg="cleaning up after shim disconnected" id=3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e namespace=k8s.io Feb 13 15:34:20.980518 containerd[1745]: time="2025-02-13T15:34:20.980356737Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:21.650995 containerd[1745]: time="2025-02-13T15:34:21.650722560Z" level=info msg="CreateContainer within sandbox \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:34:21.712209 containerd[1745]: time="2025-02-13T15:34:21.712147090Z" level=info msg="CreateContainer within sandbox \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f\"" Feb 13 15:34:21.714251 containerd[1745]: time="2025-02-13T15:34:21.712842251Z" level=info msg="StartContainer for \"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f\"" Feb 13 15:34:21.749521 systemd[1]: Started cri-containerd-83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f.scope - libcontainer container 83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f. Feb 13 15:34:21.783545 systemd[1]: cri-containerd-83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f.scope: Deactivated successfully. 
Feb 13 15:34:21.786020 containerd[1745]: time="2025-02-13T15:34:21.785689190Z" level=info msg="StartContainer for \"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f\" returns successfully" Feb 13 15:34:21.821127 containerd[1745]: time="2025-02-13T15:34:21.820934138Z" level=info msg="shim disconnected" id=83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f namespace=k8s.io Feb 13 15:34:21.821127 containerd[1745]: time="2025-02-13T15:34:21.820996418Z" level=warning msg="cleaning up after shim disconnected" id=83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f namespace=k8s.io Feb 13 15:34:21.821127 containerd[1745]: time="2025-02-13T15:34:21.821005538Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:22.654797 containerd[1745]: time="2025-02-13T15:34:22.654602014Z" level=info msg="CreateContainer within sandbox \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:34:22.693362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f-rootfs.mount: Deactivated successfully. Feb 13 15:34:22.703663 containerd[1745]: time="2025-02-13T15:34:22.703556810Z" level=info msg="CreateContainer within sandbox \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0\"" Feb 13 15:34:22.704316 containerd[1745]: time="2025-02-13T15:34:22.704182170Z" level=info msg="StartContainer for \"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0\"" Feb 13 15:34:22.733505 systemd[1]: Started cri-containerd-f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0.scope - libcontainer container f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0. Feb 13 15:34:22.756063 systemd[1]: cri-containerd-f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0.scope: Deactivated successfully. Feb 13 15:34:22.762601 containerd[1745]: time="2025-02-13T15:34:22.762536451Z" level=info msg="StartContainer for \"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0\" returns successfully" Feb 13 15:34:22.782072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0-rootfs.mount: Deactivated successfully. Feb 13 15:34:22.799158 containerd[1745]: time="2025-02-13T15:34:22.799087837Z" level=info msg="shim disconnected" id=f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0 namespace=k8s.io Feb 13 15:34:22.799158 containerd[1745]: time="2025-02-13T15:34:22.799148197Z" level=warning msg="cleaning up after shim disconnected" id=f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0 namespace=k8s.io Feb 13 15:34:22.799158 containerd[1745]: time="2025-02-13T15:34:22.799156557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:34:23.666790 containerd[1745]: time="2025-02-13T15:34:23.666646763Z" level=info msg="CreateContainer within sandbox \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:34:23.695709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193707909.mount: Deactivated successfully. 
Feb 13 15:34:23.733156 containerd[1745]: time="2025-02-13T15:34:23.732575089Z" level=info msg="CreateContainer within sandbox \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\"" Feb 13 15:34:23.733837 containerd[1745]: time="2025-02-13T15:34:23.733790089Z" level=info msg="StartContainer for \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\"" Feb 13 15:34:23.768497 systemd[1]: Started cri-containerd-38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd.scope - libcontainer container 38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd. Feb 13 15:34:23.804354 containerd[1745]: time="2025-02-13T15:34:23.804303219Z" level=info msg="StartContainer for \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\" returns successfully" Feb 13 15:34:23.943951 kubelet[3354]: I0213 15:34:23.942744 3354 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 15:34:24.036618 kubelet[3354]: I0213 15:34:24.036559 3354 status_manager.go:890] "Failed to get status for pod" podUID="c43f24d2-5570-4239-bfeb-2b7a4c77b9f4" pod="kube-system/coredns-668d6bf9bc-99vmr" err="pods \"coredns-668d6bf9bc-99vmr\" is forbidden: User \"system:node:ci-4230.0.1-a-cf53dd3440\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.0.1-a-cf53dd3440' and this object" Feb 13 15:34:24.043871 kubelet[3354]: W0213 15:34:24.043774 3354 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230.0.1-a-cf53dd3440" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.0.1-a-cf53dd3440' and this object Feb 13 15:34:24.043871 kubelet[3354]: E0213 15:34:24.043835 3354 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4230.0.1-a-cf53dd3440\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.0.1-a-cf53dd3440' and this object" logger="UnhandledError" Feb 13 15:34:24.050125 systemd[1]: Created slice kubepods-burstable-podc43f24d2_5570_4239_bfeb_2b7a4c77b9f4.slice - libcontainer container kubepods-burstable-podc43f24d2_5570_4239_bfeb_2b7a4c77b9f4.slice. Feb 13 15:34:24.061779 systemd[1]: Created slice kubepods-burstable-pode3782a9e_bd44_4d2b_b590_70603968bdb3.slice - libcontainer container kubepods-burstable-pode3782a9e_bd44_4d2b_b590_70603968bdb3.slice. 
Feb 13 15:34:24.118144 kubelet[3354]: I0213 15:34:24.118002 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6b5l\" (UniqueName: \"kubernetes.io/projected/c43f24d2-5570-4239-bfeb-2b7a4c77b9f4-kube-api-access-k6b5l\") pod \"coredns-668d6bf9bc-99vmr\" (UID: \"c43f24d2-5570-4239-bfeb-2b7a4c77b9f4\") " pod="kube-system/coredns-668d6bf9bc-99vmr" Feb 13 15:34:24.118825 kubelet[3354]: I0213 15:34:24.118529 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3782a9e-bd44-4d2b-b590-70603968bdb3-config-volume\") pod \"coredns-668d6bf9bc-glpqp\" (UID: \"e3782a9e-bd44-4d2b-b590-70603968bdb3\") " pod="kube-system/coredns-668d6bf9bc-glpqp" Feb 13 15:34:24.118825 kubelet[3354]: I0213 15:34:24.118669 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c43f24d2-5570-4239-bfeb-2b7a4c77b9f4-config-volume\") pod \"coredns-668d6bf9bc-99vmr\" (UID: \"c43f24d2-5570-4239-bfeb-2b7a4c77b9f4\") " pod="kube-system/coredns-668d6bf9bc-99vmr" Feb 13 15:34:24.118825 kubelet[3354]: I0213 15:34:24.118698 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9278\" (UniqueName: \"kubernetes.io/projected/e3782a9e-bd44-4d2b-b590-70603968bdb3-kube-api-access-l9278\") pod \"coredns-668d6bf9bc-glpqp\" (UID: \"e3782a9e-bd44-4d2b-b590-70603968bdb3\") " pod="kube-system/coredns-668d6bf9bc-glpqp" Feb 13 15:34:24.441261 containerd[1745]: time="2025-02-13T15:34:24.440409423Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:24.445484 containerd[1745]: time="2025-02-13T15:34:24.445404187Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:34:24.452358 containerd[1745]: time="2025-02-13T15:34:24.452262071Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:24.454341 containerd[1745]: time="2025-02-13T15:34:24.454115113Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.666719402s" Feb 13 15:34:24.454341 containerd[1745]: time="2025-02-13T15:34:24.454163433Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:34:24.459566 containerd[1745]: time="2025-02-13T15:34:24.459484156Z" level=info msg="CreateContainer within sandbox \"98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:34:24.505412 containerd[1745]: 
time="2025-02-13T15:34:24.505358228Z" level=info msg="CreateContainer within sandbox \"98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\"" Feb 13 15:34:24.506234 containerd[1745]: time="2025-02-13T15:34:24.506191229Z" level=info msg="StartContainer for \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\"" Feb 13 15:34:24.533513 systemd[1]: Started cri-containerd-f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40.scope - libcontainer container f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40. Feb 13 15:34:24.563639 containerd[1745]: time="2025-02-13T15:34:24.563573349Z" level=info msg="StartContainer for \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\" returns successfully" Feb 13 15:34:24.716829 kubelet[3354]: I0213 15:34:24.716662 3354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dsmb4" podStartSLOduration=7.744046892 podStartE2EDuration="14.716642776s" podCreationTimestamp="2025-02-13 15:34:10 +0000 UTC" firstStartedPulling="2025-02-13 15:34:10.814089346 +0000 UTC m=+7.367348283" lastFinishedPulling="2025-02-13 15:34:17.78668527 +0000 UTC m=+14.339944167" observedRunningTime="2025-02-13 15:34:24.713428574 +0000 UTC m=+21.266687511" watchObservedRunningTime="2025-02-13 15:34:24.716642776 +0000 UTC m=+21.269901713" Feb 13 15:34:24.955956 containerd[1745]: time="2025-02-13T15:34:24.955551423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-99vmr,Uid:c43f24d2-5570-4239-bfeb-2b7a4c77b9f4,Namespace:kube-system,Attempt:0,}" Feb 13 15:34:24.968189 containerd[1745]: time="2025-02-13T15:34:24.968069272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glpqp,Uid:e3782a9e-bd44-4d2b-b590-70603968bdb3,Namespace:kube-system,Attempt:0,}" Feb 13 15:34:27.472506 systemd-networkd[1348]: cilium_host: Link UP Feb 13 15:34:27.472621 systemd-networkd[1348]: cilium_net: Link UP Feb 13 15:34:27.472624 systemd-networkd[1348]: cilium_net: Gained carrier Feb 13 15:34:27.472737 systemd-networkd[1348]: cilium_host: Gained carrier Feb 13 15:34:27.472857 systemd-networkd[1348]: cilium_host: Gained IPv6LL Feb 13 15:34:27.695623 systemd-networkd[1348]: cilium_vxlan: Link UP Feb 13 15:34:27.695631 systemd-networkd[1348]: cilium_vxlan: Gained carrier Feb 13 15:34:28.006287 kernel: NET: Registered PF_ALG protocol family Feb 13 15:34:28.165640 systemd-networkd[1348]: cilium_net: Gained IPv6LL Feb 13 15:34:28.893633 systemd-networkd[1348]: lxc_health: Link UP Feb 13 15:34:28.896505 systemd-networkd[1348]: lxc_health: Gained carrier Feb 13 15:34:29.051820 systemd-networkd[1348]: lxc2eb4414475f3: Link UP Feb 13 15:34:29.067453 kernel: eth0: renamed from tmpf3eb4 Feb 13 15:34:29.071874 systemd-networkd[1348]: lxc2eb4414475f3: Gained carrier Feb 13 15:34:29.107305 kernel: eth0: renamed from tmp92c4d Feb 13 15:34:29.114397 systemd-networkd[1348]: lxcfe105c77832d: Link UP Feb 13 15:34:29.115797 systemd-networkd[1348]: lxcfe105c77832d: Gained carrier Feb 13 15:34:29.317418 systemd-networkd[1348]: cilium_vxlan: Gained IPv6LL Feb 13 15:34:29.957498 systemd-networkd[1348]: lxc_health: Gained IPv6LL Feb 13 15:34:30.533431 systemd-networkd[1348]: lxc2eb4414475f3: Gained IPv6LL Feb 13 15:34:30.689780 kubelet[3354]: I0213 15:34:30.689683 3354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-operator-6c4d7847fc-8wlb4" podStartSLOduration=7.248644466 podStartE2EDuration="20.689666788s" podCreationTimestamp="2025-02-13 15:34:10 +0000 UTC" firstStartedPulling="2025-02-13 15:34:11.014349351 +0000 UTC m=+7.567608288" lastFinishedPulling="2025-02-13 15:34:24.455371673 +0000 UTC m=+21.008630610" observedRunningTime="2025-02-13 15:34:24.740750153 +0000 UTC m=+21.294009130" watchObservedRunningTime="2025-02-13 15:34:30.689666788 +0000 UTC m=+27.242925725" Feb 13 15:34:31.045516 systemd-networkd[1348]: lxcfe105c77832d: Gained IPv6LL Feb 13 15:34:33.101850 containerd[1745]: time="2025-02-13T15:34:33.101703695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:33.102305 containerd[1745]: time="2025-02-13T15:34:33.101824855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:33.102305 containerd[1745]: time="2025-02-13T15:34:33.101841495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:33.104495 containerd[1745]: time="2025-02-13T15:34:33.101935855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:33.139533 systemd[1]: Started cri-containerd-f3eb4e7bb0b251dedebda367178b6f87111d49d3b84105dfda39ef4eabbaacb1.scope - libcontainer container f3eb4e7bb0b251dedebda367178b6f87111d49d3b84105dfda39ef4eabbaacb1. Feb 13 15:34:33.153300 containerd[1745]: time="2025-02-13T15:34:33.153132336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:34:33.153300 containerd[1745]: time="2025-02-13T15:34:33.153246136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:34:33.153823 containerd[1745]: time="2025-02-13T15:34:33.153259936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:33.153921 containerd[1745]: time="2025-02-13T15:34:33.153883177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:34:33.178549 systemd[1]: Started cri-containerd-92c4d0f2e4bd3a5fb880c8edc41ad851ef2d12c16d2fb8b788d9bad57776b70e.scope - libcontainer container 92c4d0f2e4bd3a5fb880c8edc41ad851ef2d12c16d2fb8b788d9bad57776b70e. 
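[Note] The pod_startup_latency_tracker entries here and above are internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling − firstStartedPulling). For cilium-operator: 20.689666788s − 13.441022322s = 7.248644466s, matching the log exactly. A quick check in Go; the SLO-excludes-pull-time reading is inferred from these field values, not quoted from kubelet documentation.

```go
// Re-deriving the cilium-operator-6c4d7847fc-8wlb4 numbers from the
// timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	p := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := p("2025-02-13 15:34:10 +0000 UTC")             // podCreationTimestamp
	firstPull := p("2025-02-13 15:34:11.014349351 +0000 UTC") // firstStartedPulling
	lastPull := p("2025-02-13 15:34:24.455371673 +0000 UTC")  // lastFinishedPulling
	running := p("2025-02-13 15:34:30.689666788 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // 20.689666788s, as logged
	slo := e2e - lastPull.Sub(firstPull) // 7.248644466s, as logged
	fmt.Println(e2e, slo)
}
```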
Feb 13 15:34:33.223322 containerd[1745]: time="2025-02-13T15:34:33.222511432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-99vmr,Uid:c43f24d2-5570-4239-bfeb-2b7a4c77b9f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3eb4e7bb0b251dedebda367178b6f87111d49d3b84105dfda39ef4eabbaacb1\"" Feb 13 15:34:33.227543 containerd[1745]: time="2025-02-13T15:34:33.227488636Z" level=info msg="CreateContainer within sandbox \"f3eb4e7bb0b251dedebda367178b6f87111d49d3b84105dfda39ef4eabbaacb1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:34:33.251325 containerd[1745]: time="2025-02-13T15:34:33.250207495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glpqp,Uid:e3782a9e-bd44-4d2b-b590-70603968bdb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"92c4d0f2e4bd3a5fb880c8edc41ad851ef2d12c16d2fb8b788d9bad57776b70e\"" Feb 13 15:34:33.255417 containerd[1745]: time="2025-02-13T15:34:33.255310139Z" level=info msg="CreateContainer within sandbox \"92c4d0f2e4bd3a5fb880c8edc41ad851ef2d12c16d2fb8b788d9bad57776b70e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:34:33.298454 containerd[1745]: time="2025-02-13T15:34:33.298339893Z" level=info msg="CreateContainer within sandbox \"f3eb4e7bb0b251dedebda367178b6f87111d49d3b84105dfda39ef4eabbaacb1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fc7d73b306c74400d415e8e1874bbe150460c0a3c50d1ea225402a09f6194c0\"" Feb 13 15:34:33.300667 containerd[1745]: time="2025-02-13T15:34:33.300617975Z" level=info msg="StartContainer for \"0fc7d73b306c74400d415e8e1874bbe150460c0a3c50d1ea225402a09f6194c0\"" Feb 13 15:34:33.325851 containerd[1745]: time="2025-02-13T15:34:33.325782996Z" level=info msg="CreateContainer within sandbox \"92c4d0f2e4bd3a5fb880c8edc41ad851ef2d12c16d2fb8b788d9bad57776b70e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fddaded2f9c85a11ff70ae15e32d0c4a2c0b0d49de5a74ceca4cedb8cebd2678\"" Feb 13 15:34:33.326870 containerd[1745]: time="2025-02-13T15:34:33.326808756Z" level=info msg="StartContainer for \"fddaded2f9c85a11ff70ae15e32d0c4a2c0b0d49de5a74ceca4cedb8cebd2678\"" Feb 13 15:34:33.328520 systemd[1]: Started cri-containerd-0fc7d73b306c74400d415e8e1874bbe150460c0a3c50d1ea225402a09f6194c0.scope - libcontainer container 0fc7d73b306c74400d415e8e1874bbe150460c0a3c50d1ea225402a09f6194c0. Feb 13 15:34:33.357709 systemd[1]: Started cri-containerd-fddaded2f9c85a11ff70ae15e32d0c4a2c0b0d49de5a74ceca4cedb8cebd2678.scope - libcontainer container fddaded2f9c85a11ff70ae15e32d0c4a2c0b0d49de5a74ceca4cedb8cebd2678. 
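[Note] The RunPodSandbox → CreateContainer → StartContainer sequence for both CoreDNS pods is the standard CRI ordering kubelet drives through containerd. A minimal sketch of issuing the same calls by hand; the socket path is containerd's conventional default and the configs are pared-down placeholders, so treat this as illustrative wiring rather than what kubelet actually sends:

```go
// Sketch: the CRI call order visible above. Kubelet fills in far more
// (image, DNS, ports, mounts, security context) than shown here.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "coredns-668d6bf9bc-99vmr", // names taken from the log
			Namespace: "kube-system",
			Uid:       "c43f24d2-5570-4239-bfeb-2b7a4c77b9f4",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"}},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	fmt.Println("started:", ctr.ContainerId, err)
}
```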
Feb 13 15:34:33.370951 containerd[1745]: time="2025-02-13T15:34:33.370894152Z" level=info msg="StartContainer for \"0fc7d73b306c74400d415e8e1874bbe150460c0a3c50d1ea225402a09f6194c0\" returns successfully" Feb 13 15:34:33.399992 containerd[1745]: time="2025-02-13T15:34:33.399940455Z" level=info msg="StartContainer for \"fddaded2f9c85a11ff70ae15e32d0c4a2c0b0d49de5a74ceca4cedb8cebd2678\" returns successfully" Feb 13 15:34:33.762553 kubelet[3354]: I0213 15:34:33.762372 3354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-glpqp" podStartSLOduration=23.762349388 podStartE2EDuration="23.762349388s" podCreationTimestamp="2025-02-13 15:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:33.722891236 +0000 UTC m=+30.276150173" watchObservedRunningTime="2025-02-13 15:34:33.762349388 +0000 UTC m=+30.315608325" Feb 13 15:34:33.813551 kubelet[3354]: I0213 15:34:33.813465 3354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-99vmr" podStartSLOduration=23.813441589 podStartE2EDuration="23.813441589s" podCreationTimestamp="2025-02-13 15:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:34:33.763740789 +0000 UTC m=+30.316999726" watchObservedRunningTime="2025-02-13 15:34:33.813441589 +0000 UTC m=+30.366700526" Feb 13 15:34:38.696785 kubelet[3354]: I0213 15:34:38.696724 3354 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:36:43.961712 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:50540.service - OpenSSH per-connection server daemon (10.200.16.10:50540). Feb 13 15:36:44.386671 sshd[4760]: Accepted publickey for core from 10.200.16.10 port 50540 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:36:44.388152 sshd-session[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:44.392497 systemd-logind[1726]: New session 10 of user core. Feb 13 15:36:44.396473 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:36:44.759065 sshd[4762]: Connection closed by 10.200.16.10 port 50540 Feb 13 15:36:44.759422 sshd-session[4760]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:44.764138 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:50540.service: Deactivated successfully. Feb 13 15:36:44.766413 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:36:44.767339 systemd-logind[1726]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:36:44.768698 systemd-logind[1726]: Removed session 10. Feb 13 15:36:49.848594 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:43612.service - OpenSSH per-connection server daemon (10.200.16.10:43612). Feb 13 15:36:50.265977 sshd[4775]: Accepted publickey for core from 10.200.16.10 port 43612 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:36:50.267532 sshd-session[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:50.274472 systemd-logind[1726]: New session 11 of user core. Feb 13 15:36:50.278486 systemd[1]: Started session-11.scope - Session 11 of User core. 
Feb 13 15:36:50.651419 sshd[4777]: Connection closed by 10.200.16.10 port 43612 Feb 13 15:36:50.652355 sshd-session[4775]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:50.655740 systemd-logind[1726]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:36:50.656037 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:43612.service: Deactivated successfully. Feb 13 15:36:50.658727 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:36:50.661988 systemd-logind[1726]: Removed session 11. Feb 13 15:36:55.729207 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:43624.service - OpenSSH per-connection server daemon (10.200.16.10:43624). Feb 13 15:36:56.155679 sshd[4790]: Accepted publickey for core from 10.200.16.10 port 43624 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:36:56.156880 sshd-session[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:56.161725 systemd-logind[1726]: New session 12 of user core. Feb 13 15:36:56.167484 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:36:56.537474 sshd[4792]: Connection closed by 10.200.16.10 port 43624 Feb 13 15:36:56.537899 sshd-session[4790]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:56.542487 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:43624.service: Deactivated successfully. Feb 13 15:36:56.544743 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:36:56.545743 systemd-logind[1726]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:36:56.548904 systemd-logind[1726]: Removed session 12. Feb 13 15:37:01.619623 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:56836.service - OpenSSH per-connection server daemon (10.200.16.10:56836). Feb 13 15:37:02.042304 sshd[4805]: Accepted publickey for core from 10.200.16.10 port 56836 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:02.043746 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:02.050853 systemd-logind[1726]: New session 13 of user core. Feb 13 15:37:02.058497 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:37:02.451934 sshd[4807]: Connection closed by 10.200.16.10 port 56836 Feb 13 15:37:02.450863 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:02.454389 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:56836.service: Deactivated successfully. Feb 13 15:37:02.456234 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:37:02.458950 systemd-logind[1726]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:37:02.460464 systemd-logind[1726]: Removed session 13. Feb 13 15:37:02.539592 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:56840.service - OpenSSH per-connection server daemon (10.200.16.10:56840). Feb 13 15:37:02.990342 sshd[4820]: Accepted publickey for core from 10.200.16.10 port 56840 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:02.991837 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:02.997460 systemd-logind[1726]: New session 14 of user core. Feb 13 15:37:03.005595 systemd[1]: Started session-14.scope - Session 14 of User core. 
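[Note] From here until the cilium teardown the transcript repeats the same cycle per SSH login: sshd accepts the publickey, PAM opens the session, systemd-logind registers it, a session-N.scope starts, and the chain unwinds on disconnect. To get wall-clock session durations out of such a transcript, an extraction along these lines works; the regexes are tied to the exact logind phrasing above, and since every record falls on the same day (Feb 13), bare clock times suffice.

```go
// Sketch: pairing logind's "New session N" / "Removed session N" records
// from stdin to print per-session durations.
package main

import (
	"fmt"
	"io"
	"os"
	"regexp"
	"time"
)

var (
	openRe  = regexp.MustCompile(`(\d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: New session (\d+) `)
	closeRe = regexp.MustCompile(`(\d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

func clock(hms string) (time.Time, error) {
	return time.Parse("15:04:05.999999999", hms)
}

func main() {
	data, err := io.ReadAll(os.Stdin)
	if err != nil {
		panic(err)
	}
	text := string(data)
	opened := map[string]time.Time{}
	for _, m := range openRe.FindAllStringSubmatch(text, -1) {
		if t, err := clock(m[1]); err == nil {
			opened[m[2]] = t
		}
	}
	for _, m := range closeRe.FindAllStringSubmatch(text, -1) {
		t, err := clock(m[1])
		start, ok := opened[m[2]]
		if err == nil && ok {
			fmt.Printf("session %s lasted %s\n", m[2], t.Sub(start))
		}
	}
}
```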
Feb 13 15:37:03.423143 sshd[4822]: Connection closed by 10.200.16.10 port 56840 Feb 13 15:37:03.422103 sshd-session[4820]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:03.426533 systemd-logind[1726]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:37:03.426545 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:56840.service: Deactivated successfully. Feb 13 15:37:03.430210 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:37:03.431762 systemd-logind[1726]: Removed session 14. Feb 13 15:37:03.500102 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:56844.service - OpenSSH per-connection server daemon (10.200.16.10:56844). Feb 13 15:37:03.922888 sshd[4832]: Accepted publickey for core from 10.200.16.10 port 56844 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:03.924454 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:03.929783 systemd-logind[1726]: New session 15 of user core. Feb 13 15:37:03.934485 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:37:04.310318 sshd[4836]: Connection closed by 10.200.16.10 port 56844 Feb 13 15:37:04.311471 sshd-session[4832]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:04.314675 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:56844.service: Deactivated successfully. Feb 13 15:37:04.317237 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:37:04.319481 systemd-logind[1726]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:37:04.320927 systemd-logind[1726]: Removed session 15. Feb 13 15:37:09.390607 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:43312.service - OpenSSH per-connection server daemon (10.200.16.10:43312). Feb 13 15:37:09.803704 sshd[4848]: Accepted publickey for core from 10.200.16.10 port 43312 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:09.805134 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:09.810465 systemd-logind[1726]: New session 16 of user core. Feb 13 15:37:09.818677 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:37:10.189844 sshd[4850]: Connection closed by 10.200.16.10 port 43312 Feb 13 15:37:10.189247 sshd-session[4848]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:10.193648 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:43312.service: Deactivated successfully. Feb 13 15:37:10.196328 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:37:10.197598 systemd-logind[1726]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:37:10.198553 systemd-logind[1726]: Removed session 16. Feb 13 15:37:15.272725 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:43326.service - OpenSSH per-connection server daemon (10.200.16.10:43326). Feb 13 15:37:15.689758 sshd[4864]: Accepted publickey for core from 10.200.16.10 port 43326 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:15.691755 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:15.696765 systemd-logind[1726]: New session 17 of user core. Feb 13 15:37:15.700450 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 15:37:16.077350 sshd[4866]: Connection closed by 10.200.16.10 port 43326 Feb 13 15:37:16.077221 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:16.080518 systemd-logind[1726]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:37:16.080722 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:43326.service: Deactivated successfully. Feb 13 15:37:16.083418 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:37:16.085691 systemd-logind[1726]: Removed session 17. Feb 13 15:37:16.167604 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:43338.service - OpenSSH per-connection server daemon (10.200.16.10:43338). Feb 13 15:37:16.621670 sshd[4878]: Accepted publickey for core from 10.200.16.10 port 43338 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:16.623102 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:16.627946 systemd-logind[1726]: New session 18 of user core. Feb 13 15:37:16.634476 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:37:17.050368 sshd[4880]: Connection closed by 10.200.16.10 port 43338 Feb 13 15:37:17.051224 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:17.054727 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:43338.service: Deactivated successfully. Feb 13 15:37:17.057974 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:37:17.059100 systemd-logind[1726]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:37:17.060060 systemd-logind[1726]: Removed session 18. Feb 13 15:37:17.136591 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:43348.service - OpenSSH per-connection server daemon (10.200.16.10:43348). Feb 13 15:37:17.585012 sshd[4890]: Accepted publickey for core from 10.200.16.10 port 43348 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:17.586491 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:17.592651 systemd-logind[1726]: New session 19 of user core. Feb 13 15:37:17.596446 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:37:18.597871 sshd[4892]: Connection closed by 10.200.16.10 port 43348 Feb 13 15:37:18.598566 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:18.602490 systemd-logind[1726]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:37:18.603400 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:43348.service: Deactivated successfully. Feb 13 15:37:18.607129 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:37:18.608325 systemd-logind[1726]: Removed session 19. Feb 13 15:37:18.681632 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:43360.service - OpenSSH per-connection server daemon (10.200.16.10:43360). Feb 13 15:37:19.097314 sshd[4909]: Accepted publickey for core from 10.200.16.10 port 43360 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:19.098850 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:19.104425 systemd-logind[1726]: New session 20 of user core. Feb 13 15:37:19.108709 systemd[1]: Started session-20.scope - Session 20 of User core. 
Feb 13 15:37:19.598203 sshd[4911]: Connection closed by 10.200.16.10 port 43360 Feb 13 15:37:19.598899 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:19.602432 systemd-logind[1726]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:37:19.604736 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:43360.service: Deactivated successfully. Feb 13 15:37:19.608356 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:37:19.611051 systemd-logind[1726]: Removed session 20. Feb 13 15:37:19.683585 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:45334.service - OpenSSH per-connection server daemon (10.200.16.10:45334). Feb 13 15:37:20.096376 sshd[4921]: Accepted publickey for core from 10.200.16.10 port 45334 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:20.097752 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:20.101939 systemd-logind[1726]: New session 21 of user core. Feb 13 15:37:20.109493 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:37:20.465180 sshd[4923]: Connection closed by 10.200.16.10 port 45334 Feb 13 15:37:20.467797 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:20.471640 systemd-logind[1726]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:37:20.472148 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:45334.service: Deactivated successfully. Feb 13 15:37:20.475292 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:37:20.477306 systemd-logind[1726]: Removed session 21. Feb 13 15:37:25.564761 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:45336.service - OpenSSH per-connection server daemon (10.200.16.10:45336). Feb 13 15:37:26.028098 sshd[4936]: Accepted publickey for core from 10.200.16.10 port 45336 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:26.029706 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:26.034335 systemd-logind[1726]: New session 22 of user core. Feb 13 15:37:26.044582 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:37:26.409000 sshd[4940]: Connection closed by 10.200.16.10 port 45336 Feb 13 15:37:26.409577 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:26.413704 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:45336.service: Deactivated successfully. Feb 13 15:37:26.416129 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:37:26.417176 systemd-logind[1726]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:37:26.418254 systemd-logind[1726]: Removed session 22. Feb 13 15:37:31.495569 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:60886.service - OpenSSH per-connection server daemon (10.200.16.10:60886). Feb 13 15:37:31.945754 sshd[4953]: Accepted publickey for core from 10.200.16.10 port 60886 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:31.947297 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:31.956715 systemd-logind[1726]: New session 23 of user core. Feb 13 15:37:31.962505 systemd[1]: Started session-23.scope - Session 23 of User core. 
Feb 13 15:37:32.334633 sshd[4958]: Connection closed by 10.200.16.10 port 60886 Feb 13 15:37:32.336183 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:32.340801 systemd-logind[1726]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:37:32.341516 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:60886.service: Deactivated successfully. Feb 13 15:37:32.344582 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:37:32.346121 systemd-logind[1726]: Removed session 23. Feb 13 15:37:37.424564 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:60896.service - OpenSSH per-connection server daemon (10.200.16.10:60896). Feb 13 15:37:37.834777 sshd[4969]: Accepted publickey for core from 10.200.16.10 port 60896 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:37.836129 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:37.841449 systemd-logind[1726]: New session 24 of user core. Feb 13 15:37:37.850474 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:37:38.199291 sshd[4971]: Connection closed by 10.200.16.10 port 60896 Feb 13 15:37:38.199990 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:38.203864 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:60896.service: Deactivated successfully. Feb 13 15:37:38.205945 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:37:38.206732 systemd-logind[1726]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:37:38.208948 systemd-logind[1726]: Removed session 24. Feb 13 15:37:38.286569 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:60908.service - OpenSSH per-connection server daemon (10.200.16.10:60908). Feb 13 15:37:38.696219 sshd[4983]: Accepted publickey for core from 10.200.16.10 port 60908 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI Feb 13 15:37:38.697581 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:38.702991 systemd-logind[1726]: New session 25 of user core. Feb 13 15:37:38.710495 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:37:41.189633 containerd[1745]: time="2025-02-13T15:37:41.189582300Z" level=info msg="StopContainer for \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\" with timeout 30 (s)" Feb 13 15:37:41.190184 containerd[1745]: time="2025-02-13T15:37:41.189946420Z" level=info msg="Stop container \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\" with signal terminated" Feb 13 15:37:41.233122 containerd[1745]: time="2025-02-13T15:37:41.233066042Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:37:41.253507 containerd[1745]: time="2025-02-13T15:37:41.253245532Z" level=info msg="StopContainer for \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\" with timeout 2 (s)" Feb 13 15:37:41.253940 containerd[1745]: time="2025-02-13T15:37:41.253884492Z" level=info msg="Stop container \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\" with signal terminated" Feb 13 15:37:41.260326 systemd[1]: cri-containerd-f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40.scope: Deactivated successfully. 
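[Note] Below, "StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" is CRI's graceful-stop contract: SIGTERM immediately, SIGKILL if the container outlives the timeout (cilium-agent gets a shorter 2 s window). A sketch of the teardown sequence this section goes on to log, reusing the client wiring from the earlier CRI sketch; NotFound is tolerated so cleanup stays idempotent, which is exactly why the ContainerStatus NotFound errors near the end of the log are harmless.

```go
// Sketch: graceful stop, then sandbox teardown (the TearDown network /
// StopPodSandbox records below), treating NotFound as already-done.
package sketch

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func teardown(ctx context.Context, rt runtimeapi.RuntimeServiceClient, ctrID, sandboxID string) error {
	// SIGTERM now; the runtime escalates to SIGKILL after 30 seconds.
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: ctrID,
		Timeout:     30,
	}); err != nil && status.Code(err) != codes.NotFound {
		return err
	}
	// Stop then remove the sandbox; both calls are safe to repeat.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil &&
		status.Code(err) != codes.NotFound {
		return err
	}
	_, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID})
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}
```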
Feb 13 15:37:41.271079 systemd-networkd[1348]: lxc_health: Link DOWN Feb 13 15:37:41.271454 systemd-networkd[1348]: lxc_health: Lost carrier Feb 13 15:37:41.292926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40-rootfs.mount: Deactivated successfully. Feb 13 15:37:41.295704 systemd[1]: cri-containerd-38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd.scope: Deactivated successfully. Feb 13 15:37:41.296278 systemd[1]: cri-containerd-38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd.scope: Consumed 7.049s CPU time, 124.7M memory peak, 144K read from disk, 12.9M written to disk. Feb 13 15:37:41.318166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd-rootfs.mount: Deactivated successfully. Feb 13 15:37:41.349872 containerd[1745]: time="2025-02-13T15:37:41.349705820Z" level=info msg="shim disconnected" id=38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd namespace=k8s.io Feb 13 15:37:41.349872 containerd[1745]: time="2025-02-13T15:37:41.349865020Z" level=warning msg="cleaning up after shim disconnected" id=38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd namespace=k8s.io Feb 13 15:37:41.349872 containerd[1745]: time="2025-02-13T15:37:41.349877900Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:41.350363 containerd[1745]: time="2025-02-13T15:37:41.350315621Z" level=info msg="shim disconnected" id=f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40 namespace=k8s.io Feb 13 15:37:41.351379 containerd[1745]: time="2025-02-13T15:37:41.350415141Z" level=warning msg="cleaning up after shim disconnected" id=f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40 namespace=k8s.io Feb 13 15:37:41.351379 containerd[1745]: time="2025-02-13T15:37:41.350426061Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:41.371112 containerd[1745]: time="2025-02-13T15:37:41.371063751Z" level=info msg="StopContainer for \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\" returns successfully" Feb 13 15:37:41.372055 containerd[1745]: time="2025-02-13T15:37:41.372026391Z" level=info msg="StopPodSandbox for \"98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4\"" Feb 13 15:37:41.372326 containerd[1745]: time="2025-02-13T15:37:41.372307512Z" level=info msg="Container to stop \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:37:41.374727 containerd[1745]: time="2025-02-13T15:37:41.374693873Z" level=info msg="StopContainer for \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\" returns successfully" Feb 13 15:37:41.375459 containerd[1745]: time="2025-02-13T15:37:41.375426913Z" level=info msg="StopPodSandbox for \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\"" Feb 13 15:37:41.375525 containerd[1745]: time="2025-02-13T15:37:41.375462473Z" level=info msg="Container to stop \"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:37:41.375525 containerd[1745]: time="2025-02-13T15:37:41.375474593Z" level=info msg="Container to stop \"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 
15:37:41.375525 containerd[1745]: time="2025-02-13T15:37:41.375484593Z" level=info msg="Container to stop \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:37:41.375525 containerd[1745]: time="2025-02-13T15:37:41.375495313Z" level=info msg="Container to stop \"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:37:41.375525 containerd[1745]: time="2025-02-13T15:37:41.375504273Z" level=info msg="Container to stop \"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:37:41.376023 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4-shm.mount: Deactivated successfully. Feb 13 15:37:41.380911 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702-shm.mount: Deactivated successfully. Feb 13 15:37:41.384428 systemd[1]: cri-containerd-98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4.scope: Deactivated successfully. Feb 13 15:37:41.388803 systemd[1]: cri-containerd-d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702.scope: Deactivated successfully. Feb 13 15:37:41.428614 containerd[1745]: time="2025-02-13T15:37:41.428491420Z" level=info msg="shim disconnected" id=d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702 namespace=k8s.io Feb 13 15:37:41.428614 containerd[1745]: time="2025-02-13T15:37:41.428553420Z" level=warning msg="cleaning up after shim disconnected" id=d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702 namespace=k8s.io Feb 13 15:37:41.428614 containerd[1745]: time="2025-02-13T15:37:41.428563060Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:41.429281 containerd[1745]: time="2025-02-13T15:37:41.429136780Z" level=info msg="shim disconnected" id=98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4 namespace=k8s.io Feb 13 15:37:41.429281 containerd[1745]: time="2025-02-13T15:37:41.429178940Z" level=warning msg="cleaning up after shim disconnected" id=98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4 namespace=k8s.io Feb 13 15:37:41.429281 containerd[1745]: time="2025-02-13T15:37:41.429188420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:37:41.445067 containerd[1745]: time="2025-02-13T15:37:41.443884067Z" level=info msg="TearDown network for sandbox \"98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4\" successfully" Feb 13 15:37:41.445067 containerd[1745]: time="2025-02-13T15:37:41.443921907Z" level=info msg="StopPodSandbox for \"98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4\" returns successfully" Feb 13 15:37:41.446796 containerd[1745]: time="2025-02-13T15:37:41.446707909Z" level=info msg="TearDown network for sandbox \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" successfully" Feb 13 15:37:41.446796 containerd[1745]: time="2025-02-13T15:37:41.446744309Z" level=info msg="StopPodSandbox for \"d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702\" returns successfully" Feb 13 15:37:41.528443 kubelet[3354]: I0213 15:37:41.528399 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksgg6\" (UniqueName: 
\"kubernetes.io/projected/df370ecf-b3e8-4ee5-9f47-ce946fe78ceb-kube-api-access-ksgg6\") pod \"df370ecf-b3e8-4ee5-9f47-ce946fe78ceb\" (UID: \"df370ecf-b3e8-4ee5-9f47-ce946fe78ceb\") " Feb 13 15:37:41.529214 kubelet[3354]: I0213 15:37:41.528487 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df370ecf-b3e8-4ee5-9f47-ce946fe78ceb-cilium-config-path\") pod \"df370ecf-b3e8-4ee5-9f47-ce946fe78ceb\" (UID: \"df370ecf-b3e8-4ee5-9f47-ce946fe78ceb\") " Feb 13 15:37:41.530587 kubelet[3354]: I0213 15:37:41.530424 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df370ecf-b3e8-4ee5-9f47-ce946fe78ceb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "df370ecf-b3e8-4ee5-9f47-ce946fe78ceb" (UID: "df370ecf-b3e8-4ee5-9f47-ce946fe78ceb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 15:37:41.531468 kubelet[3354]: I0213 15:37:41.531428 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df370ecf-b3e8-4ee5-9f47-ce946fe78ceb-kube-api-access-ksgg6" (OuterVolumeSpecName: "kube-api-access-ksgg6") pod "df370ecf-b3e8-4ee5-9f47-ce946fe78ceb" (UID: "df370ecf-b3e8-4ee5-9f47-ce946fe78ceb"). InnerVolumeSpecName "kube-api-access-ksgg6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 15:37:41.586148 systemd[1]: Removed slice kubepods-besteffort-poddf370ecf_b3e8_4ee5_9f47_ce946fe78ceb.slice - libcontainer container kubepods-besteffort-poddf370ecf_b3e8_4ee5_9f47_ce946fe78ceb.slice. Feb 13 15:37:41.628696 kubelet[3354]: I0213 15:37:41.628644 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-etc-cni-netd\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.628696 kubelet[3354]: I0213 15:37:41.628700 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-host-proc-sys-kernel\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629386 kubelet[3354]: I0213 15:37:41.628725 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-clustermesh-secrets\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629386 kubelet[3354]: I0213 15:37:41.628740 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-lib-modules\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629386 kubelet[3354]: I0213 15:37:41.628768 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vngq\" (UniqueName: \"kubernetes.io/projected/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-kube-api-access-8vngq\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629386 kubelet[3354]: I0213 15:37:41.628782 3354 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-cgroup\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629386 kubelet[3354]: I0213 15:37:41.628799 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-run\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629386 kubelet[3354]: I0213 15:37:41.628796 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:37:41.629581 kubelet[3354]: I0213 15:37:41.628839 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:37:41.629581 kubelet[3354]: I0213 15:37:41.628813 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-host-proc-sys-net\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629581 kubelet[3354]: I0213 15:37:41.628861 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:37:41.629581 kubelet[3354]: I0213 15:37:41.628883 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-hostproc\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629581 kubelet[3354]: I0213 15:37:41.628910 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-hubble-tls\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629687 kubelet[3354]: I0213 15:37:41.628925 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cni-path\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629687 kubelet[3354]: I0213 15:37:41.628939 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-xtables-lock\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629687 kubelet[3354]: I0213 15:37:41.628958 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-config-path\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629687 kubelet[3354]: I0213 15:37:41.628975 3354 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-bpf-maps\") pod \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\" (UID: \"c1bdbe86-7278-4117-9aaa-ed59ed5c356b\") " Feb 13 15:37:41.629687 kubelet[3354]: I0213 15:37:41.629017 3354 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-etc-cni-netd\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.629687 kubelet[3354]: I0213 15:37:41.629027 3354 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-host-proc-sys-kernel\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.629687 kubelet[3354]: I0213 15:37:41.629036 3354 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df370ecf-b3e8-4ee5-9f47-ce946fe78ceb-cilium-config-path\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.629824 kubelet[3354]: I0213 15:37:41.629044 3354 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ksgg6\" (UniqueName: \"kubernetes.io/projected/df370ecf-b3e8-4ee5-9f47-ce946fe78ceb-kube-api-access-ksgg6\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.629824 kubelet[3354]: I0213 15:37:41.629054 3354 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-host-proc-sys-net\") on node 
\"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.629824 kubelet[3354]: I0213 15:37:41.629077 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:37:41.629824 kubelet[3354]: I0213 15:37:41.629093 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-hostproc" (OuterVolumeSpecName: "hostproc") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:37:41.630786 kubelet[3354]: I0213 15:37:41.630499 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:37:41.630786 kubelet[3354]: I0213 15:37:41.630627 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cni-path" (OuterVolumeSpecName: "cni-path") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:37:41.630786 kubelet[3354]: I0213 15:37:41.630647 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:37:41.631209 kubelet[3354]: I0213 15:37:41.631094 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:37:41.631209 kubelet[3354]: I0213 15:37:41.631139 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:37:41.632475 kubelet[3354]: I0213 15:37:41.632376 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 15:37:41.633995 kubelet[3354]: I0213 15:37:41.633948 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 15:37:41.634460 kubelet[3354]: I0213 15:37:41.634436 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 15:37:41.635649 kubelet[3354]: I0213 15:37:41.635613 3354 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-kube-api-access-8vngq" (OuterVolumeSpecName: "kube-api-access-8vngq") pod "c1bdbe86-7278-4117-9aaa-ed59ed5c356b" (UID: "c1bdbe86-7278-4117-9aaa-ed59ed5c356b"). InnerVolumeSpecName "kube-api-access-8vngq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 15:37:41.730082 kubelet[3354]: I0213 15:37:41.730030 3354 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-clustermesh-secrets\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.730082 kubelet[3354]: I0213 15:37:41.730076 3354 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-lib-modules\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.730082 kubelet[3354]: I0213 15:37:41.730091 3354 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8vngq\" (UniqueName: \"kubernetes.io/projected/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-kube-api-access-8vngq\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.730350 kubelet[3354]: I0213 15:37:41.730102 3354 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-cgroup\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.730350 kubelet[3354]: I0213 15:37:41.730119 3354 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-run\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.730350 kubelet[3354]: I0213 15:37:41.730130 3354 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-hubble-tls\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.730350 kubelet[3354]: I0213 15:37:41.730142 3354 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cni-path\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.730350 kubelet[3354]: I0213 15:37:41.730150 3354 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-hostproc\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.730350 kubelet[3354]: I0213 15:37:41.730162 3354 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-xtables-lock\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.730350 kubelet[3354]: I0213 15:37:41.730177 3354 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-cilium-config-path\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:41.730350 kubelet[3354]: I0213 15:37:41.730188 3354 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1bdbe86-7278-4117-9aaa-ed59ed5c356b-bpf-maps\") on node \"ci-4230.0.1-a-cf53dd3440\" DevicePath \"\"" Feb 13 15:37:42.055427 kubelet[3354]: I0213 15:37:42.054974 3354 scope.go:117] "RemoveContainer" containerID="f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40" Feb 13 15:37:42.059549 containerd[1745]: time="2025-02-13T15:37:42.059074455Z" level=info msg="RemoveContainer for \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\"" Feb 13 15:37:42.078188 systemd[1]: Removed slice kubepods-burstable-podc1bdbe86_7278_4117_9aaa_ed59ed5c356b.slice - libcontainer container kubepods-burstable-podc1bdbe86_7278_4117_9aaa_ed59ed5c356b.slice. Feb 13 15:37:42.078321 systemd[1]: kubepods-burstable-podc1bdbe86_7278_4117_9aaa_ed59ed5c356b.slice: Consumed 7.128s CPU time, 125.2M memory peak, 144K read from disk, 12.9M written to disk. Feb 13 15:37:42.080361 containerd[1745]: time="2025-02-13T15:37:42.080258666Z" level=info msg="RemoveContainer for \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\" returns successfully" Feb 13 15:37:42.080929 kubelet[3354]: I0213 15:37:42.080798 3354 scope.go:117] "RemoveContainer" containerID="f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40" Feb 13 15:37:42.081326 containerd[1745]: time="2025-02-13T15:37:42.081221826Z" level=error msg="ContainerStatus for \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\": not found" Feb 13 15:37:42.082588 kubelet[3354]: E0213 15:37:42.082403 3354 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\": not found" containerID="f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40" Feb 13 15:37:42.082588 kubelet[3354]: I0213 15:37:42.082484 3354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40"} err="failed to get container status \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1d45dbfad3eeba28818d2c3835218d1caa2f69de6f75e54399327e183f50c40\": not found" Feb 13 15:37:42.082705 kubelet[3354]: I0213 15:37:42.082599 3354 scope.go:117] "RemoveContainer" containerID="38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd" Feb 13 15:37:42.084483 
containerd[1745]: time="2025-02-13T15:37:42.084447508Z" level=info msg="RemoveContainer for \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\"" Feb 13 15:37:42.100732 containerd[1745]: time="2025-02-13T15:37:42.100365276Z" level=info msg="RemoveContainer for \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\" returns successfully" Feb 13 15:37:42.101147 kubelet[3354]: I0213 15:37:42.101036 3354 scope.go:117] "RemoveContainer" containerID="f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0" Feb 13 15:37:42.105729 containerd[1745]: time="2025-02-13T15:37:42.105381198Z" level=info msg="RemoveContainer for \"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0\"" Feb 13 15:37:42.116033 containerd[1745]: time="2025-02-13T15:37:42.115961164Z" level=info msg="RemoveContainer for \"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0\" returns successfully" Feb 13 15:37:42.116418 kubelet[3354]: I0213 15:37:42.116213 3354 scope.go:117] "RemoveContainer" containerID="83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f" Feb 13 15:37:42.119201 containerd[1745]: time="2025-02-13T15:37:42.118791925Z" level=info msg="RemoveContainer for \"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f\"" Feb 13 15:37:42.128418 containerd[1745]: time="2025-02-13T15:37:42.128349010Z" level=info msg="RemoveContainer for \"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f\" returns successfully" Feb 13 15:37:42.128904 kubelet[3354]: I0213 15:37:42.128872 3354 scope.go:117] "RemoveContainer" containerID="3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e" Feb 13 15:37:42.130169 containerd[1745]: time="2025-02-13T15:37:42.130118171Z" level=info msg="RemoveContainer for \"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\"" Feb 13 15:37:42.143150 containerd[1745]: time="2025-02-13T15:37:42.143099457Z" level=info msg="RemoveContainer for \"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\" returns successfully" Feb 13 15:37:42.143810 kubelet[3354]: I0213 15:37:42.143414 3354 scope.go:117] "RemoveContainer" containerID="ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250" Feb 13 15:37:42.144710 containerd[1745]: time="2025-02-13T15:37:42.144675098Z" level=info msg="RemoveContainer for \"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250\"" Feb 13 15:37:42.154760 containerd[1745]: time="2025-02-13T15:37:42.154716743Z" level=info msg="RemoveContainer for \"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250\" returns successfully" Feb 13 15:37:42.155166 kubelet[3354]: I0213 15:37:42.155110 3354 scope.go:117] "RemoveContainer" containerID="38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd" Feb 13 15:37:42.155877 containerd[1745]: time="2025-02-13T15:37:42.155631263Z" level=error msg="ContainerStatus for \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\": not found" Feb 13 15:37:42.155952 kubelet[3354]: E0213 15:37:42.155774 3354 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\": not found" containerID="38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd" Feb 13 
15:37:42.155952 kubelet[3354]: I0213 15:37:42.155803 3354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd"} err="failed to get container status \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"38ad376ddaf531e9511df68897c6d28f38e7ca17fc27fbd99688d46f305876fd\": not found" Feb 13 15:37:42.155952 kubelet[3354]: I0213 15:37:42.155825 3354 scope.go:117] "RemoveContainer" containerID="f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0" Feb 13 15:37:42.156297 containerd[1745]: time="2025-02-13T15:37:42.156219104Z" level=error msg="ContainerStatus for \"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0\": not found" Feb 13 15:37:42.156626 kubelet[3354]: E0213 15:37:42.156448 3354 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0\": not found" containerID="f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0" Feb 13 15:37:42.156626 kubelet[3354]: I0213 15:37:42.156521 3354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0"} err="failed to get container status \"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f83d1d20134b61b2b69f918a1fb38b58e23435d89d735ee51d0a359e1a488ea0\": not found" Feb 13 15:37:42.156626 kubelet[3354]: I0213 15:37:42.156557 3354 scope.go:117] "RemoveContainer" containerID="83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f" Feb 13 15:37:42.156913 containerd[1745]: time="2025-02-13T15:37:42.156857184Z" level=error msg="ContainerStatus for \"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f\": not found" Feb 13 15:37:42.157164 kubelet[3354]: E0213 15:37:42.156998 3354 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f\": not found" containerID="83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f" Feb 13 15:37:42.157164 kubelet[3354]: I0213 15:37:42.157029 3354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f"} err="failed to get container status \"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f\": rpc error: code = NotFound desc = an error occurred when try to find container \"83f9879719563f2c03265c48dfa7e64aed21845e0cfdd5caafb138445be9093f\": not found" Feb 13 15:37:42.157164 kubelet[3354]: I0213 15:37:42.157045 3354 scope.go:117] "RemoveContainer" containerID="3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e" Feb 13 15:37:42.159144 containerd[1745]: time="2025-02-13T15:37:42.157711984Z" 
level=error msg="ContainerStatus for \"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\": not found" Feb 13 15:37:42.159242 kubelet[3354]: E0213 15:37:42.159001 3354 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\": not found" containerID="3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e" Feb 13 15:37:42.159242 kubelet[3354]: I0213 15:37:42.159021 3354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e"} err="failed to get container status \"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e864b7a2b791a45e72cb672652c11d5544b8982615f6000650f83af029b699e\": not found" Feb 13 15:37:42.159242 kubelet[3354]: I0213 15:37:42.159039 3354 scope.go:117] "RemoveContainer" containerID="ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250" Feb 13 15:37:42.160175 containerd[1745]: time="2025-02-13T15:37:42.160129506Z" level=error msg="ContainerStatus for \"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250\": not found" Feb 13 15:37:42.160401 kubelet[3354]: E0213 15:37:42.160336 3354 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250\": not found" containerID="ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250" Feb 13 15:37:42.160401 kubelet[3354]: I0213 15:37:42.160385 3354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250"} err="failed to get container status \"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250\": rpc error: code = NotFound desc = an error occurred when try to find container \"ede7ad4c05fcd340ce9ef7fb254c96b4b16bd501f2f4389f322a755955722250\": not found" Feb 13 15:37:42.183329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98707ce6649a158fd0ada73553058c418161237397aaa07e80b551d6e85903b4-rootfs.mount: Deactivated successfully. Feb 13 15:37:42.183435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d545328419c3019615e9378678c40b4532414173d05cc10cc827647cc314a702-rootfs.mount: Deactivated successfully. Feb 13 15:37:42.183487 systemd[1]: var-lib-kubelet-pods-df370ecf\x2db3e8\x2d4ee5\x2d9f47\x2dce946fe78ceb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dksgg6.mount: Deactivated successfully. Feb 13 15:37:42.183553 systemd[1]: var-lib-kubelet-pods-c1bdbe86\x2d7278\x2d4117\x2d9aaa\x2ded59ed5c356b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8vngq.mount: Deactivated successfully. Feb 13 15:37:42.183604 systemd[1]: var-lib-kubelet-pods-c1bdbe86\x2d7278\x2d4117\x2d9aaa\x2ded59ed5c356b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 13 15:37:42.183660 systemd[1]: var-lib-kubelet-pods-c1bdbe86\x2d7278\x2d4117\x2d9aaa\x2ded59ed5c356b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:37:43.172316 sshd[4985]: Connection closed by 10.200.16.10 port 60908
Feb 13 15:37:43.172908 sshd-session[4983]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:43.176738 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:60908.service: Deactivated successfully.
Feb 13 15:37:43.178656 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:37:43.179405 systemd[1]: session-25.scope: Consumed 1.590s CPU time, 23.5M memory peak.
Feb 13 15:37:43.181316 systemd-logind[1726]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:37:43.182895 systemd-logind[1726]: Removed session 25.
Feb 13 15:37:43.250709 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:58616.service - OpenSSH per-connection server daemon (10.200.16.10:58616).
Feb 13 15:37:43.579099 kubelet[3354]: I0213 15:37:43.579055 3354 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1bdbe86-7278-4117-9aaa-ed59ed5c356b" path="/var/lib/kubelet/pods/c1bdbe86-7278-4117-9aaa-ed59ed5c356b/volumes"
Feb 13 15:37:43.579651 kubelet[3354]: I0213 15:37:43.579626 3354 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df370ecf-b3e8-4ee5-9f47-ce946fe78ceb" path="/var/lib/kubelet/pods/df370ecf-b3e8-4ee5-9f47-ce946fe78ceb/volumes"
Feb 13 15:37:43.678593 sshd[5150]: Accepted publickey for core from 10.200.16.10 port 58616 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI
Feb 13 15:37:43.679938 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:43.685653 systemd-logind[1726]: New session 26 of user core.
Feb 13 15:37:43.691668 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:37:43.692574 kubelet[3354]: E0213 15:37:43.692347 3354 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:37:44.856597 kubelet[3354]: I0213 15:37:44.856483 3354 memory_manager.go:355] "RemoveStaleState removing state" podUID="c1bdbe86-7278-4117-9aaa-ed59ed5c356b" containerName="cilium-agent"
Feb 13 15:37:44.856597 kubelet[3354]: I0213 15:37:44.856525 3354 memory_manager.go:355] "RemoveStaleState removing state" podUID="df370ecf-b3e8-4ee5-9f47-ce946fe78ceb" containerName="cilium-operator"
Feb 13 15:37:44.866167 systemd[1]: Created slice kubepods-burstable-pod8441dbf7_2cf8_4f25_99d1_b0bf303da3ed.slice - libcontainer container kubepods-burstable-pod8441dbf7_2cf8_4f25_99d1_b0bf303da3ed.slice.
Feb 13 15:37:44.907464 sshd[5152]: Connection closed by 10.200.16.10 port 58616
Feb 13 15:37:44.908060 sshd-session[5150]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:44.916165 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:58616.service: Deactivated successfully.
Feb 13 15:37:44.917935 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:37:44.921595 systemd-logind[1726]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:37:44.924693 systemd-logind[1726]: Removed session 26.
Feb 13 15:37:44.947406 kubelet[3354]: I0213 15:37:44.947360 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-clustermesh-secrets\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947406 kubelet[3354]: I0213 15:37:44.947405 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-cilium-config-path\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947574 kubelet[3354]: I0213 15:37:44.947425 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmz4m\" (UniqueName: \"kubernetes.io/projected/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-kube-api-access-jmz4m\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947574 kubelet[3354]: I0213 15:37:44.947445 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-cilium-run\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947574 kubelet[3354]: I0213 15:37:44.947460 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-bpf-maps\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947574 kubelet[3354]: I0213 15:37:44.947477 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-cilium-cgroup\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947574 kubelet[3354]: I0213 15:37:44.947491 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-cni-path\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947574 kubelet[3354]: I0213 15:37:44.947509 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-lib-modules\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947824 kubelet[3354]: I0213 15:37:44.947525 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-xtables-lock\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947824 kubelet[3354]: I0213 15:37:44.947540 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-host-proc-sys-kernel\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947824 kubelet[3354]: I0213 15:37:44.947557 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-hostproc\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947824 kubelet[3354]: I0213 15:37:44.947574 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-cilium-ipsec-secrets\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947824 kubelet[3354]: I0213 15:37:44.947589 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-etc-cni-netd\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947824 kubelet[3354]: I0213 15:37:44.947603 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-host-proc-sys-net\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:44.947954 kubelet[3354]: I0213 15:37:44.947617 3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8441dbf7-2cf8-4f25-99d1-b0bf303da3ed-hubble-tls\") pod \"cilium-rj6x6\" (UID: \"8441dbf7-2cf8-4f25-99d1-b0bf303da3ed\") " pod="kube-system/cilium-rj6x6"
Feb 13 15:37:45.000053 systemd[1]: Started sshd@24-10.200.20.11:22-10.200.16.10:58622.service - OpenSSH per-connection server daemon (10.200.16.10:58622).
Feb 13 15:37:45.171367 containerd[1745]: time="2025-02-13T15:37:45.171143748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rj6x6,Uid:8441dbf7-2cf8-4f25-99d1-b0bf303da3ed,Namespace:kube-system,Attempt:0,}"
Feb 13 15:37:45.217206 containerd[1745]: time="2025-02-13T15:37:45.216959291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:37:45.217206 containerd[1745]: time="2025-02-13T15:37:45.217025411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:37:45.217206 containerd[1745]: time="2025-02-13T15:37:45.217041571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:45.217206 containerd[1745]: time="2025-02-13T15:37:45.217134531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:45.234510 systemd[1]: Started cri-containerd-dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca.scope - libcontainer container dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca.
Feb 13 15:37:45.258678 containerd[1745]: time="2025-02-13T15:37:45.258390792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rj6x6,Uid:8441dbf7-2cf8-4f25-99d1-b0bf303da3ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca\""
Feb 13 15:37:45.264119 containerd[1745]: time="2025-02-13T15:37:45.263815075Z" level=info msg="CreateContainer within sandbox \"dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:37:45.301815 containerd[1745]: time="2025-02-13T15:37:45.301624894Z" level=info msg="CreateContainer within sandbox \"dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b988131c834b3fde04873e3531fddd2c577f88cf7d1c8ca005f57dc9f1768172\""
Feb 13 15:37:45.304547 containerd[1745]: time="2025-02-13T15:37:45.303310775Z" level=info msg="StartContainer for \"b988131c834b3fde04873e3531fddd2c577f88cf7d1c8ca005f57dc9f1768172\""
Feb 13 15:37:45.334582 systemd[1]: Started cri-containerd-b988131c834b3fde04873e3531fddd2c577f88cf7d1c8ca005f57dc9f1768172.scope - libcontainer container b988131c834b3fde04873e3531fddd2c577f88cf7d1c8ca005f57dc9f1768172.
Feb 13 15:37:45.365464 containerd[1745]: time="2025-02-13T15:37:45.365401206Z" level=info msg="StartContainer for \"b988131c834b3fde04873e3531fddd2c577f88cf7d1c8ca005f57dc9f1768172\" returns successfully"
Feb 13 15:37:45.368012 systemd[1]: cri-containerd-b988131c834b3fde04873e3531fddd2c577f88cf7d1c8ca005f57dc9f1768172.scope: Deactivated successfully.
Feb 13 15:37:45.449223 containerd[1745]: time="2025-02-13T15:37:45.449049169Z" level=info msg="shim disconnected" id=b988131c834b3fde04873e3531fddd2c577f88cf7d1c8ca005f57dc9f1768172 namespace=k8s.io
Feb 13 15:37:45.449223 containerd[1745]: time="2025-02-13T15:37:45.449111809Z" level=warning msg="cleaning up after shim disconnected" id=b988131c834b3fde04873e3531fddd2c577f88cf7d1c8ca005f57dc9f1768172 namespace=k8s.io
Feb 13 15:37:45.449223 containerd[1745]: time="2025-02-13T15:37:45.449121089Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:45.454358 sshd[5163]: Accepted publickey for core from 10.200.16.10 port 58622 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI
Feb 13 15:37:45.455997 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:45.462554 systemd-logind[1726]: New session 27 of user core.
Feb 13 15:37:45.466639 containerd[1745]: time="2025-02-13T15:37:45.466568978Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:37:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:37:45.467983 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:37:45.778254 sshd[5273]: Connection closed by 10.200.16.10 port 58622
Feb 13 15:37:45.777254 sshd-session[5163]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:45.780821 systemd[1]: sshd@24-10.200.20.11:22-10.200.16.10:58622.service: Deactivated successfully.
Feb 13 15:37:45.783021 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:37:45.783827 systemd-logind[1726]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:37:45.785338 systemd-logind[1726]: Removed session 27.
Feb 13 15:37:45.863592 systemd[1]: Started sshd@25-10.200.20.11:22-10.200.16.10:58632.service - OpenSSH per-connection server daemon (10.200.16.10:58632).
Feb 13 15:37:46.087256 containerd[1745]: time="2025-02-13T15:37:46.087006733Z" level=info msg="CreateContainer within sandbox \"dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:37:46.125952 containerd[1745]: time="2025-02-13T15:37:46.125892592Z" level=info msg="CreateContainer within sandbox \"dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"80a9c41746d7f790c847484a7e09b66cc243591dfd1f6ac2e3dcc7411e904bee\""
Feb 13 15:37:46.127066 containerd[1745]: time="2025-02-13T15:37:46.126775113Z" level=info msg="StartContainer for \"80a9c41746d7f790c847484a7e09b66cc243591dfd1f6ac2e3dcc7411e904bee\""
Feb 13 15:37:46.162538 systemd[1]: Started cri-containerd-80a9c41746d7f790c847484a7e09b66cc243591dfd1f6ac2e3dcc7411e904bee.scope - libcontainer container 80a9c41746d7f790c847484a7e09b66cc243591dfd1f6ac2e3dcc7411e904bee.
Feb 13 15:37:46.194795 containerd[1745]: time="2025-02-13T15:37:46.193808707Z" level=info msg="StartContainer for \"80a9c41746d7f790c847484a7e09b66cc243591dfd1f6ac2e3dcc7411e904bee\" returns successfully"
Feb 13 15:37:46.199447 systemd[1]: cri-containerd-80a9c41746d7f790c847484a7e09b66cc243591dfd1f6ac2e3dcc7411e904bee.scope: Deactivated successfully.
Feb 13 15:37:46.239863 containerd[1745]: time="2025-02-13T15:37:46.239777050Z" level=info msg="shim disconnected" id=80a9c41746d7f790c847484a7e09b66cc243591dfd1f6ac2e3dcc7411e904bee namespace=k8s.io
Feb 13 15:37:46.239863 containerd[1745]: time="2025-02-13T15:37:46.239856410Z" level=warning msg="cleaning up after shim disconnected" id=80a9c41746d7f790c847484a7e09b66cc243591dfd1f6ac2e3dcc7411e904bee namespace=k8s.io
Feb 13 15:37:46.239863 containerd[1745]: time="2025-02-13T15:37:46.239865250Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:46.317003 sshd[5280]: Accepted publickey for core from 10.200.16.10 port 58632 ssh2: RSA SHA256:Wp8LEbVsp4FLPrgcSccp2SnTFPguFd3499v3ptqMYmI
Feb 13 15:37:46.318828 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:46.323791 systemd-logind[1726]: New session 28 of user core.
Feb 13 15:37:46.332526 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 15:37:47.052088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80a9c41746d7f790c847484a7e09b66cc243591dfd1f6ac2e3dcc7411e904bee-rootfs.mount: Deactivated successfully.
Feb 13 15:37:47.090334 containerd[1745]: time="2025-02-13T15:37:47.090261002Z" level=info msg="CreateContainer within sandbox \"dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:37:47.135846 containerd[1745]: time="2025-02-13T15:37:47.135710705Z" level=info msg="CreateContainer within sandbox \"dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2922054f0f7df25b058c4630baffa39b7d8eb20a6b6eb4fe1c3c880c9f6c1b3f\""
Feb 13 15:37:47.136811 containerd[1745]: time="2025-02-13T15:37:47.136401105Z" level=info msg="StartContainer for \"2922054f0f7df25b058c4630baffa39b7d8eb20a6b6eb4fe1c3c880c9f6c1b3f\""
Feb 13 15:37:47.174529 systemd[1]: Started cri-containerd-2922054f0f7df25b058c4630baffa39b7d8eb20a6b6eb4fe1c3c880c9f6c1b3f.scope - libcontainer container 2922054f0f7df25b058c4630baffa39b7d8eb20a6b6eb4fe1c3c880c9f6c1b3f.
Feb 13 15:37:47.205467 systemd[1]: cri-containerd-2922054f0f7df25b058c4630baffa39b7d8eb20a6b6eb4fe1c3c880c9f6c1b3f.scope: Deactivated successfully.
Feb 13 15:37:47.208548 containerd[1745]: time="2025-02-13T15:37:47.207630141Z" level=info msg="StartContainer for \"2922054f0f7df25b058c4630baffa39b7d8eb20a6b6eb4fe1c3c880c9f6c1b3f\" returns successfully"
Feb 13 15:37:47.253834 containerd[1745]: time="2025-02-13T15:37:47.253669365Z" level=info msg="shim disconnected" id=2922054f0f7df25b058c4630baffa39b7d8eb20a6b6eb4fe1c3c880c9f6c1b3f namespace=k8s.io
Feb 13 15:37:47.253834 containerd[1745]: time="2025-02-13T15:37:47.253734365Z" level=warning msg="cleaning up after shim disconnected" id=2922054f0f7df25b058c4630baffa39b7d8eb20a6b6eb4fe1c3c880c9f6c1b3f namespace=k8s.io
Feb 13 15:37:47.253834 containerd[1745]: time="2025-02-13T15:37:47.253743965Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:48.052259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2922054f0f7df25b058c4630baffa39b7d8eb20a6b6eb4fe1c3c880c9f6c1b3f-rootfs.mount: Deactivated successfully.
Feb 13 15:37:48.096915 containerd[1745]: time="2025-02-13T15:37:48.096869952Z" level=info msg="CreateContainer within sandbox \"dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:37:48.151408 containerd[1745]: time="2025-02-13T15:37:48.151356100Z" level=info msg="CreateContainer within sandbox \"dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e72d5f199ed0de42249c321e9c55ec9e1de6f06d9a94d204f8a10e38ddec259f\""
Feb 13 15:37:48.152667 containerd[1745]: time="2025-02-13T15:37:48.152044340Z" level=info msg="StartContainer for \"e72d5f199ed0de42249c321e9c55ec9e1de6f06d9a94d204f8a10e38ddec259f\""
Feb 13 15:37:48.186479 systemd[1]: Started cri-containerd-e72d5f199ed0de42249c321e9c55ec9e1de6f06d9a94d204f8a10e38ddec259f.scope - libcontainer container e72d5f199ed0de42249c321e9c55ec9e1de6f06d9a94d204f8a10e38ddec259f.
Feb 13 15:37:48.213866 systemd[1]: cri-containerd-e72d5f199ed0de42249c321e9c55ec9e1de6f06d9a94d204f8a10e38ddec259f.scope: Deactivated successfully.
Feb 13 15:37:48.220454 containerd[1745]: time="2025-02-13T15:37:48.220376535Z" level=info msg="StartContainer for \"e72d5f199ed0de42249c321e9c55ec9e1de6f06d9a94d204f8a10e38ddec259f\" returns successfully"
Feb 13 15:37:48.252663 containerd[1745]: time="2025-02-13T15:37:48.252553031Z" level=info msg="shim disconnected" id=e72d5f199ed0de42249c321e9c55ec9e1de6f06d9a94d204f8a10e38ddec259f namespace=k8s.io
Feb 13 15:37:48.252663 containerd[1745]: time="2025-02-13T15:37:48.252633472Z" level=warning msg="cleaning up after shim disconnected" id=e72d5f199ed0de42249c321e9c55ec9e1de6f06d9a94d204f8a10e38ddec259f namespace=k8s.io
Feb 13 15:37:48.252663 containerd[1745]: time="2025-02-13T15:37:48.252643392Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:48.472910 kubelet[3354]: I0213 15:37:48.472773 3354 setters.go:602] "Node became not ready" node="ci-4230.0.1-a-cf53dd3440" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:37:48Z","lastTransitionTime":"2025-02-13T15:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:37:48.693120 kubelet[3354]: E0213 15:37:48.693064 3354 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:37:49.052134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e72d5f199ed0de42249c321e9c55ec9e1de6f06d9a94d204f8a10e38ddec259f-rootfs.mount: Deactivated successfully.
Feb 13 15:37:49.103299 containerd[1745]: time="2025-02-13T15:37:49.102759863Z" level=info msg="CreateContainer within sandbox \"dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:37:49.151602 containerd[1745]: time="2025-02-13T15:37:49.151549448Z" level=info msg="CreateContainer within sandbox \"dcc5df58da13a8e180e3321bed78daeb83881efd8cb0b8bd7dcecbb1c4752eca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"36565c8427b6295d446ec4586fb745f2ffca7980b159e8ad3e6930d9dbd79cf5\""
Feb 13 15:37:49.153223 containerd[1745]: time="2025-02-13T15:37:49.152252168Z" level=info msg="StartContainer for \"36565c8427b6295d446ec4586fb745f2ffca7980b159e8ad3e6930d9dbd79cf5\""
Feb 13 15:37:49.185628 systemd[1]: Started cri-containerd-36565c8427b6295d446ec4586fb745f2ffca7980b159e8ad3e6930d9dbd79cf5.scope - libcontainer container 36565c8427b6295d446ec4586fb745f2ffca7980b159e8ad3e6930d9dbd79cf5.
Feb 13 15:37:49.218760 containerd[1745]: time="2025-02-13T15:37:49.218700802Z" level=info msg="StartContainer for \"36565c8427b6295d446ec4586fb745f2ffca7980b159e8ad3e6930d9dbd79cf5\" returns successfully"
Feb 13 15:37:49.808312 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:37:50.052461 systemd[1]: run-containerd-runc-k8s.io-36565c8427b6295d446ec4586fb745f2ffca7980b159e8ad3e6930d9dbd79cf5-runc.U1Viu2.mount: Deactivated successfully.
Feb 13 15:37:50.121326 kubelet[3354]: I0213 15:37:50.120799 3354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rj6x6" podStartSLOduration=6.12077994 podStartE2EDuration="6.12077994s" podCreationTimestamp="2025-02-13 15:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:37:50.120748899 +0000 UTC m=+226.674007836" watchObservedRunningTime="2025-02-13 15:37:50.12077994 +0000 UTC m=+226.674038877"
Feb 13 15:37:52.601899 systemd-networkd[1348]: lxc_health: Link UP
Feb 13 15:37:52.615055 systemd-networkd[1348]: lxc_health: Gained carrier
Feb 13 15:37:53.797503 systemd-networkd[1348]: lxc_health: Gained IPv6LL
Feb 13 15:37:59.466237 sshd[5341]: Connection closed by 10.200.16.10 port 58632
Feb 13 15:37:59.466940 sshd-session[5280]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:59.470162 systemd-logind[1726]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:37:59.471950 systemd[1]: sshd@25-10.200.20.11:22-10.200.16.10:58632.service: Deactivated successfully.
Feb 13 15:37:59.474153 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:37:59.475760 systemd-logind[1726]: Removed session 28.