Jan 17 00:05:39.189712 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 17 00:05:39.189733 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026 Jan 17 00:05:39.189742 kernel: KASLR enabled Jan 17 00:05:39.189748 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 17 00:05:39.189755 kernel: printk: bootconsole [pl11] enabled Jan 17 00:05:39.189761 kernel: efi: EFI v2.7 by EDK II Jan 17 00:05:39.189768 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 17 00:05:39.189774 kernel: random: crng init done Jan 17 00:05:39.189781 kernel: ACPI: Early table checksum verification disabled Jan 17 00:05:39.189787 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 17 00:05:39.189793 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189799 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189806 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 17 00:05:39.189812 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189820 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189826 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189832 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189840 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189847 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189853 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 17 00:05:39.189860 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189866 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 17 00:05:39.189872 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 17 00:05:39.189879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 17 00:05:39.189885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 17 00:05:39.189891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 17 00:05:39.189898 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 17 00:05:39.189904 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 17 00:05:39.189912 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 17 00:05:39.189918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 17 00:05:39.189925 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 17 00:05:39.189931 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 17 00:05:39.189937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 17 00:05:39.189944 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 17 00:05:39.189950 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 17 00:05:39.189956 kernel: Zone ranges: Jan 17 00:05:39.189963 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Jan 17 00:05:39.189969 kernel: DMA32 empty Jan 17 00:05:39.189975 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:05:39.189982 kernel: Movable zone start for each node Jan 17 00:05:39.189992 kernel: Early memory node ranges Jan 17 00:05:39.189999 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 17 00:05:39.190006 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 17 00:05:39.190012 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 17 00:05:39.190019 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 17 00:05:39.190027 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 17 00:05:39.190034 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 17 00:05:39.190041 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:05:39.190047 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 17 00:05:39.190054 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 17 00:05:39.190061 kernel: psci: probing for conduit method from ACPI. Jan 17 00:05:39.190068 kernel: psci: PSCIv1.1 detected in firmware. Jan 17 00:05:39.190074 kernel: psci: Using standard PSCI v0.2 function IDs Jan 17 00:05:39.190081 kernel: psci: MIGRATE_INFO_TYPE not supported. Jan 17 00:05:39.190088 kernel: psci: SMC Calling Convention v1.4 Jan 17 00:05:39.190095 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 17 00:05:39.190101 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 17 00:05:39.190110 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 17 00:05:39.190117 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 17 00:05:39.190123 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 17 00:05:39.190130 kernel: Detected PIPT I-cache on CPU0 Jan 17 00:05:39.190137 kernel: CPU features: detected: GIC system register CPU interface Jan 17 00:05:39.190144 kernel: CPU features: detected: Hardware dirty bit management Jan 17 00:05:39.190150 kernel: CPU features: detected: Spectre-BHB Jan 17 00:05:39.190157 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 17 00:05:39.190164 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 17 00:05:39.190171 kernel: CPU features: detected: ARM erratum 1418040 Jan 17 00:05:39.190177 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 17 00:05:39.190186 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 17 00:05:39.190192 kernel: alternatives: applying boot alternatives Jan 17 00:05:39.190200 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:05:39.190208 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:05:39.190214 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:05:39.190221 kernel: Fallback order for Node 0: 0 Jan 17 00:05:39.191300 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Jan 17 00:05:39.191312 kernel: Policy zone: Normal Jan 17 00:05:39.191319 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:05:39.191326 kernel: software IO TLB: area num 2. Jan 17 00:05:39.191333 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 17 00:05:39.191345 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 17 00:05:39.191352 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:05:39.191358 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:05:39.191366 kernel: rcu: RCU event tracing is enabled. Jan 17 00:05:39.191373 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:05:39.191380 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:05:39.191387 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:05:39.191394 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:05:39.191401 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:05:39.191407 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 00:05:39.191414 kernel: GICv3: 960 SPIs implemented Jan 17 00:05:39.191422 kernel: GICv3: 0 Extended SPIs implemented Jan 17 00:05:39.191429 kernel: Root IRQ handler: gic_handle_irq Jan 17 00:05:39.191436 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 17 00:05:39.191443 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 17 00:05:39.191450 kernel: ITS: No ITS available, not enabling LPIs Jan 17 00:05:39.191457 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:05:39.191464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:05:39.191470 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 00:05:39.191477 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 00:05:39.191484 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 00:05:39.191492 kernel: Console: colour dummy device 80x25 Jan 17 00:05:39.191500 kernel: printk: console [tty1] enabled Jan 17 00:05:39.191507 kernel: ACPI: Core revision 20230628 Jan 17 00:05:39.191515 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 00:05:39.191522 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:05:39.191529 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:05:39.191536 kernel: landlock: Up and running. Jan 17 00:05:39.191542 kernel: SELinux: Initializing. Jan 17 00:05:39.191549 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:05:39.191556 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:05:39.191565 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:05:39.191572 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:05:39.191580 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 17 00:05:39.191587 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 17 00:05:39.191594 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 00:05:39.191601 kernel: rcu: Hierarchical SRCU implementation. 
Jan 17 00:05:39.191608 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:05:39.191615 kernel: Remapping and enabling EFI services. Jan 17 00:05:39.191628 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:05:39.191636 kernel: Detected PIPT I-cache on CPU1 Jan 17 00:05:39.191643 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 17 00:05:39.191651 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:05:39.191659 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 00:05:39.191666 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:05:39.191674 kernel: SMP: Total of 2 processors activated. Jan 17 00:05:39.191682 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 00:05:39.191689 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 17 00:05:39.191698 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 00:05:39.191705 kernel: CPU features: detected: CRC32 instructions Jan 17 00:05:39.191713 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 00:05:39.191720 kernel: CPU features: detected: LSE atomic instructions Jan 17 00:05:39.191727 kernel: CPU features: detected: Privileged Access Never Jan 17 00:05:39.191734 kernel: CPU: All CPU(s) started at EL1 Jan 17 00:05:39.191742 kernel: alternatives: applying system-wide alternatives Jan 17 00:05:39.191749 kernel: devtmpfs: initialized Jan 17 00:05:39.191756 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:05:39.191765 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:05:39.191773 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:05:39.191780 kernel: SMBIOS 3.1.0 present. Jan 17 00:05:39.191787 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 17 00:05:39.191795 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:05:39.191802 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 00:05:39.191810 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 00:05:39.191817 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 00:05:39.191825 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:05:39.191834 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 17 00:05:39.191841 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:05:39.191849 kernel: cpuidle: using governor menu Jan 17 00:05:39.191856 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 17 00:05:39.191863 kernel: ASID allocator initialised with 32768 entries Jan 17 00:05:39.191871 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:05:39.191878 kernel: Serial: AMBA PL011 UART driver Jan 17 00:05:39.191886 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 00:05:39.191893 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 00:05:39.191902 kernel: Modules: 509008 pages in range for PLT usage Jan 17 00:05:39.191909 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:05:39.191916 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:05:39.191924 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 00:05:39.191931 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 00:05:39.191939 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:05:39.191946 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:05:39.191954 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 00:05:39.191961 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 00:05:39.191970 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:05:39.191977 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:05:39.191985 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:05:39.191992 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:05:39.191999 kernel: ACPI: Interpreter enabled Jan 17 00:05:39.192006 kernel: ACPI: Using GIC for interrupt routing Jan 17 00:05:39.192014 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 17 00:05:39.192021 kernel: printk: console [ttyAMA0] enabled Jan 17 00:05:39.192029 kernel: printk: bootconsole [pl11] disabled Jan 17 00:05:39.192038 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 17 00:05:39.192045 kernel: iommu: Default domain type: Translated Jan 17 00:05:39.192053 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 00:05:39.192060 kernel: efivars: Registered efivars operations Jan 17 00:05:39.192067 kernel: vgaarb: loaded Jan 17 00:05:39.192075 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 00:05:39.192082 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:05:39.192090 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:05:39.192097 kernel: pnp: PnP ACPI init Jan 17 00:05:39.192106 kernel: pnp: PnP ACPI: found 0 devices Jan 17 00:05:39.192114 kernel: NET: Registered PF_INET protocol family Jan 17 00:05:39.192121 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:05:39.192129 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:05:39.192136 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:05:39.192144 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:05:39.192151 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:05:39.192159 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:05:39.192166 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:05:39.192174 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:05:39.192182 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 
00:05:39.192189 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:05:39.192196 kernel: kvm [1]: HYP mode not available Jan 17 00:05:39.192204 kernel: Initialise system trusted keyrings Jan 17 00:05:39.192211 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:05:39.192219 kernel: Key type asymmetric registered Jan 17 00:05:39.192232 kernel: Asymmetric key parser 'x509' registered Jan 17 00:05:39.192241 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 00:05:39.192250 kernel: io scheduler mq-deadline registered Jan 17 00:05:39.192257 kernel: io scheduler kyber registered Jan 17 00:05:39.192265 kernel: io scheduler bfq registered Jan 17 00:05:39.192272 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:05:39.192279 kernel: thunder_xcv, ver 1.0 Jan 17 00:05:39.192287 kernel: thunder_bgx, ver 1.0 Jan 17 00:05:39.192294 kernel: nicpf, ver 1.0 Jan 17 00:05:39.192302 kernel: nicvf, ver 1.0 Jan 17 00:05:39.192428 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 00:05:39.192501 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:05:38 UTC (1768608338) Jan 17 00:05:39.192512 kernel: efifb: probing for efifb Jan 17 00:05:39.192519 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 00:05:39.192527 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 00:05:39.192534 kernel: efifb: scrolling: redraw Jan 17 00:05:39.192541 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 00:05:39.192549 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:05:39.192556 kernel: fb0: EFI VGA frame buffer device Jan 17 00:05:39.192565 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 17 00:05:39.192572 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:05:39.192580 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 17 00:05:39.192587 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 00:05:39.192594 kernel: watchdog: Hard watchdog permanently disabled Jan 17 00:05:39.192602 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:05:39.192609 kernel: Segment Routing with IPv6 Jan 17 00:05:39.192616 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:05:39.192624 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:05:39.192632 kernel: Key type dns_resolver registered Jan 17 00:05:39.192640 kernel: registered taskstats version 1 Jan 17 00:05:39.192647 kernel: Loading compiled-in X.509 certificates Jan 17 00:05:39.192654 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4' Jan 17 00:05:39.192661 kernel: Key type .fscrypt registered Jan 17 00:05:39.192669 kernel: Key type fscrypt-provisioning registered Jan 17 00:05:39.192676 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:05:39.192683 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:05:39.192691 kernel: ima: No architecture policies found Jan 17 00:05:39.192699 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 00:05:39.192707 kernel: clk: Disabling unused clocks Jan 17 00:05:39.192714 kernel: Freeing unused kernel memory: 39424K Jan 17 00:05:39.192721 kernel: Run /init as init process Jan 17 00:05:39.192729 kernel: with arguments: Jan 17 00:05:39.192736 kernel: /init Jan 17 00:05:39.192743 kernel: with environment: Jan 17 00:05:39.192750 kernel: HOME=/ Jan 17 00:05:39.192758 kernel: TERM=linux Jan 17 00:05:39.192767 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:05:39.192778 systemd[1]: Detected virtualization microsoft. Jan 17 00:05:39.192786 systemd[1]: Detected architecture arm64. Jan 17 00:05:39.192794 systemd[1]: Running in initrd. Jan 17 00:05:39.192801 systemd[1]: No hostname configured, using default hostname. Jan 17 00:05:39.192809 systemd[1]: Hostname set to . Jan 17 00:05:39.192817 systemd[1]: Initializing machine ID from random generator. Jan 17 00:05:39.192827 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:05:39.192835 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:05:39.192843 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:05:39.192851 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:05:39.192859 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:05:39.192867 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:05:39.192876 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:05:39.192885 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:05:39.192895 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:05:39.192903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:05:39.192912 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:05:39.192919 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:05:39.192927 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:05:39.192935 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:05:39.192943 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:05:39.192951 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:05:39.192960 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:05:39.192968 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:05:39.192976 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:05:39.192984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 00:05:39.192993 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:05:39.193001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:05:39.193008 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:05:39.193016 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:05:39.193026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:05:39.193034 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:05:39.193042 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:05:39.193050 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:05:39.193074 systemd-journald[218]: Collecting audit messages is disabled. Jan 17 00:05:39.193096 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:05:39.193104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:05:39.193113 systemd-journald[218]: Journal started Jan 17 00:05:39.193131 systemd-journald[218]: Runtime Journal (/run/log/journal/4c17d89951fd4ac5b4837b8b0a6352b4) is 8.0M, max 78.5M, 70.5M free. Jan 17 00:05:39.199397 systemd-modules-load[219]: Inserted module 'overlay' Jan 17 00:05:39.221800 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:05:39.222566 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:05:39.243516 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:05:39.243538 kernel: Bridge firewalling registered Jan 17 00:05:39.239528 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:05:39.242794 systemd-modules-load[219]: Inserted module 'br_netfilter' Jan 17 00:05:39.249074 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:05:39.257491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:05:39.266055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:39.285507 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:05:39.297314 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:05:39.304406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:05:39.323459 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:05:39.329082 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:05:39.333817 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:05:39.349248 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:05:39.361006 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:05:39.379589 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:05:39.385378 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:05:39.400777 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 17 00:05:39.420281 dracut-cmdline[249]: dracut-dracut-053 Jan 17 00:05:39.429683 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:05:39.437042 systemd-resolved[251]: Positive Trust Anchors: Jan 17 00:05:39.437051 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:05:39.437082 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:05:39.439182 systemd-resolved[251]: Defaulting to hostname 'linux'. Jan 17 00:05:39.450400 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:05:39.459843 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:05:39.469755 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:05:39.577285 kernel: SCSI subsystem initialized Jan 17 00:05:39.584239 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:05:39.594235 kernel: iscsi: registered transport (tcp) Jan 17 00:05:39.612034 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:05:39.612089 kernel: QLogic iSCSI HBA Driver Jan 17 00:05:39.645481 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:05:39.660741 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:05:39.689929 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:05:39.689988 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:05:39.695210 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:05:39.743248 kernel: raid6: neonx8 gen() 15801 MB/s Jan 17 00:05:39.762240 kernel: raid6: neonx4 gen() 15691 MB/s Jan 17 00:05:39.781232 kernel: raid6: neonx2 gen() 13258 MB/s Jan 17 00:05:39.802237 kernel: raid6: neonx1 gen() 10549 MB/s Jan 17 00:05:39.821231 kernel: raid6: int64x8 gen() 6979 MB/s Jan 17 00:05:39.840235 kernel: raid6: int64x4 gen() 7374 MB/s Jan 17 00:05:39.860236 kernel: raid6: int64x2 gen() 6146 MB/s Jan 17 00:05:39.882063 kernel: raid6: int64x1 gen() 5071 MB/s Jan 17 00:05:39.882073 kernel: raid6: using algorithm neonx8 gen() 15801 MB/s Jan 17 00:05:39.905054 kernel: raid6: .... 
xor() 11969 MB/s, rmw enabled Jan 17 00:05:39.905098 kernel: raid6: using neon recovery algorithm Jan 17 00:05:39.913234 kernel: xor: measuring software checksum speed Jan 17 00:05:39.913247 kernel: 8regs : 19035 MB/sec Jan 17 00:05:39.918659 kernel: 32regs : 19487 MB/sec Jan 17 00:05:39.921562 kernel: arm64_neon : 27043 MB/sec Jan 17 00:05:39.925273 kernel: xor: using function: arm64_neon (27043 MB/sec) Jan 17 00:05:39.974235 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:05:39.983993 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:05:39.996394 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:05:40.015233 systemd-udevd[437]: Using default interface naming scheme 'v255'. Jan 17 00:05:40.020346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:05:40.044463 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:05:40.060386 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation Jan 17 00:05:40.087389 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:05:40.108479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:05:40.148825 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:05:40.168412 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:05:40.192031 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:05:40.202478 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:05:40.214390 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:05:40.225456 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:05:40.243241 kernel: hv_vmbus: Vmbus version:5.3 Jan 17 00:05:40.245430 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:05:40.267516 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 00:05:40.267541 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 17 00:05:40.282836 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 00:05:40.282889 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 00:05:40.293704 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 00:05:40.293728 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 00:05:40.277506 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:05:40.299060 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:05:40.299241 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:05:40.323933 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:05:40.352496 kernel: PTP clock support registered Jan 17 00:05:40.352521 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 17 00:05:40.352532 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 00:05:40.352542 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 00:05:40.343953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 17 00:05:40.375151 kernel: scsi host0: storvsc_host_t Jan 17 00:05:40.375360 kernel: scsi host1: storvsc_host_t Jan 17 00:05:40.375451 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 00:05:40.345139 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:40.373397 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:05:40.409802 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 17 00:05:40.409971 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 00:05:40.409982 kernel: hv_vmbus: registering driver hv_utils Jan 17 00:05:40.415871 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 00:05:40.415917 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 00:05:40.418609 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 00:05:40.419089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:05:40.449675 systemd-resolved[251]: Clock change detected. Flushing caches. Jan 17 00:05:40.477767 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 00:05:40.477927 kernel: hv_netvsc 000d3a06-ba46-000d-3a06-ba46000d3a06 eth0: VF slot 1 added Jan 17 00:05:40.477846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:40.498529 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 00:05:40.498709 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 00:05:40.498803 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:05:40.500148 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 00:05:40.501358 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:05:40.512167 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 00:05:40.512314 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 00:05:40.522139 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 00:05:40.527345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:05:40.527385 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 00:05:40.533148 kernel: hv_vmbus: registering driver hv_pci Jan 17 00:05:40.533191 kernel: hv_pci 5a9dc8c6-d07a-4907-bee0-9910873d7f06: PCI VMBus probing: Using version 0x10004 Jan 17 00:05:40.552897 kernel: hv_pci 5a9dc8c6-d07a-4907-bee0-9910873d7f06: PCI host bridge to bus d07a:00 Jan 17 00:05:40.553128 kernel: pci_bus d07a:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 17 00:05:40.557998 kernel: pci_bus d07a:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 00:05:40.565188 kernel: pci d07a:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 17 00:05:40.574413 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 00:05:40.596488 kernel: pci d07a:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:05:40.596534 kernel: pci d07a:00:02.0: enabling Extended Tags Jan 17 00:05:40.596549 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:05:40.596712 kernel: pci d07a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d07a:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 17 00:05:40.607165 kernel: pci_bus d07a:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 00:05:40.616590 kernel: pci d07a:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:05:40.638153 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:05:40.667539 kernel: mlx5_core d07a:00:02.0: enabling device (0000 -> 0002) Jan 17 00:05:40.674148 kernel: mlx5_core d07a:00:02.0: firmware version: 16.30.5026 Jan 17 00:05:40.873954 kernel: hv_netvsc 000d3a06-ba46-000d-3a06-ba46000d3a06 eth0: VF registering: eth1 Jan 17 00:05:40.874174 kernel: mlx5_core d07a:00:02.0 eth1: joined to eth0 Jan 17 00:05:40.879599 kernel: mlx5_core d07a:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 17 00:05:40.890154 kernel: mlx5_core d07a:00:02.0 enP53370s1: renamed from eth1 Jan 17 00:05:41.164555 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 00:05:41.178575 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 00:05:41.206624 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (498) Jan 17 00:05:41.219276 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 17 00:05:41.225030 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 00:05:41.250153 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (493) Jan 17 00:05:41.252380 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:05:41.276245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:05:41.288234 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:05:42.279885 disk-uuid[607]: The operation has completed successfully. Jan 17 00:05:42.284809 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:05:42.342015 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:05:42.342108 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:05:42.383228 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:05:42.394054 sh[720]: Success Jan 17 00:05:42.427182 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 00:05:42.690169 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:05:42.708231 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:05:42.715803 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 17 00:05:42.746860 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 17 00:05:42.746925 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:05:42.752501 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:05:42.756903 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:05:42.761542 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:05:43.104949 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:05:43.109096 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:05:43.130378 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:05:43.137293 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:05:43.170613 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:05:43.170659 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:05:43.173854 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:05:43.224178 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:05:43.239039 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:05:43.243080 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:05:43.244684 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:05:43.260315 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:05:43.267090 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:05:43.284273 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:05:43.304728 systemd-networkd[902]: lo: Link UP Jan 17 00:05:43.304736 systemd-networkd[902]: lo: Gained carrier Jan 17 00:05:43.306256 systemd-networkd[902]: Enumeration completed Jan 17 00:05:43.306423 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:05:43.314336 systemd[1]: Reached target network.target - Network. Jan 17 00:05:43.317255 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:05:43.317258 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:05:43.392246 kernel: mlx5_core d07a:00:02.0 enP53370s1: Link up Jan 17 00:05:43.431368 kernel: hv_netvsc 000d3a06-ba46-000d-3a06-ba46000d3a06 eth0: Data path switched to VF: enP53370s1 Jan 17 00:05:43.431038 systemd-networkd[902]: enP53370s1: Link UP Jan 17 00:05:43.431138 systemd-networkd[902]: eth0: Link UP Jan 17 00:05:43.431247 systemd-networkd[902]: eth0: Gained carrier Jan 17 00:05:43.431256 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 00:05:43.450441 systemd-networkd[902]: enP53370s1: Gained carrier Jan 17 00:05:43.465155 systemd-networkd[902]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:05:44.389653 ignition[904]: Ignition 2.19.0 Jan 17 00:05:44.389665 ignition[904]: Stage: fetch-offline Jan 17 00:05:44.393107 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:05:44.389698 ignition[904]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:44.389706 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:44.389811 ignition[904]: parsed url from cmdline: "" Jan 17 00:05:44.389817 ignition[904]: no config URL provided Jan 17 00:05:44.389822 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:05:44.416464 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 00:05:44.389828 ignition[904]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:05:44.389833 ignition[904]: failed to fetch config: resource requires networking Jan 17 00:05:44.390033 ignition[904]: Ignition finished successfully Jan 17 00:05:44.434272 ignition[917]: Ignition 2.19.0 Jan 17 00:05:44.434280 ignition[917]: Stage: fetch Jan 17 00:05:44.434498 ignition[917]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:44.434512 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:44.434615 ignition[917]: parsed url from cmdline: "" Jan 17 00:05:44.434618 ignition[917]: no config URL provided Jan 17 00:05:44.434622 ignition[917]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:05:44.434630 ignition[917]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:05:44.434654 ignition[917]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 00:05:44.562270 ignition[917]: GET result: OK Jan 17 00:05:44.562335 ignition[917]: config has been read from IMDS userdata Jan 17 00:05:44.562374 ignition[917]: parsing config with SHA512: 3a7d8d8bf2cbe34bac779aa2912689b3afcd538c14a6defc198ae6fba91359cddb5ceffae4ec5c43ef1ed9418c9ba37642497ca45abdca33ac09d4bb2abde534 Jan 17 00:05:44.566705 unknown[917]: fetched base config from "system" Jan 17 00:05:44.567186 ignition[917]: fetch: fetch complete Jan 17 00:05:44.566711 unknown[917]: fetched base config from "system" Jan 17 00:05:44.567190 ignition[917]: fetch: fetch passed Jan 17 00:05:44.566721 unknown[917]: fetched user config from "azure" Jan 17 00:05:44.567237 ignition[917]: Ignition finished successfully Jan 17 00:05:44.570281 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:05:44.592286 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:05:44.609426 ignition[924]: Ignition 2.19.0 Jan 17 00:05:44.609440 ignition[924]: Stage: kargs Jan 17 00:05:44.609614 ignition[924]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:44.615795 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:05:44.609625 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:44.610618 ignition[924]: kargs: kargs passed Jan 17 00:05:44.610660 ignition[924]: Ignition finished successfully Jan 17 00:05:44.635250 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 17 00:05:44.648221 systemd-networkd[902]: eth0: Gained IPv6LL Jan 17 00:05:44.653817 ignition[930]: Ignition 2.19.0 Jan 17 00:05:44.653827 ignition[930]: Stage: disks Jan 17 00:05:44.657712 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:05:44.653992 ignition[930]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:44.664245 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:05:44.654002 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:44.673541 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:05:44.654909 ignition[930]: disks: disks passed Jan 17 00:05:44.683098 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:05:44.654949 ignition[930]: Ignition finished successfully Jan 17 00:05:44.692524 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:05:44.702245 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:05:44.723351 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:05:44.805019 systemd-fsck[938]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 17 00:05:44.813627 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:05:44.828310 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:05:44.882172 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none. Jan 17 00:05:44.883468 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:05:44.887186 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:05:44.937277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:05:44.957144 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949) Jan 17 00:05:44.958100 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:05:44.979919 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:05:44.979943 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:05:44.979953 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:05:44.971290 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 00:05:44.985391 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:05:44.985428 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:05:45.012092 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:05:45.024047 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:05:45.027257 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:05:45.041334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:05:45.608026 coreos-metadata[951]: Jan 17 00:05:45.607 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:05:45.617034 coreos-metadata[951]: Jan 17 00:05:45.617 INFO Fetch successful Jan 17 00:05:45.617034 coreos-metadata[951]: Jan 17 00:05:45.617 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:05:45.630581 coreos-metadata[951]: Jan 17 00:05:45.628 INFO Fetch successful Jan 17 00:05:45.645200 coreos-metadata[951]: Jan 17 00:05:45.645 INFO wrote hostname ci-4081.3.6-n-93f9562822 to /sysroot/etc/hostname Jan 17 00:05:45.653068 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:05:45.885169 initrd-setup-root[978]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:05:45.923807 initrd-setup-root[985]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:05:45.945250 initrd-setup-root[992]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:05:45.953360 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:05:47.109605 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:05:47.120292 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:05:47.132257 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:05:47.146388 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:05:47.143001 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:05:47.170991 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:05:47.180716 ignition[1066]: INFO : Ignition 2.19.0 Jan 17 00:05:47.180716 ignition[1066]: INFO : Stage: mount Jan 17 00:05:47.180716 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:47.180716 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:47.180716 ignition[1066]: INFO : mount: mount passed Jan 17 00:05:47.180716 ignition[1066]: INFO : Ignition finished successfully Jan 17 00:05:47.183796 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:05:47.211206 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:05:47.223810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:05:47.258135 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1078) Jan 17 00:05:47.264131 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:05:47.264163 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:05:47.272468 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:05:47.279130 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:05:47.280774 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:05:47.308017 ignition[1095]: INFO : Ignition 2.19.0 Jan 17 00:05:47.308017 ignition[1095]: INFO : Stage: files Jan 17 00:05:47.314411 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:47.314411 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:47.314411 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:05:47.314411 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:05:47.314411 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:05:47.363160 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:05:47.369197 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:05:47.369197 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:05:47.363518 unknown[1095]: wrote ssh authorized keys file for user: core Jan 17 00:05:47.388851 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:05:47.396731 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:05:47.396731 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 17 00:05:47.396731 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 17 00:05:47.440444 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:05:47.558474 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 17 00:05:47.558474 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:05:47.558474 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 17 00:05:47.774746 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 
00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 17 00:05:48.328031 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 17 00:05:48.554926 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:05:48.554926 ignition[1095]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 17 00:05:48.580994 ignition[1095]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: files passed Jan 17 00:05:48.591974 
ignition[1095]: INFO : Ignition finished successfully Jan 17 00:05:48.596151 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:05:48.626376 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:05:48.641273 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:05:48.721738 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:48.721738 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:48.652312 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:05:48.746160 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:48.652409 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:05:48.700663 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:05:48.706382 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:05:48.729415 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:05:48.786295 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:05:48.786409 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:05:48.796093 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:05:48.805498 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:05:48.814021 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:05:48.816264 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:05:48.847395 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:05:48.867340 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:05:48.881222 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:05:48.886482 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:05:48.896064 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:05:48.904763 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:05:48.904881 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:05:48.917379 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:05:48.921778 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:05:48.930850 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:05:48.939545 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:05:48.948213 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:05:48.957384 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:05:48.966628 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:05:48.976462 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:05:48.985201 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:05:48.994673 systemd[1]: Stopped target swap.target - Swaps. 
Jan 17 00:05:49.002209 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:05:49.002330 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:05:49.013992 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:05:49.018787 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:05:49.028136 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:05:49.028207 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:05:49.038375 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:05:49.038494 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:05:49.052755 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:05:49.052907 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:05:49.063268 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:05:49.063361 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:05:49.073343 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:05:49.073435 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:05:49.102387 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:05:49.118316 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:05:49.127212 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:05:49.157827 ignition[1147]: INFO : Ignition 2.19.0 Jan 17 00:05:49.157827 ignition[1147]: INFO : Stage: umount Jan 17 00:05:49.157827 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:49.157827 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:49.157827 ignition[1147]: INFO : umount: umount passed Jan 17 00:05:49.157827 ignition[1147]: INFO : Ignition finished successfully Jan 17 00:05:49.127354 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:05:49.138191 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:05:49.138294 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:05:49.153827 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:05:49.153928 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:05:49.165972 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:05:49.166286 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:05:49.173362 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:05:49.173414 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:05:49.189578 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:05:49.189630 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:05:49.197537 systemd[1]: Stopped target network.target - Network. Jan 17 00:05:49.205286 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:05:49.205342 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:05:49.216606 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:05:49.224975 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 17 00:05:49.230151 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:05:49.239142 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:05:49.247022 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:05:49.254740 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:05:49.254793 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:05:49.263597 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:05:49.263642 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:05:49.272031 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:05:49.272081 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:05:49.281060 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:05:49.281101 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:05:49.289694 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:05:49.302661 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:05:49.312489 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:05:49.313099 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:05:49.313192 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:05:49.316224 systemd-networkd[902]: eth0: DHCPv6 lease lost Jan 17 00:05:49.323329 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:05:49.323420 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:05:49.333279 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:05:49.496694 kernel: hv_netvsc 000d3a06-ba46-000d-3a06-ba46000d3a06 eth0: Data path switched from VF: enP53370s1 Jan 17 00:05:49.333385 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:05:49.345049 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:05:49.345115 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:05:49.354286 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:05:49.354356 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:05:49.379344 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:05:49.386819 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:05:49.386891 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:05:49.396087 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:05:49.408603 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:05:49.408710 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:05:49.438017 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:05:49.438184 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:05:49.446026 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:05:49.446086 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:05:49.455922 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:05:49.455968 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 17 00:05:49.464797 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:05:49.464995 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:05:49.474430 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:05:49.474496 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:05:49.483187 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:05:49.483228 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:05:49.500260 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:05:49.500346 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:05:49.509250 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:05:49.509299 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:05:49.522494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:05:49.522546 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:05:49.565325 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:05:49.578194 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:05:49.578262 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:05:49.586725 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 00:05:49.586765 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:05:49.596643 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:05:49.596689 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:05:49.606110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:05:49.606160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:49.616533 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:05:49.616651 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:05:49.624439 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:05:49.624529 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:05:49.635310 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:05:49.653551 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:05:49.743353 systemd[1]: Switching root. 
Jan 17 00:05:49.784848 systemd-journald[218]: Journal stopped Jan 17 00:05:39.189712 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 17 00:05:39.189733 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026 Jan 17 00:05:39.189742 kernel: KASLR enabled Jan 17 00:05:39.189748 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 17 00:05:39.189755 kernel: printk: bootconsole [pl11] enabled Jan 17 00:05:39.189761 kernel: efi: EFI v2.7 by EDK II Jan 17 00:05:39.189768 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 17 00:05:39.189774 kernel: random: crng init done Jan 17 00:05:39.189781 kernel: ACPI: Early table checksum verification disabled Jan 17 00:05:39.189787 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 17 00:05:39.189793 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189799 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189806 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 17 00:05:39.189812 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189820 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189826 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189832 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189840 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189847 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189853 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 17 00:05:39.189860 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:05:39.189866 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 17 00:05:39.189872 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 17 00:05:39.189879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 17 00:05:39.189885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 17 00:05:39.189891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 17 00:05:39.189898 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 17 00:05:39.189904 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 17 00:05:39.189912 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 17 00:05:39.189918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 17 00:05:39.189925 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 17 00:05:39.189931 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 17 00:05:39.189937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 17 00:05:39.189944 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 17 00:05:39.189950 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 17 00:05:39.189956 kernel: Zone ranges: Jan 17 
00:05:39.189963 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 17 00:05:39.189969 kernel: DMA32 empty Jan 17 00:05:39.189975 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:05:39.189982 kernel: Movable zone start for each node Jan 17 00:05:39.189992 kernel: Early memory node ranges Jan 17 00:05:39.189999 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 17 00:05:39.190006 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 17 00:05:39.190012 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 17 00:05:39.190019 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 17 00:05:39.190027 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 17 00:05:39.190034 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 17 00:05:39.190041 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:05:39.190047 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 17 00:05:39.190054 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 17 00:05:39.190061 kernel: psci: probing for conduit method from ACPI. Jan 17 00:05:39.190068 kernel: psci: PSCIv1.1 detected in firmware. Jan 17 00:05:39.190074 kernel: psci: Using standard PSCI v0.2 function IDs Jan 17 00:05:39.190081 kernel: psci: MIGRATE_INFO_TYPE not supported. Jan 17 00:05:39.190088 kernel: psci: SMC Calling Convention v1.4 Jan 17 00:05:39.190095 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 17 00:05:39.190101 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 17 00:05:39.190110 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 17 00:05:39.190117 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 17 00:05:39.190123 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 17 00:05:39.190130 kernel: Detected PIPT I-cache on CPU0 Jan 17 00:05:39.190137 kernel: CPU features: detected: GIC system register CPU interface Jan 17 00:05:39.190144 kernel: CPU features: detected: Hardware dirty bit management Jan 17 00:05:39.190150 kernel: CPU features: detected: Spectre-BHB Jan 17 00:05:39.190157 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 17 00:05:39.190164 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 17 00:05:39.190171 kernel: CPU features: detected: ARM erratum 1418040 Jan 17 00:05:39.190177 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 17 00:05:39.190186 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 17 00:05:39.190192 kernel: alternatives: applying boot alternatives Jan 17 00:05:39.190200 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:05:39.190208 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:05:39.190214 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:05:39.190221 kernel: Fallback order for Node 0: 0 Jan 17 00:05:39.191300 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Jan 17 00:05:39.191312 kernel: Policy zone: Normal Jan 17 00:05:39.191319 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:05:39.191326 kernel: software IO TLB: area num 2. Jan 17 00:05:39.191333 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 17 00:05:39.191345 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 17 00:05:39.191352 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:05:39.191358 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:05:39.191366 kernel: rcu: RCU event tracing is enabled. Jan 17 00:05:39.191373 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:05:39.191380 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:05:39.191387 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:05:39.191394 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:05:39.191401 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:05:39.191407 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 00:05:39.191414 kernel: GICv3: 960 SPIs implemented Jan 17 00:05:39.191422 kernel: GICv3: 0 Extended SPIs implemented Jan 17 00:05:39.191429 kernel: Root IRQ handler: gic_handle_irq Jan 17 00:05:39.191436 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 17 00:05:39.191443 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 17 00:05:39.191450 kernel: ITS: No ITS available, not enabling LPIs Jan 17 00:05:39.191457 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:05:39.191464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:05:39.191470 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 00:05:39.191477 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 00:05:39.191484 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 00:05:39.191492 kernel: Console: colour dummy device 80x25 Jan 17 00:05:39.191500 kernel: printk: console [tty1] enabled Jan 17 00:05:39.191507 kernel: ACPI: Core revision 20230628 Jan 17 00:05:39.191515 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 00:05:39.191522 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:05:39.191529 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:05:39.191536 kernel: landlock: Up and running. Jan 17 00:05:39.191542 kernel: SELinux: Initializing. Jan 17 00:05:39.191549 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:05:39.191556 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:05:39.191565 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:05:39.191572 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:05:39.191580 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 17 00:05:39.191587 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 17 00:05:39.191594 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 00:05:39.191601 kernel: rcu: Hierarchical SRCU implementation. 
Jan 17 00:05:39.191608 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:05:39.191615 kernel: Remapping and enabling EFI services. Jan 17 00:05:39.191628 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:05:39.191636 kernel: Detected PIPT I-cache on CPU1 Jan 17 00:05:39.191643 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 17 00:05:39.191651 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:05:39.191659 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 00:05:39.191666 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:05:39.191674 kernel: SMP: Total of 2 processors activated. Jan 17 00:05:39.191682 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 00:05:39.191689 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 17 00:05:39.191698 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 00:05:39.191705 kernel: CPU features: detected: CRC32 instructions Jan 17 00:05:39.191713 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 00:05:39.191720 kernel: CPU features: detected: LSE atomic instructions Jan 17 00:05:39.191727 kernel: CPU features: detected: Privileged Access Never Jan 17 00:05:39.191734 kernel: CPU: All CPU(s) started at EL1 Jan 17 00:05:39.191742 kernel: alternatives: applying system-wide alternatives Jan 17 00:05:39.191749 kernel: devtmpfs: initialized Jan 17 00:05:39.191756 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:05:39.191765 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:05:39.191773 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:05:39.191780 kernel: SMBIOS 3.1.0 present. Jan 17 00:05:39.191787 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 17 00:05:39.191795 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:05:39.191802 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 00:05:39.191810 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 00:05:39.191817 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 00:05:39.191825 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:05:39.191834 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 17 00:05:39.191841 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:05:39.191849 kernel: cpuidle: using governor menu Jan 17 00:05:39.191856 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 17 00:05:39.191863 kernel: ASID allocator initialised with 32768 entries Jan 17 00:05:39.191871 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:05:39.191878 kernel: Serial: AMBA PL011 UART driver Jan 17 00:05:39.191886 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 00:05:39.191893 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 00:05:39.191902 kernel: Modules: 509008 pages in range for PLT usage Jan 17 00:05:39.191909 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:05:39.191916 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:05:39.191924 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 00:05:39.191931 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 00:05:39.191939 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:05:39.191946 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:05:39.191954 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 00:05:39.191961 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 00:05:39.191970 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:05:39.191977 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:05:39.191985 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:05:39.191992 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:05:39.191999 kernel: ACPI: Interpreter enabled Jan 17 00:05:39.192006 kernel: ACPI: Using GIC for interrupt routing Jan 17 00:05:39.192014 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 17 00:05:39.192021 kernel: printk: console [ttyAMA0] enabled Jan 17 00:05:39.192029 kernel: printk: bootconsole [pl11] disabled Jan 17 00:05:39.192038 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 17 00:05:39.192045 kernel: iommu: Default domain type: Translated Jan 17 00:05:39.192053 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 00:05:39.192060 kernel: efivars: Registered efivars operations Jan 17 00:05:39.192067 kernel: vgaarb: loaded Jan 17 00:05:39.192075 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 00:05:39.192082 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:05:39.192090 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:05:39.192097 kernel: pnp: PnP ACPI init Jan 17 00:05:39.192106 kernel: pnp: PnP ACPI: found 0 devices Jan 17 00:05:39.192114 kernel: NET: Registered PF_INET protocol family Jan 17 00:05:39.192121 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:05:39.192129 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:05:39.192136 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:05:39.192144 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:05:39.192151 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:05:39.192159 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:05:39.192166 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:05:39.192174 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:05:39.192182 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 
00:05:39.192189 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:05:39.192196 kernel: kvm [1]: HYP mode not available Jan 17 00:05:39.192204 kernel: Initialise system trusted keyrings Jan 17 00:05:39.192211 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:05:39.192219 kernel: Key type asymmetric registered Jan 17 00:05:39.192232 kernel: Asymmetric key parser 'x509' registered Jan 17 00:05:39.192241 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 00:05:39.192250 kernel: io scheduler mq-deadline registered Jan 17 00:05:39.192257 kernel: io scheduler kyber registered Jan 17 00:05:39.192265 kernel: io scheduler bfq registered Jan 17 00:05:39.192272 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:05:39.192279 kernel: thunder_xcv, ver 1.0 Jan 17 00:05:39.192287 kernel: thunder_bgx, ver 1.0 Jan 17 00:05:39.192294 kernel: nicpf, ver 1.0 Jan 17 00:05:39.192302 kernel: nicvf, ver 1.0 Jan 17 00:05:39.192428 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 00:05:39.192501 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:05:38 UTC (1768608338) Jan 17 00:05:39.192512 kernel: efifb: probing for efifb Jan 17 00:05:39.192519 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 00:05:39.192527 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 00:05:39.192534 kernel: efifb: scrolling: redraw Jan 17 00:05:39.192541 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 00:05:39.192549 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:05:39.192556 kernel: fb0: EFI VGA frame buffer device Jan 17 00:05:39.192565 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 17 00:05:39.192572 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:05:39.192580 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 17 00:05:39.192587 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 00:05:39.192594 kernel: watchdog: Hard watchdog permanently disabled Jan 17 00:05:39.192602 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:05:39.192609 kernel: Segment Routing with IPv6 Jan 17 00:05:39.192616 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:05:39.192624 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:05:39.192632 kernel: Key type dns_resolver registered Jan 17 00:05:39.192640 kernel: registered taskstats version 1 Jan 17 00:05:39.192647 kernel: Loading compiled-in X.509 certificates Jan 17 00:05:39.192654 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4' Jan 17 00:05:39.192661 kernel: Key type .fscrypt registered Jan 17 00:05:39.192669 kernel: Key type fscrypt-provisioning registered Jan 17 00:05:39.192676 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:05:39.192683 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:05:39.192691 kernel: ima: No architecture policies found Jan 17 00:05:39.192699 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 00:05:39.192707 kernel: clk: Disabling unused clocks Jan 17 00:05:39.192714 kernel: Freeing unused kernel memory: 39424K Jan 17 00:05:39.192721 kernel: Run /init as init process Jan 17 00:05:39.192729 kernel: with arguments: Jan 17 00:05:39.192736 kernel: /init Jan 17 00:05:39.192743 kernel: with environment: Jan 17 00:05:39.192750 kernel: HOME=/ Jan 17 00:05:39.192758 kernel: TERM=linux Jan 17 00:05:39.192767 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:05:39.192778 systemd[1]: Detected virtualization microsoft. Jan 17 00:05:39.192786 systemd[1]: Detected architecture arm64. Jan 17 00:05:39.192794 systemd[1]: Running in initrd. Jan 17 00:05:39.192801 systemd[1]: No hostname configured, using default hostname. Jan 17 00:05:39.192809 systemd[1]: Hostname set to . Jan 17 00:05:39.192817 systemd[1]: Initializing machine ID from random generator. Jan 17 00:05:39.192827 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:05:39.192835 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:05:39.192843 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:05:39.192851 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:05:39.192859 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:05:39.192867 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:05:39.192876 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:05:39.192885 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:05:39.192895 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:05:39.192903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:05:39.192912 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:05:39.192919 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:05:39.192927 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:05:39.192935 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:05:39.192943 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:05:39.192951 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:05:39.192960 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:05:39.192968 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:05:39.192976 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:05:39.192984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 00:05:39.192993 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:05:39.193001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:05:39.193008 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:05:39.193016 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:05:39.193026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:05:39.193034 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:05:39.193042 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:05:39.193050 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:05:39.193074 systemd-journald[218]: Collecting audit messages is disabled. Jan 17 00:05:39.193096 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:05:39.193104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:05:39.193113 systemd-journald[218]: Journal started Jan 17 00:05:39.193131 systemd-journald[218]: Runtime Journal (/run/log/journal/4c17d89951fd4ac5b4837b8b0a6352b4) is 8.0M, max 78.5M, 70.5M free. Jan 17 00:05:39.199397 systemd-modules-load[219]: Inserted module 'overlay' Jan 17 00:05:39.221800 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:05:39.222566 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:05:39.243516 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:05:39.243538 kernel: Bridge firewalling registered Jan 17 00:05:39.239528 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:05:39.242794 systemd-modules-load[219]: Inserted module 'br_netfilter' Jan 17 00:05:39.249074 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:05:39.257491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:05:39.266055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:39.285507 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:05:39.297314 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:05:39.304406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:05:39.323459 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:05:39.329082 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:05:39.333817 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:05:39.349248 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:05:39.361006 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:05:39.379589 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:05:39.385378 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:05:39.400777 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 17 00:05:39.420281 dracut-cmdline[249]: dracut-dracut-053 Jan 17 00:05:39.429683 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:05:39.437042 systemd-resolved[251]: Positive Trust Anchors: Jan 17 00:05:39.437051 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:05:39.437082 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:05:39.439182 systemd-resolved[251]: Defaulting to hostname 'linux'. Jan 17 00:05:39.450400 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:05:39.459843 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:05:39.469755 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:05:39.577285 kernel: SCSI subsystem initialized Jan 17 00:05:39.584239 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:05:39.594235 kernel: iscsi: registered transport (tcp) Jan 17 00:05:39.612034 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:05:39.612089 kernel: QLogic iSCSI HBA Driver Jan 17 00:05:39.645481 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:05:39.660741 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:05:39.689929 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:05:39.689988 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:05:39.695210 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:05:39.743248 kernel: raid6: neonx8 gen() 15801 MB/s Jan 17 00:05:39.762240 kernel: raid6: neonx4 gen() 15691 MB/s Jan 17 00:05:39.781232 kernel: raid6: neonx2 gen() 13258 MB/s Jan 17 00:05:39.802237 kernel: raid6: neonx1 gen() 10549 MB/s Jan 17 00:05:39.821231 kernel: raid6: int64x8 gen() 6979 MB/s Jan 17 00:05:39.840235 kernel: raid6: int64x4 gen() 7374 MB/s Jan 17 00:05:39.860236 kernel: raid6: int64x2 gen() 6146 MB/s Jan 17 00:05:39.882063 kernel: raid6: int64x1 gen() 5071 MB/s Jan 17 00:05:39.882073 kernel: raid6: using algorithm neonx8 gen() 15801 MB/s Jan 17 00:05:39.905054 kernel: raid6: .... 
xor() 11969 MB/s, rmw enabled Jan 17 00:05:39.905098 kernel: raid6: using neon recovery algorithm Jan 17 00:05:39.913234 kernel: xor: measuring software checksum speed Jan 17 00:05:39.913247 kernel: 8regs : 19035 MB/sec Jan 17 00:05:39.918659 kernel: 32regs : 19487 MB/sec Jan 17 00:05:39.921562 kernel: arm64_neon : 27043 MB/sec Jan 17 00:05:39.925273 kernel: xor: using function: arm64_neon (27043 MB/sec) Jan 17 00:05:39.974235 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:05:39.983993 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:05:39.996394 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:05:40.015233 systemd-udevd[437]: Using default interface naming scheme 'v255'. Jan 17 00:05:40.020346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:05:40.044463 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:05:40.060386 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation Jan 17 00:05:40.087389 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:05:40.108479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:05:40.148825 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:05:40.168412 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:05:40.192031 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:05:40.202478 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:05:40.214390 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:05:40.225456 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:05:40.243241 kernel: hv_vmbus: Vmbus version:5.3 Jan 17 00:05:40.245430 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:05:40.267516 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 00:05:40.267541 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 17 00:05:40.282836 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 00:05:40.282889 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 00:05:40.293704 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 00:05:40.293728 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 00:05:40.277506 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:05:40.299060 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:05:40.299241 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:05:40.323933 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:05:40.352496 kernel: PTP clock support registered Jan 17 00:05:40.352521 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 17 00:05:40.352532 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 00:05:40.352542 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 00:05:40.343953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 17 00:05:40.375151 kernel: scsi host0: storvsc_host_t Jan 17 00:05:40.375360 kernel: scsi host1: storvsc_host_t Jan 17 00:05:40.375451 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 00:05:40.345139 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:40.373397 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:05:40.409802 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 17 00:05:40.409971 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 00:05:40.409982 kernel: hv_vmbus: registering driver hv_utils Jan 17 00:05:40.415871 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 00:05:40.415917 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 00:05:40.418609 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 00:05:40.419089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:05:40.449675 systemd-resolved[251]: Clock change detected. Flushing caches. Jan 17 00:05:40.477767 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 00:05:40.477927 kernel: hv_netvsc 000d3a06-ba46-000d-3a06-ba46000d3a06 eth0: VF slot 1 added Jan 17 00:05:40.477846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:40.498529 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 00:05:40.498709 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 00:05:40.498803 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:05:40.500148 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 00:05:40.501358 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:05:40.512167 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 00:05:40.512314 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 00:05:40.522139 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 00:05:40.527345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:05:40.527385 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 00:05:40.533148 kernel: hv_vmbus: registering driver hv_pci Jan 17 00:05:40.533191 kernel: hv_pci 5a9dc8c6-d07a-4907-bee0-9910873d7f06: PCI VMBus probing: Using version 0x10004 Jan 17 00:05:40.552897 kernel: hv_pci 5a9dc8c6-d07a-4907-bee0-9910873d7f06: PCI host bridge to bus d07a:00 Jan 17 00:05:40.553128 kernel: pci_bus d07a:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 17 00:05:40.557998 kernel: pci_bus d07a:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 00:05:40.565188 kernel: pci d07a:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 17 00:05:40.574413 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 00:05:40.596488 kernel: pci d07a:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:05:40.596534 kernel: pci d07a:00:02.0: enabling Extended Tags Jan 17 00:05:40.596549 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#280 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:05:40.596712 kernel: pci d07a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d07a:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 17 00:05:40.607165 kernel: pci_bus d07a:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 00:05:40.616590 kernel: pci d07a:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:05:40.638153 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:05:40.667539 kernel: mlx5_core d07a:00:02.0: enabling device (0000 -> 0002) Jan 17 00:05:40.674148 kernel: mlx5_core d07a:00:02.0: firmware version: 16.30.5026 Jan 17 00:05:40.873954 kernel: hv_netvsc 000d3a06-ba46-000d-3a06-ba46000d3a06 eth0: VF registering: eth1 Jan 17 00:05:40.874174 kernel: mlx5_core d07a:00:02.0 eth1: joined to eth0 Jan 17 00:05:40.879599 kernel: mlx5_core d07a:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 17 00:05:40.890154 kernel: mlx5_core d07a:00:02.0 enP53370s1: renamed from eth1 Jan 17 00:05:41.164555 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 00:05:41.178575 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 00:05:41.206624 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (498) Jan 17 00:05:41.219276 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 17 00:05:41.225030 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 00:05:41.250153 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (493) Jan 17 00:05:41.252380 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:05:41.276245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:05:41.288234 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:05:42.279885 disk-uuid[607]: The operation has completed successfully. Jan 17 00:05:42.284809 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:05:42.342015 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:05:42.342108 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:05:42.383228 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:05:42.394054 sh[720]: Success Jan 17 00:05:42.427182 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 00:05:42.690169 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:05:42.708231 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:05:42.715803 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 17 00:05:42.746860 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 17 00:05:42.746925 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:05:42.752501 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:05:42.756903 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:05:42.761542 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:05:43.104949 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:05:43.109096 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:05:43.130378 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:05:43.137293 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:05:43.170613 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:05:43.170659 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:05:43.173854 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:05:43.224178 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:05:43.239039 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:05:43.243080 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:05:43.244684 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:05:43.260315 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:05:43.267090 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:05:43.284273 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:05:43.304728 systemd-networkd[902]: lo: Link UP Jan 17 00:05:43.304736 systemd-networkd[902]: lo: Gained carrier Jan 17 00:05:43.306256 systemd-networkd[902]: Enumeration completed Jan 17 00:05:43.306423 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:05:43.314336 systemd[1]: Reached target network.target - Network. Jan 17 00:05:43.317255 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:05:43.317258 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:05:43.392246 kernel: mlx5_core d07a:00:02.0 enP53370s1: Link up Jan 17 00:05:43.431368 kernel: hv_netvsc 000d3a06-ba46-000d-3a06-ba46000d3a06 eth0: Data path switched to VF: enP53370s1 Jan 17 00:05:43.431038 systemd-networkd[902]: enP53370s1: Link UP Jan 17 00:05:43.431138 systemd-networkd[902]: eth0: Link UP Jan 17 00:05:43.431247 systemd-networkd[902]: eth0: Gained carrier Jan 17 00:05:43.431256 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 00:05:43.450441 systemd-networkd[902]: enP53370s1: Gained carrier Jan 17 00:05:43.465155 systemd-networkd[902]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:05:44.389653 ignition[904]: Ignition 2.19.0 Jan 17 00:05:44.389665 ignition[904]: Stage: fetch-offline Jan 17 00:05:44.393107 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:05:44.389698 ignition[904]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:44.389706 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:44.389811 ignition[904]: parsed url from cmdline: "" Jan 17 00:05:44.389817 ignition[904]: no config URL provided Jan 17 00:05:44.389822 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:05:44.416464 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 00:05:44.389828 ignition[904]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:05:44.389833 ignition[904]: failed to fetch config: resource requires networking Jan 17 00:05:44.390033 ignition[904]: Ignition finished successfully Jan 17 00:05:44.434272 ignition[917]: Ignition 2.19.0 Jan 17 00:05:44.434280 ignition[917]: Stage: fetch Jan 17 00:05:44.434498 ignition[917]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:44.434512 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:44.434615 ignition[917]: parsed url from cmdline: "" Jan 17 00:05:44.434618 ignition[917]: no config URL provided Jan 17 00:05:44.434622 ignition[917]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:05:44.434630 ignition[917]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:05:44.434654 ignition[917]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 00:05:44.562270 ignition[917]: GET result: OK Jan 17 00:05:44.562335 ignition[917]: config has been read from IMDS userdata Jan 17 00:05:44.562374 ignition[917]: parsing config with SHA512: 3a7d8d8bf2cbe34bac779aa2912689b3afcd538c14a6defc198ae6fba91359cddb5ceffae4ec5c43ef1ed9418c9ba37642497ca45abdca33ac09d4bb2abde534 Jan 17 00:05:44.566705 unknown[917]: fetched base config from "system" Jan 17 00:05:44.567186 ignition[917]: fetch: fetch complete Jan 17 00:05:44.566711 unknown[917]: fetched base config from "system" Jan 17 00:05:44.567190 ignition[917]: fetch: fetch passed Jan 17 00:05:44.566721 unknown[917]: fetched user config from "azure" Jan 17 00:05:44.567237 ignition[917]: Ignition finished successfully Jan 17 00:05:44.570281 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:05:44.592286 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:05:44.609426 ignition[924]: Ignition 2.19.0 Jan 17 00:05:44.609440 ignition[924]: Stage: kargs Jan 17 00:05:44.609614 ignition[924]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:44.615795 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:05:44.609625 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:44.610618 ignition[924]: kargs: kargs passed Jan 17 00:05:44.610660 ignition[924]: Ignition finished successfully Jan 17 00:05:44.635250 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 17 00:05:44.648221 systemd-networkd[902]: eth0: Gained IPv6LL Jan 17 00:05:44.653817 ignition[930]: Ignition 2.19.0 Jan 17 00:05:44.653827 ignition[930]: Stage: disks Jan 17 00:05:44.657712 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:05:44.653992 ignition[930]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:44.664245 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:05:44.654002 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:44.673541 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:05:44.654909 ignition[930]: disks: disks passed Jan 17 00:05:44.683098 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:05:44.654949 ignition[930]: Ignition finished successfully Jan 17 00:05:44.692524 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:05:44.702245 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:05:44.723351 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:05:44.805019 systemd-fsck[938]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 17 00:05:44.813627 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:05:44.828310 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:05:44.882172 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none. Jan 17 00:05:44.883468 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:05:44.887186 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:05:44.937277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:05:44.957144 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949) Jan 17 00:05:44.958100 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:05:44.979919 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:05:44.979943 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:05:44.979953 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:05:44.971290 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 00:05:44.985391 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:05:44.985428 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:05:45.012092 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:05:45.024047 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:05:45.027257 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:05:45.041334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:05:45.608026 coreos-metadata[951]: Jan 17 00:05:45.607 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:05:45.617034 coreos-metadata[951]: Jan 17 00:05:45.617 INFO Fetch successful Jan 17 00:05:45.617034 coreos-metadata[951]: Jan 17 00:05:45.617 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:05:45.630581 coreos-metadata[951]: Jan 17 00:05:45.628 INFO Fetch successful Jan 17 00:05:45.645200 coreos-metadata[951]: Jan 17 00:05:45.645 INFO wrote hostname ci-4081.3.6-n-93f9562822 to /sysroot/etc/hostname Jan 17 00:05:45.653068 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:05:45.885169 initrd-setup-root[978]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:05:45.923807 initrd-setup-root[985]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:05:45.945250 initrd-setup-root[992]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:05:45.953360 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:05:47.109605 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:05:47.120292 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:05:47.132257 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:05:47.146388 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:05:47.143001 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:05:47.170991 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:05:47.180716 ignition[1066]: INFO : Ignition 2.19.0 Jan 17 00:05:47.180716 ignition[1066]: INFO : Stage: mount Jan 17 00:05:47.180716 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:47.180716 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:47.180716 ignition[1066]: INFO : mount: mount passed Jan 17 00:05:47.180716 ignition[1066]: INFO : Ignition finished successfully Jan 17 00:05:47.183796 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:05:47.211206 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:05:47.223810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:05:47.258135 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1078) Jan 17 00:05:47.264131 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:05:47.264163 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:05:47.272468 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:05:47.279130 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:05:47.280774 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:05:47.308017 ignition[1095]: INFO : Ignition 2.19.0 Jan 17 00:05:47.308017 ignition[1095]: INFO : Stage: files Jan 17 00:05:47.314411 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:47.314411 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:47.314411 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:05:47.314411 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:05:47.314411 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:05:47.363160 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:05:47.369197 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:05:47.369197 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:05:47.363518 unknown[1095]: wrote ssh authorized keys file for user: core Jan 17 00:05:47.388851 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:05:47.396731 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:05:47.396731 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 17 00:05:47.396731 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 17 00:05:47.440444 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:05:47.558474 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 17 00:05:47.558474 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:05:47.558474 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 17 00:05:47.774746 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 
00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:05:47.865140 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 17 00:05:48.328031 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 17 00:05:48.554926 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:05:48.554926 ignition[1095]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 17 00:05:48.580994 ignition[1095]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:05:48.591974 ignition[1095]: INFO : files: files passed Jan 17 00:05:48.591974 
ignition[1095]: INFO : Ignition finished successfully Jan 17 00:05:48.596151 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:05:48.626376 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:05:48.641273 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:05:48.721738 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:48.721738 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:48.652312 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:05:48.746160 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:48.652409 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:05:48.700663 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:05:48.706382 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:05:48.729415 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:05:48.786295 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:05:48.786409 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:05:48.796093 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:05:48.805498 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:05:48.814021 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:05:48.816264 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:05:48.847395 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:05:48.867340 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:05:48.881222 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:05:48.886482 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:05:48.896064 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:05:48.904763 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:05:48.904881 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:05:48.917379 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:05:48.921778 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:05:48.930850 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:05:48.939545 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:05:48.948213 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:05:48.957384 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:05:48.966628 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:05:48.976462 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:05:48.985201 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:05:48.994673 systemd[1]: Stopped target swap.target - Swaps. 
Jan 17 00:05:49.002209 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:05:49.002330 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:05:49.013992 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:05:49.018787 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:05:49.028136 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:05:49.028207 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:05:49.038375 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:05:49.038494 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:05:49.052755 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:05:49.052907 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:05:49.063268 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:05:49.063361 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:05:49.073343 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:05:49.073435 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:05:49.102387 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:05:49.118316 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:05:49.127212 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:05:49.157827 ignition[1147]: INFO : Ignition 2.19.0 Jan 17 00:05:49.157827 ignition[1147]: INFO : Stage: umount Jan 17 00:05:49.157827 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:49.157827 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:49.157827 ignition[1147]: INFO : umount: umount passed Jan 17 00:05:49.157827 ignition[1147]: INFO : Ignition finished successfully Jan 17 00:05:49.127354 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:05:49.138191 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:05:49.138294 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:05:49.153827 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:05:49.153928 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:05:49.165972 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:05:49.166286 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:05:49.173362 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:05:49.173414 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:05:49.189578 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:05:49.189630 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:05:49.197537 systemd[1]: Stopped target network.target - Network. Jan 17 00:05:49.205286 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:05:49.205342 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:05:49.216606 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:05:49.224975 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 17 00:05:49.230151 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:05:49.239142 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:05:49.247022 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:05:49.254740 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:05:49.254793 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:05:49.263597 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:05:49.263642 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:05:49.272031 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:05:49.272081 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:05:49.281060 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:05:49.281101 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:05:49.289694 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:05:49.302661 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:05:49.312489 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:05:49.313099 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:05:49.313192 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:05:49.316224 systemd-networkd[902]: eth0: DHCPv6 lease lost Jan 17 00:05:49.323329 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:05:49.323420 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:05:49.333279 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:05:49.496694 kernel: hv_netvsc 000d3a06-ba46-000d-3a06-ba46000d3a06 eth0: Data path switched from VF: enP53370s1 Jan 17 00:05:49.333385 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:05:49.345049 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:05:49.345115 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:05:49.354286 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:05:49.354356 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:05:49.379344 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:05:49.386819 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:05:49.386891 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:05:49.396087 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:05:49.408603 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:05:49.408710 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:05:49.438017 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:05:49.438184 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:05:49.446026 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:05:49.446086 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:05:49.455922 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:05:49.455968 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 17 00:05:49.464797 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:05:49.464995 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:05:49.474430 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:05:49.474496 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:05:49.483187 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:05:49.483228 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:05:49.500260 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:05:49.500346 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:05:49.509250 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:05:49.509299 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:05:49.522494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:05:49.522546 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:05:49.565325 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:05:49.578194 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:05:49.578262 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:05:49.586725 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 00:05:49.586765 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:05:49.596643 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:05:49.596689 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:05:49.606110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:05:49.606160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:49.616533 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:05:49.616651 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:05:49.624439 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:05:49.624529 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:05:49.635310 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:05:49.653551 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:05:49.743353 systemd[1]: Switching root. Jan 17 00:05:49.784848 systemd-journald[218]: Journal stopped Jan 17 00:05:55.306654 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:05:55.306690 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:05:55.306700 kernel: SELinux: policy capability open_perms=1 Jan 17 00:05:55.306713 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:05:55.306721 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:05:55.306729 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:05:55.306737 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:05:55.306746 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:05:55.306754 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:05:55.306762 kernel: audit: type=1403 audit(1768608352.237:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:05:55.306773 systemd[1]: Successfully loaded SELinux policy in 181.773ms. Jan 17 00:05:55.306782 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.797ms. Jan 17 00:05:55.306792 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:05:55.306803 systemd[1]: Detected virtualization microsoft. Jan 17 00:05:55.306856 systemd[1]: Detected architecture arm64. Jan 17 00:05:55.306869 systemd[1]: Detected first boot. Jan 17 00:05:55.306884 systemd[1]: Hostname set to <ci-4081.3.6-n-93f9562822>. Jan 17 00:05:55.306893 systemd[1]: Initializing machine ID from random generator. Jan 17 00:05:55.306903 zram_generator::config[1206]: No configuration found. Jan 17 00:05:55.306913 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:05:55.306922 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:05:55.306935 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 00:05:55.306945 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:05:55.306954 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:05:55.306963 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:05:55.306973 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:05:55.306982 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:05:55.306992 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:05:55.307003 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:05:55.307013 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:05:55.307022 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:05:55.307031 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:05:55.307041 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:05:55.307050 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:05:55.307060 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:05:55.307070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:05:55.307079 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 00:05:55.307090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:05:55.307100 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:05:55.307109 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:05:55.307134 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:05:55.307146 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:05:55.307155 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:05:55.307165 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:05:55.307176 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:05:55.307186 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:05:55.307195 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:05:55.307204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:05:55.307214 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:05:55.307223 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:05:55.307233 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:05:55.307244 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:05:55.307254 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:05:55.307263 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:05:55.307273 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:05:55.307283 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:05:55.307293 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:05:55.307304 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:05:55.307314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:05:55.307324 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:05:55.307333 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:05:55.307343 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:05:55.307353 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:05:55.307362 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:05:55.307372 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:05:55.307381 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:05:55.307393 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:05:55.307403 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 00:05:55.307413 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 00:05:55.307423 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 17 00:05:55.307432 kernel: fuse: init (API version 7.39) Jan 17 00:05:55.307441 kernel: loop: module loaded Jan 17 00:05:55.307450 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:05:55.307459 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:05:55.307471 kernel: ACPI: bus type drm_connector registered Jan 17 00:05:55.307479 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:05:55.307512 systemd-journald[1304]: Collecting audit messages is disabled. Jan 17 00:05:55.307531 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:05:55.307543 systemd-journald[1304]: Journal started Jan 17 00:05:55.307563 systemd-journald[1304]: Runtime Journal (/run/log/journal/54c28b55805d498999d05be3e574b3e0) is 8.0M, max 78.5M, 70.5M free. Jan 17 00:05:55.324037 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:05:55.325211 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:05:55.330319 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:05:55.335393 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:05:55.339753 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:05:55.344747 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:05:55.349950 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:05:55.354473 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:05:55.359981 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:05:55.365644 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:05:55.365860 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:05:55.371352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:05:55.371560 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:05:55.376808 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:05:55.377021 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:05:55.381700 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:05:55.381911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:05:55.387626 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:05:55.387831 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:05:55.392941 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:05:55.393272 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:05:55.398412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:05:55.403710 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:05:55.409832 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:05:55.415607 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:05:55.431590 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:05:55.440247 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 17 00:05:55.447743 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:05:55.452859 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:05:55.456078 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:05:55.467326 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:05:55.472729 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:05:55.473810 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:05:55.478547 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:05:55.481281 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:05:55.488900 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:05:55.498382 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:05:55.508281 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:05:55.513643 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:05:55.519445 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:05:55.529138 systemd-journald[1304]: Time spent on flushing to /var/log/journal/54c28b55805d498999d05be3e574b3e0 is 13.166ms for 885 entries. Jan 17 00:05:55.529138 systemd-journald[1304]: System Journal (/var/log/journal/54c28b55805d498999d05be3e574b3e0) is 8.0M, max 2.6G, 2.6G free. Jan 17 00:05:55.574092 systemd-journald[1304]: Received client request to flush runtime journal. Jan 17 00:05:55.536647 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:05:55.544287 udevadm[1368]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 00:05:55.576204 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:05:55.600908 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:05:55.637681 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jan 17 00:05:55.637695 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jan 17 00:05:55.644192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:05:55.657257 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:05:55.715711 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:05:55.728359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:05:55.744118 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 17 00:05:55.744143 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 17 00:05:55.749509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:05:56.129986 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:05:56.139265 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 17 00:05:56.171965 systemd-udevd[1391]: Using default interface naming scheme 'v255'. Jan 17 00:05:56.284305 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:05:56.304075 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:05:56.337934 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 17 00:05:56.366787 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:05:56.425142 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#278 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:05:56.454726 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:05:56.449957 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:05:56.474180 kernel: hv_vmbus: registering driver hv_balloon Jan 17 00:05:56.474267 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 17 00:05:56.483213 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 17 00:05:56.498023 kernel: hv_vmbus: registering driver hyperv_fb Jan 17 00:05:56.498102 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 17 00:05:56.503327 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 17 00:05:56.506114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:05:56.518137 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:05:56.523679 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:05:56.524661 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:05:56.524899 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:56.542372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:05:56.553362 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:05:56.553606 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:56.563275 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:05:56.603141 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1402) Jan 17 00:05:56.658546 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:05:56.673522 systemd-networkd[1403]: lo: Link UP Jan 17 00:05:56.673529 systemd-networkd[1403]: lo: Gained carrier Jan 17 00:05:56.675857 systemd-networkd[1403]: Enumeration completed Jan 17 00:05:56.676040 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:05:56.676362 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:05:56.676367 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:05:56.689333 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 17 00:05:56.732148 kernel: mlx5_core d07a:00:02.0 enP53370s1: Link up Jan 17 00:05:56.757187 kernel: hv_netvsc 000d3a06-ba46-000d-3a06-ba46000d3a06 eth0: Data path switched to VF: enP53370s1 Jan 17 00:05:56.757648 systemd-networkd[1403]: enP53370s1: Link UP Jan 17 00:05:56.757738 systemd-networkd[1403]: eth0: Link UP Jan 17 00:05:56.757741 systemd-networkd[1403]: eth0: Gained carrier Jan 17 00:05:56.757755 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:05:56.762342 systemd-networkd[1403]: enP53370s1: Gained carrier Jan 17 00:05:56.772169 systemd-networkd[1403]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:05:56.831178 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:05:56.843276 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:05:56.949326 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:05:56.979494 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:05:56.985880 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:56.992196 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:05:57.003238 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:05:57.007414 lvm[1492]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:05:57.027640 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:05:57.033553 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:05:57.038939 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:05:57.038963 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:05:57.043460 systemd[1]: Reached target machines.target - Containers. Jan 17 00:05:57.048734 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:05:57.060290 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:05:57.066812 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:05:57.071380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:05:57.072310 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:05:57.080310 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:05:57.089082 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:05:57.105194 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:05:57.118241 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:05:57.141140 kernel: loop0: detected capacity change from 0 to 31320 Jan 17 00:05:57.159718 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 17 00:05:57.161109 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:05:57.522140 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:05:57.602145 kernel: loop1: detected capacity change from 0 to 114432 Jan 17 00:05:58.127134 kernel: loop2: detected capacity change from 0 to 207008 Jan 17 00:05:58.218151 kernel: loop3: detected capacity change from 0 to 114328 Jan 17 00:05:58.596233 systemd-networkd[1403]: eth0: Gained IPv6LL Jan 17 00:05:58.599002 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:05:58.629144 kernel: loop4: detected capacity change from 0 to 31320 Jan 17 00:05:58.641143 kernel: loop5: detected capacity change from 0 to 114432 Jan 17 00:05:58.654147 kernel: loop6: detected capacity change from 0 to 207008 Jan 17 00:05:58.670156 kernel: loop7: detected capacity change from 0 to 114328 Jan 17 00:05:58.677082 (sd-merge)[1515]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 17 00:05:58.677532 (sd-merge)[1515]: Merged extensions into '/usr'. Jan 17 00:05:58.681202 systemd[1]: Reloading requested from client PID 1499 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:05:58.681313 systemd[1]: Reloading... Jan 17 00:05:58.746241 zram_generator::config[1548]: No configuration found. Jan 17 00:05:58.879336 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:05:58.957824 systemd[1]: Reloading finished in 276 ms. Jan 17 00:05:58.970787 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:05:58.994313 systemd[1]: Starting ensure-sysext.service... Jan 17 00:05:58.999219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:05:59.006510 systemd[1]: Reloading requested from client PID 1603 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:05:59.006525 systemd[1]: Reloading... Jan 17 00:05:59.037585 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:05:59.037848 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:05:59.038497 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:05:59.038713 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jan 17 00:05:59.038757 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jan 17 00:05:59.044833 systemd-tmpfiles[1604]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:05:59.044850 systemd-tmpfiles[1604]: Skipping /boot Jan 17 00:05:59.056244 systemd-tmpfiles[1604]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:05:59.056272 systemd-tmpfiles[1604]: Skipping /boot Jan 17 00:05:59.075178 zram_generator::config[1631]: No configuration found. Jan 17 00:05:59.186308 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:05:59.261679 systemd[1]: Reloading finished in 254 ms. 
Jan 17 00:05:59.277224 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:05:59.297361 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:05:59.304839 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:05:59.312115 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:05:59.321271 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:05:59.329022 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:05:59.344649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:05:59.351603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:05:59.358593 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:05:59.371333 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:05:59.380276 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:05:59.387823 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:05:59.387881 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:05:59.393965 systemd[1]: Finished ensure-sysext.service. Jan 17 00:05:59.398447 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:05:59.398706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:05:59.404340 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:05:59.404581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:05:59.413107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:05:59.413450 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:05:59.420500 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:05:59.420695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:05:59.431552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:05:59.431701 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:05:59.432991 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:05:59.446996 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:05:59.469079 systemd-resolved[1702]: Positive Trust Anchors: Jan 17 00:05:59.469096 systemd-resolved[1702]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:05:59.469139 systemd-resolved[1702]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:05:59.489154 systemd-resolved[1702]: Using system hostname 'ci-4081.3.6-n-93f9562822'. Jan 17 00:05:59.490687 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:05:59.496335 augenrules[1737]: No rules Jan 17 00:05:59.496254 systemd[1]: Reached target network.target - Network. Jan 17 00:05:59.500588 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:05:59.505680 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:05:59.511391 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:05:59.943199 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:05:59.948986 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:06:03.025983 ldconfig[1496]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:06:03.039773 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:06:03.049334 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:06:03.063636 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:06:03.069423 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:06:03.074577 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:06:03.080209 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:06:03.086184 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:06:03.090690 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:06:03.096059 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:06:03.101704 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:06:03.101732 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:06:03.105720 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:06:03.111222 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:06:03.117957 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:06:03.123688 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:06:03.131139 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:06:03.136022 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 17 00:06:03.140134 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:06:03.144227 systemd[1]: System is tainted: cgroupsv1 Jan 17 00:06:03.144265 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:06:03.144285 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:06:03.146687 systemd[1]: Starting chronyd.service - NTP client/server... Jan 17 00:06:03.153245 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:06:03.161277 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:06:03.171271 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:06:03.185302 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:06:03.194226 (chronyd)[1753]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 17 00:06:03.194775 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:06:03.199463 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:06:03.199569 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 17 00:06:03.202278 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 17 00:06:03.209028 KVP[1762]: KVP starting; pid is:1762 Jan 17 00:06:03.209387 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 17 00:06:03.211635 jq[1760]: false Jan 17 00:06:03.211937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:03.219375 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:06:03.227097 chronyd[1768]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 17 00:06:03.235300 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:06:03.244421 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:06:03.253048 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:06:03.259397 KVP[1762]: KVP LIC Version: 3.1 Jan 17 00:06:03.262430 kernel: hv_utils: KVP IC version 4.0 Jan 17 00:06:03.267893 extend-filesystems[1761]: Found loop4 Jan 17 00:06:03.270714 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 17 00:06:03.282691 chronyd[1768]: Timezone right/UTC failed leap second check, ignoring Jan 17 00:06:03.284346 extend-filesystems[1761]: Found loop5 Jan 17 00:06:03.284346 extend-filesystems[1761]: Found loop6 Jan 17 00:06:03.284346 extend-filesystems[1761]: Found loop7 Jan 17 00:06:03.284346 extend-filesystems[1761]: Found sda Jan 17 00:06:03.284346 extend-filesystems[1761]: Found sda1 Jan 17 00:06:03.284346 extend-filesystems[1761]: Found sda2 Jan 17 00:06:03.284346 extend-filesystems[1761]: Found sda3 Jan 17 00:06:03.284346 extend-filesystems[1761]: Found usr Jan 17 00:06:03.284346 extend-filesystems[1761]: Found sda4 Jan 17 00:06:03.284346 extend-filesystems[1761]: Found sda6 Jan 17 00:06:03.284346 extend-filesystems[1761]: Found sda7 Jan 17 00:06:03.284346 extend-filesystems[1761]: Found sda9 Jan 17 00:06:03.284346 extend-filesystems[1761]: Checking size of /dev/sda9 Jan 17 00:06:03.282887 chronyd[1768]: Loaded seccomp filter (level 2) Jan 17 00:06:03.465373 coreos-metadata[1755]: Jan 17 00:06:03.394 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:06:03.465373 coreos-metadata[1755]: Jan 17 00:06:03.399 INFO Fetch successful Jan 17 00:06:03.465373 coreos-metadata[1755]: Jan 17 00:06:03.399 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 17 00:06:03.465373 coreos-metadata[1755]: Jan 17 00:06:03.408 INFO Fetch successful Jan 17 00:06:03.465373 coreos-metadata[1755]: Jan 17 00:06:03.408 INFO Fetching http://168.63.129.16/machine/8ed15f45-8398-4949-8b95-771a519093f5/a23d14d9%2Da608%2D49cb%2Dbb09%2D529fecad70dd.%5Fci%2D4081.3.6%2Dn%2D93f9562822?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 17 00:06:03.465373 coreos-metadata[1755]: Jan 17 00:06:03.410 INFO Fetch successful Jan 17 00:06:03.465373 coreos-metadata[1755]: Jan 17 00:06:03.410 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:06:03.465373 coreos-metadata[1755]: Jan 17 00:06:03.423 INFO Fetch successful Jan 17 00:06:03.465597 extend-filesystems[1761]: Old size kept for /dev/sda9 Jan 17 00:06:03.465597 extend-filesystems[1761]: Found sr0 Jan 17 00:06:03.290862 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:06:03.303611 dbus-daemon[1759]: [system] SELinux support is enabled Jan 17 00:06:03.299945 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:06:03.307961 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:06:03.508091 update_engine[1789]: I20260117 00:06:03.422477 1789 main.cc:92] Flatcar Update Engine starting Jan 17 00:06:03.508091 update_engine[1789]: I20260117 00:06:03.428031 1789 update_check_scheduler.cc:74] Next update check in 2m51s Jan 17 00:06:03.321326 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:06:03.510420 jq[1794]: true Jan 17 00:06:03.339849 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:06:03.363621 systemd[1]: Started chronyd.service - NTP client/server. Jan 17 00:06:03.379555 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:06:03.379796 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:06:03.380016 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 17 00:06:03.381331 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:06:03.410918 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:06:03.411176 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:06:03.425838 systemd-logind[1786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 17 00:06:03.518709 jq[1827]: true Jan 17 00:06:03.426054 systemd-logind[1786]: New seat seat0. Jan 17 00:06:03.433426 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:06:03.462649 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:06:03.481495 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:06:03.481742 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:06:03.518422 (ntainerd)[1828]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:06:03.520649 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:06:03.553972 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:06:03.554843 dbus-daemon[1759]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:06:03.555779 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:06:03.555802 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:06:03.569888 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1816) Jan 17 00:06:03.564558 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:06:03.564575 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:06:03.577879 tar[1826]: linux-arm64/LICENSE Jan 17 00:06:03.573359 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:06:03.578530 tar[1826]: linux-arm64/helm Jan 17 00:06:03.587107 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:06:03.592392 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:06:03.689094 bash[1865]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:06:03.693470 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:06:03.703721 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:06:03.826527 locksmithd[1873]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:06:04.302401 containerd[1828]: time="2026-01-17T00:06:04.300881980Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:06:04.339542 tar[1826]: linux-arm64/README.md Jan 17 00:06:04.355430 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:06:04.373538 containerd[1828]: time="2026-01-17T00:06:04.373501260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:06:04.375257 containerd[1828]: time="2026-01-17T00:06:04.375226260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:06:04.375319 containerd[1828]: time="2026-01-17T00:06:04.375257740Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:06:04.375319 containerd[1828]: time="2026-01-17T00:06:04.375274780Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:06:04.375520 containerd[1828]: time="2026-01-17T00:06:04.375499900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:06:04.375551 containerd[1828]: time="2026-01-17T00:06:04.375523780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:06:04.375613 containerd[1828]: time="2026-01-17T00:06:04.375597140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:06:04.375650 containerd[1828]: time="2026-01-17T00:06:04.375612660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:06:04.375909 containerd[1828]: time="2026-01-17T00:06:04.375885620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:06:04.375938 containerd[1828]: time="2026-01-17T00:06:04.375909220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:06:04.375938 containerd[1828]: time="2026-01-17T00:06:04.375923180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:06:04.375938 containerd[1828]: time="2026-01-17T00:06:04.375932660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:06:04.376385 containerd[1828]: time="2026-01-17T00:06:04.376369300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:06:04.376919 containerd[1828]: time="2026-01-17T00:06:04.376900580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:06:04.377060 containerd[1828]: time="2026-01-17T00:06:04.377042020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:06:04.377083 containerd[1828]: time="2026-01-17T00:06:04.377059540Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 17 00:06:04.377272 containerd[1828]: time="2026-01-17T00:06:04.377253780Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:06:04.377331 containerd[1828]: time="2026-01-17T00:06:04.377318140Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:06:04.392483 containerd[1828]: time="2026-01-17T00:06:04.392451060Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:06:04.392828 containerd[1828]: time="2026-01-17T00:06:04.392760940Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:06:04.392828 containerd[1828]: time="2026-01-17T00:06:04.392791860Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:06:04.392888 containerd[1828]: time="2026-01-17T00:06:04.392853300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:06:04.392888 containerd[1828]: time="2026-01-17T00:06:04.392872540Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:06:04.393740 containerd[1828]: time="2026-01-17T00:06:04.393720780Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:06:04.394080 containerd[1828]: time="2026-01-17T00:06:04.394061340Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:06:04.394218 containerd[1828]: time="2026-01-17T00:06:04.394194260Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:06:04.394218 containerd[1828]: time="2026-01-17T00:06:04.394214500Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:06:04.394274 containerd[1828]: time="2026-01-17T00:06:04.394230540Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:06:04.394274 containerd[1828]: time="2026-01-17T00:06:04.394246020Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:06:04.394274 containerd[1828]: time="2026-01-17T00:06:04.394263580Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:06:04.394419 containerd[1828]: time="2026-01-17T00:06:04.394277220Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:06:04.394419 containerd[1828]: time="2026-01-17T00:06:04.394292820Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:06:04.394419 containerd[1828]: time="2026-01-17T00:06:04.394315740Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:06:04.394419 containerd[1828]: time="2026-01-17T00:06:04.394329420Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:06:04.394419 containerd[1828]: time="2026-01-17T00:06:04.394341940Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 17 00:06:04.394419 containerd[1828]: time="2026-01-17T00:06:04.394355020Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:06:04.394419 containerd[1828]: time="2026-01-17T00:06:04.394377820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394419 containerd[1828]: time="2026-01-17T00:06:04.394393100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394419 containerd[1828]: time="2026-01-17T00:06:04.394404940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394419 containerd[1828]: time="2026-01-17T00:06:04.394418060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394430420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394451740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394463580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394476020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394490500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394505460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394517780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394530980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394542980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394558740Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394579380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394591660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394602580Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:06:04.394862 containerd[1828]: time="2026-01-17T00:06:04.394654260Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 17 00:06:04.395103 containerd[1828]: time="2026-01-17T00:06:04.394673860Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:06:04.395103 containerd[1828]: time="2026-01-17T00:06:04.394685340Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:06:04.395103 containerd[1828]: time="2026-01-17T00:06:04.394697260Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:06:04.395103 containerd[1828]: time="2026-01-17T00:06:04.394707100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.395103 containerd[1828]: time="2026-01-17T00:06:04.394719140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:06:04.395103 containerd[1828]: time="2026-01-17T00:06:04.394728900Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:06:04.395103 containerd[1828]: time="2026-01-17T00:06:04.394739700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:06:04.395245 containerd[1828]: time="2026-01-17T00:06:04.395010500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:06:04.395245 containerd[1828]: time="2026-01-17T00:06:04.395072140Z" level=info msg="Connect containerd service" Jan 17 00:06:04.395245 containerd[1828]: time="2026-01-17T00:06:04.395101260Z" level=info msg="using legacy CRI server" Jan 17 00:06:04.395245 containerd[1828]: time="2026-01-17T00:06:04.395108340Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:06:04.396865 containerd[1828]: time="2026-01-17T00:06:04.395830860Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:06:04.397306 containerd[1828]: time="2026-01-17T00:06:04.397283180Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:06:04.405015 containerd[1828]: time="2026-01-17T00:06:04.398430820Z" level=info msg="Start subscribing containerd event" Jan 17 00:06:04.405015 containerd[1828]: time="2026-01-17T00:06:04.399037780Z" level=info msg="Start recovering state" Jan 17 00:06:04.405015 containerd[1828]: time="2026-01-17T00:06:04.399145420Z" level=info msg="Start event monitor" Jan 17 00:06:04.405015 containerd[1828]: time="2026-01-17T00:06:04.399158860Z" level=info msg="Start snapshots syncer" Jan 17 00:06:04.405015 containerd[1828]: time="2026-01-17T00:06:04.399168540Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:06:04.405015 containerd[1828]: time="2026-01-17T00:06:04.399180380Z" level=info msg="Start streaming server" Jan 17 00:06:04.405015 containerd[1828]: time="2026-01-17T00:06:04.399807140Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:06:04.405015 containerd[1828]: time="2026-01-17T00:06:04.399859900Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:06:04.405015 containerd[1828]: time="2026-01-17T00:06:04.399915620Z" level=info msg="containerd successfully booted in 0.100879s" Jan 17 00:06:04.400256 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:06:04.510250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:04.510584 (kubelet)[1914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:06:04.897492 kubelet[1914]: E0117 00:06:04.897411 1914 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:06:04.899570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:06:04.899705 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:06:04.919174 sshd_keygen[1796]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:06:04.937042 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:06:04.947347 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:06:04.953483 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 17 00:06:04.961081 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:06:04.961600 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:06:04.974533 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:06:04.984538 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:06:05.000566 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:06:05.007328 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 00:06:05.013824 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:06:05.023228 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 17 00:06:05.029206 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:06:05.033842 systemd[1]: Startup finished in 13.907s (kernel) + 12.977s (userspace) = 26.884s. Jan 17 00:06:05.397424 login[1948]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 17 00:06:05.398002 login[1945]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:05.408908 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:06:05.409830 systemd-logind[1786]: New session 2 of user core. Jan 17 00:06:05.419615 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:06:05.463905 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:06:05.471380 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:06:05.475696 (systemd)[1959]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:06:05.604620 systemd[1959]: Queued start job for default target default.target. Jan 17 00:06:05.605613 systemd[1959]: Created slice app.slice - User Application Slice. Jan 17 00:06:05.605731 systemd[1959]: Reached target paths.target - Paths. Jan 17 00:06:05.605805 systemd[1959]: Reached target timers.target - Timers. Jan 17 00:06:05.611196 systemd[1959]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:06:05.618985 systemd[1959]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:06:05.619184 systemd[1959]: Reached target sockets.target - Sockets. Jan 17 00:06:05.619266 systemd[1959]: Reached target basic.target - Basic System. Jan 17 00:06:05.619360 systemd[1959]: Reached target default.target - Main User Target. Jan 17 00:06:05.619458 systemd[1959]: Startup finished in 138ms. Jan 17 00:06:05.619473 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:06:05.626405 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:06:06.398876 login[1948]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:06.402965 systemd-logind[1786]: New session 1 of user core. Jan 17 00:06:06.409390 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 17 00:06:06.703708 waagent[1949]: 2026-01-17T00:06:06.703564Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 17 00:06:06.708094 waagent[1949]: 2026-01-17T00:06:06.708041Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 17 00:06:06.711562 waagent[1949]: 2026-01-17T00:06:06.711522Z INFO Daemon Daemon Python: 3.11.9 Jan 17 00:06:06.714882 waagent[1949]: 2026-01-17T00:06:06.714832Z INFO Daemon Daemon Run daemon Jan 17 00:06:06.717996 waagent[1949]: 2026-01-17T00:06:06.717960Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 17 00:06:06.725293 waagent[1949]: 2026-01-17T00:06:06.725225Z INFO Daemon Daemon Using waagent for provisioning Jan 17 00:06:06.729755 waagent[1949]: 2026-01-17T00:06:06.729712Z INFO Daemon Daemon Activate resource disk Jan 17 00:06:06.733644 waagent[1949]: 2026-01-17T00:06:06.733607Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 17 00:06:06.743469 waagent[1949]: 2026-01-17T00:06:06.743420Z INFO Daemon Daemon Found device: None Jan 17 00:06:06.747161 waagent[1949]: 2026-01-17T00:06:06.747114Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 17 00:06:06.753813 waagent[1949]: 2026-01-17T00:06:06.753773Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 17 00:06:06.764900 waagent[1949]: 2026-01-17T00:06:06.764850Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:06:06.770015 waagent[1949]: 2026-01-17T00:06:06.769970Z INFO Daemon Daemon Running default provisioning handler Jan 17 00:06:06.780703 waagent[1949]: 2026-01-17T00:06:06.780635Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 17 00:06:06.791325 waagent[1949]: 2026-01-17T00:06:06.791267Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 17 00:06:06.798550 waagent[1949]: 2026-01-17T00:06:06.798502Z INFO Daemon Daemon cloud-init is enabled: False Jan 17 00:06:06.802363 waagent[1949]: 2026-01-17T00:06:06.802321Z INFO Daemon Daemon Copying ovf-env.xml Jan 17 00:06:06.893207 waagent[1949]: 2026-01-17T00:06:06.893090Z INFO Daemon Daemon Successfully mounted dvd Jan 17 00:06:06.905935 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 17 00:06:06.907581 waagent[1949]: 2026-01-17T00:06:06.907524Z INFO Daemon Daemon Detect protocol endpoint Jan 17 00:06:06.911756 waagent[1949]: 2026-01-17T00:06:06.911706Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:06:06.916091 waagent[1949]: 2026-01-17T00:06:06.916049Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 17 00:06:06.921000 waagent[1949]: 2026-01-17T00:06:06.920962Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 17 00:06:06.925240 waagent[1949]: 2026-01-17T00:06:06.925197Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 17 00:06:06.929309 waagent[1949]: 2026-01-17T00:06:06.929272Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 17 00:06:06.962073 waagent[1949]: 2026-01-17T00:06:06.961970Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 17 00:06:06.967223 waagent[1949]: 2026-01-17T00:06:06.967196Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 17 00:06:06.971320 waagent[1949]: 2026-01-17T00:06:06.971276Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 17 00:06:07.245140 waagent[1949]: 2026-01-17T00:06:07.244984Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 17 00:06:07.250243 waagent[1949]: 2026-01-17T00:06:07.250174Z INFO Daemon Daemon Forcing an update of the goal state. Jan 17 00:06:07.258419 waagent[1949]: 2026-01-17T00:06:07.258370Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:06:07.276734 waagent[1949]: 2026-01-17T00:06:07.276691Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 17 00:06:07.281296 waagent[1949]: 2026-01-17T00:06:07.281253Z INFO Daemon Jan 17 00:06:07.283491 waagent[1949]: 2026-01-17T00:06:07.283450Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 879b9242-d48d-4d4d-b7a0-abf70bad0f7f eTag: 17388001323766592519 source: Fabric] Jan 17 00:06:07.292266 waagent[1949]: 2026-01-17T00:06:07.292224Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 17 00:06:07.297704 waagent[1949]: 2026-01-17T00:06:07.297657Z INFO Daemon Jan 17 00:06:07.299844 waagent[1949]: 2026-01-17T00:06:07.299797Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:06:07.309237 waagent[1949]: 2026-01-17T00:06:07.309205Z INFO Daemon Daemon Downloading artifacts profile blob Jan 17 00:06:07.454816 waagent[1949]: 2026-01-17T00:06:07.454736Z INFO Daemon Downloaded certificate {'thumbprint': '531DAA31404402BB8AF0A0B50E07B1ED4919B511', 'hasPrivateKey': True} Jan 17 00:06:07.462860 waagent[1949]: 2026-01-17T00:06:07.462816Z INFO Daemon Fetch goal state completed Jan 17 00:06:07.473248 waagent[1949]: 2026-01-17T00:06:07.473211Z INFO Daemon Daemon Starting provisioning Jan 17 00:06:07.477341 waagent[1949]: 2026-01-17T00:06:07.477296Z INFO Daemon Daemon Handle ovf-env.xml. Jan 17 00:06:07.481311 waagent[1949]: 2026-01-17T00:06:07.481276Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-93f9562822] Jan 17 00:06:07.507215 waagent[1949]: 2026-01-17T00:06:07.507148Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-93f9562822] Jan 17 00:06:07.512179 waagent[1949]: 2026-01-17T00:06:07.512131Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 17 00:06:07.516940 waagent[1949]: 2026-01-17T00:06:07.516900Z INFO Daemon Daemon Primary interface is [eth0] Jan 17 00:06:07.565796 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:06:07.565802 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 17 00:06:07.565828 systemd-networkd[1403]: eth0: DHCP lease lost Jan 17 00:06:07.567014 waagent[1949]: 2026-01-17T00:06:07.566855Z INFO Daemon Daemon Create user account if not exists Jan 17 00:06:07.571279 waagent[1949]: 2026-01-17T00:06:07.571235Z INFO Daemon Daemon User core already exists, skip useradd Jan 17 00:06:07.576167 systemd-networkd[1403]: eth0: DHCPv6 lease lost Jan 17 00:06:07.579408 waagent[1949]: 2026-01-17T00:06:07.576165Z INFO Daemon Daemon Configure sudoer Jan 17 00:06:07.579948 waagent[1949]: 2026-01-17T00:06:07.579901Z INFO Daemon Daemon Configure sshd Jan 17 00:06:07.583487 waagent[1949]: 2026-01-17T00:06:07.583433Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 17 00:06:07.593014 waagent[1949]: 2026-01-17T00:06:07.592973Z INFO Daemon Daemon Deploy ssh public key. Jan 17 00:06:07.606240 systemd-networkd[1403]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:06:08.723140 waagent[1949]: 2026-01-17T00:06:08.722938Z INFO Daemon Daemon Provisioning complete Jan 17 00:06:08.739836 waagent[1949]: 2026-01-17T00:06:08.739791Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 17 00:06:08.745302 waagent[1949]: 2026-01-17T00:06:08.745253Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 17 00:06:08.752584 waagent[1949]: 2026-01-17T00:06:08.752544Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 17 00:06:08.876941 waagent[2012]: 2026-01-17T00:06:08.876871Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 17 00:06:08.877880 waagent[2012]: 2026-01-17T00:06:08.877404Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 17 00:06:08.877880 waagent[2012]: 2026-01-17T00:06:08.877475Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 17 00:06:08.931153 waagent[2012]: 2026-01-17T00:06:08.930902Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 17 00:06:08.931262 waagent[2012]: 2026-01-17T00:06:08.931152Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:06:08.931262 waagent[2012]: 2026-01-17T00:06:08.931223Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:06:08.939152 waagent[2012]: 2026-01-17T00:06:08.939082Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:06:08.944929 waagent[2012]: 2026-01-17T00:06:08.944893Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 17 00:06:08.945417 waagent[2012]: 2026-01-17T00:06:08.945372Z INFO ExtHandler Jan 17 00:06:08.945484 waagent[2012]: 2026-01-17T00:06:08.945458Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b7bb4669-9a14-4768-ac62-b153827ff13a eTag: 17388001323766592519 source: Fabric] Jan 17 00:06:08.945767 waagent[2012]: 2026-01-17T00:06:08.945731Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 17 00:06:08.946356 waagent[2012]: 2026-01-17T00:06:08.946313Z INFO ExtHandler Jan 17 00:06:08.946417 waagent[2012]: 2026-01-17T00:06:08.946392Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:06:08.950038 waagent[2012]: 2026-01-17T00:06:08.950008Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:06:09.021550 waagent[2012]: 2026-01-17T00:06:09.021419Z INFO ExtHandler Downloaded certificate {'thumbprint': '531DAA31404402BB8AF0A0B50E07B1ED4919B511', 'hasPrivateKey': True} Jan 17 00:06:09.022010 waagent[2012]: 2026-01-17T00:06:09.021964Z INFO ExtHandler Fetch goal state completed Jan 17 00:06:09.037360 waagent[2012]: 2026-01-17T00:06:09.037309Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2012 Jan 17 00:06:09.037512 waagent[2012]: 2026-01-17T00:06:09.037479Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 17 00:06:09.039063 waagent[2012]: 2026-01-17T00:06:09.039021Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 17 00:06:09.039427 waagent[2012]: 2026-01-17T00:06:09.039393Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 17 00:06:09.060830 waagent[2012]: 2026-01-17T00:06:09.060792Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 17 00:06:09.061013 waagent[2012]: 2026-01-17T00:06:09.060975Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 17 00:06:09.066756 waagent[2012]: 2026-01-17T00:06:09.066721Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 17 00:06:09.073224 systemd[1]: Reloading requested from client PID 2025 ('systemctl') (unit waagent.service)... Jan 17 00:06:09.073238 systemd[1]: Reloading... Jan 17 00:06:09.155147 zram_generator::config[2077]: No configuration found. Jan 17 00:06:09.239628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:06:09.318581 systemd[1]: Reloading finished in 245 ms. Jan 17 00:06:09.342207 waagent[2012]: 2026-01-17T00:06:09.342089Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 17 00:06:09.347280 systemd[1]: Reloading requested from client PID 2120 ('systemctl') (unit waagent.service)... Jan 17 00:06:09.347408 systemd[1]: Reloading... Jan 17 00:06:09.418152 zram_generator::config[2152]: No configuration found. Jan 17 00:06:09.522388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:06:09.596348 systemd[1]: Reloading finished in 248 ms. Jan 17 00:06:09.622497 waagent[2012]: 2026-01-17T00:06:09.622410Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 17 00:06:09.622599 waagent[2012]: 2026-01-17T00:06:09.622575Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 17 00:06:09.999299 waagent[2012]: 2026-01-17T00:06:09.999215Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 17 00:06:09.999859 waagent[2012]: 2026-01-17T00:06:09.999805Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 17 00:06:10.000657 waagent[2012]: 2026-01-17T00:06:10.000553Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 17 00:06:10.000736 waagent[2012]: 2026-01-17T00:06:10.000687Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:06:10.000853 waagent[2012]: 2026-01-17T00:06:10.000773Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:06:10.001219 waagent[2012]: 2026-01-17T00:06:10.001156Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 17 00:06:10.001555 waagent[2012]: 2026-01-17T00:06:10.001519Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:06:10.001555 waagent[2012]: 2026-01-17T00:06:10.001380Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 17 00:06:10.001910 waagent[2012]: 2026-01-17T00:06:10.001862Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 17 00:06:10.002229 waagent[2012]: 2026-01-17T00:06:10.002180Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 17 00:06:10.002229 waagent[2012]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 17 00:06:10.002229 waagent[2012]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 17 00:06:10.002229 waagent[2012]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 17 00:06:10.002229 waagent[2012]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:06:10.002229 waagent[2012]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:06:10.002229 waagent[2012]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:06:10.002789 waagent[2012]: 2026-01-17T00:06:10.002704Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 17 00:06:10.002871 waagent[2012]: 2026-01-17T00:06:10.002830Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:06:10.003092 waagent[2012]: 2026-01-17T00:06:10.003046Z INFO EnvHandler ExtHandler Configure routes Jan 17 00:06:10.003511 waagent[2012]: 2026-01-17T00:06:10.003380Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 17 00:06:10.003511 waagent[2012]: 2026-01-17T00:06:10.003442Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 17 00:06:10.003777 waagent[2012]: 2026-01-17T00:06:10.003737Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 17 00:06:10.005222 waagent[2012]: 2026-01-17T00:06:10.005175Z INFO EnvHandler ExtHandler Gateway:None Jan 17 00:06:10.006449 waagent[2012]: 2026-01-17T00:06:10.006401Z INFO EnvHandler ExtHandler Routes:None Jan 17 00:06:10.010237 waagent[2012]: 2026-01-17T00:06:10.010191Z INFO ExtHandler ExtHandler Jan 17 00:06:10.010706 waagent[2012]: 2026-01-17T00:06:10.010658Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8507f999-1827-4976-b6bc-bdb42c1f52bf correlation ccd189bd-016f-4687-8941-69552d1579d1 created: 2026-01-17T00:05:07.206299Z] Jan 17 00:06:10.011679 waagent[2012]: 2026-01-17T00:06:10.011638Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:06:10.012344 waagent[2012]: 2026-01-17T00:06:10.012305Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 17 00:06:10.042841 waagent[2012]: 2026-01-17T00:06:10.042729Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E06F7E42-622D-4512-AFB5-16E6F81AC20D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 17 00:06:10.056922 waagent[2012]: 2026-01-17T00:06:10.056539Z INFO MonitorHandler ExtHandler Network interfaces: Jan 17 00:06:10.056922 waagent[2012]: Executing ['ip', '-a', '-o', 'link']: Jan 17 00:06:10.056922 waagent[2012]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 17 00:06:10.056922 waagent[2012]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:ba:46 brd ff:ff:ff:ff:ff:ff Jan 17 00:06:10.056922 waagent[2012]: 3: enP53370s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:ba:46 brd ff:ff:ff:ff:ff:ff\ altname enP53370p0s2 Jan 17 00:06:10.056922 waagent[2012]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 17 00:06:10.056922 waagent[2012]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 17 00:06:10.056922 waagent[2012]: 2: eth0 inet 10.200.20.22/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 17 00:06:10.056922 waagent[2012]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 17 00:06:10.056922 waagent[2012]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 17 00:06:10.056922 waagent[2012]: 2: eth0 inet6 fe80::20d:3aff:fe06:ba46/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 00:06:10.121517 waagent[2012]: 2026-01-17T00:06:10.121401Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 17 00:06:10.121517 waagent[2012]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:06:10.121517 waagent[2012]: pkts bytes target prot opt in out source destination Jan 17 00:06:10.121517 waagent[2012]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:06:10.121517 waagent[2012]: pkts bytes target prot opt in out source destination Jan 17 00:06:10.121517 waagent[2012]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:06:10.121517 waagent[2012]: pkts bytes target prot opt in out source destination Jan 17 00:06:10.121517 waagent[2012]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:06:10.121517 waagent[2012]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:06:10.121517 waagent[2012]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:06:10.125184 waagent[2012]: 2026-01-17T00:06:10.125099Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 17 00:06:10.125184 waagent[2012]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:06:10.125184 waagent[2012]: pkts bytes target prot opt in out source destination Jan 17 00:06:10.125184 waagent[2012]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:06:10.125184 waagent[2012]: pkts bytes target prot opt in out source destination Jan 17 00:06:10.125184 waagent[2012]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:06:10.125184 waagent[2012]: pkts bytes target prot opt in out source destination Jan 17 00:06:10.125184 waagent[2012]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:06:10.125184 waagent[2012]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:06:10.125184 waagent[2012]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:06:10.125814 waagent[2012]: 2026-01-17T00:06:10.125703Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 17 00:06:15.113803 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:06:15.124290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:15.226617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:15.229458 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:06:15.347312 kubelet[2256]: E0117 00:06:15.347262 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:06:15.352326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:06:15.352487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:06:23.282297 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:06:23.289406 systemd[1]: Started sshd@0-10.200.20.22:22-10.200.16.10:51890.service - OpenSSH per-connection server daemon (10.200.16.10:51890). 
Jan 17 00:06:23.804258 sshd[2263]: Accepted publickey for core from 10.200.16.10 port 51890 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:23.805527 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:23.809164 systemd-logind[1786]: New session 3 of user core. Jan 17 00:06:23.815330 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:06:24.217331 systemd[1]: Started sshd@1-10.200.20.22:22-10.200.16.10:51902.service - OpenSSH per-connection server daemon (10.200.16.10:51902). Jan 17 00:06:24.699797 sshd[2268]: Accepted publickey for core from 10.200.16.10 port 51902 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:24.701447 sshd[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:24.705028 systemd-logind[1786]: New session 4 of user core. Jan 17 00:06:24.713423 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:06:25.050320 sshd[2268]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:25.052726 systemd[1]: sshd@1-10.200.20.22:22-10.200.16.10:51902.service: Deactivated successfully. Jan 17 00:06:25.055738 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:06:25.056562 systemd-logind[1786]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:06:25.057590 systemd-logind[1786]: Removed session 4. Jan 17 00:06:25.131420 systemd[1]: Started sshd@2-10.200.20.22:22-10.200.16.10:51906.service - OpenSSH per-connection server daemon (10.200.16.10:51906). Jan 17 00:06:25.363841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:06:25.369271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:25.469435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:25.472074 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:06:25.575591 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 51906 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:25.577499 sshd[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:25.583565 systemd-logind[1786]: New session 5 of user core. Jan 17 00:06:25.588407 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:06:25.611145 kubelet[2290]: E0117 00:06:25.610199 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:06:25.614244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:06:25.614399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:06:25.919643 sshd[2276]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:25.923553 systemd[1]: sshd@2-10.200.20.22:22-10.200.16.10:51906.service: Deactivated successfully. Jan 17 00:06:25.923675 systemd-logind[1786]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:06:25.926177 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:06:25.928010 systemd-logind[1786]: Removed session 5. 
Jan 17 00:06:25.989333 systemd[1]: Started sshd@3-10.200.20.22:22-10.200.16.10:51910.service - OpenSSH per-connection server daemon (10.200.16.10:51910). Jan 17 00:06:26.436335 sshd[2304]: Accepted publickey for core from 10.200.16.10 port 51910 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:26.437590 sshd[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:26.441301 systemd-logind[1786]: New session 6 of user core. Jan 17 00:06:26.448498 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:06:26.772321 sshd[2304]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:26.775352 systemd[1]: sshd@3-10.200.20.22:22-10.200.16.10:51910.service: Deactivated successfully. Jan 17 00:06:26.778101 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:06:26.778895 systemd-logind[1786]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:06:26.779615 systemd-logind[1786]: Removed session 6. Jan 17 00:06:26.855632 systemd[1]: Started sshd@4-10.200.20.22:22-10.200.16.10:51916.service - OpenSSH per-connection server daemon (10.200.16.10:51916). Jan 17 00:06:27.067903 chronyd[1768]: Selected source PHC0 Jan 17 00:06:27.300807 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 51916 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:27.302097 sshd[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:27.306306 systemd-logind[1786]: New session 7 of user core. Jan 17 00:06:27.312322 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:06:27.745684 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:06:27.745955 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:06:27.760195 sudo[2316]: pam_unix(sudo:session): session closed for user root Jan 17 00:06:27.838395 sshd[2312]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:27.842745 systemd-logind[1786]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:06:27.843475 systemd[1]: sshd@4-10.200.20.22:22-10.200.16.10:51916.service: Deactivated successfully. Jan 17 00:06:27.845975 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:06:27.846871 systemd-logind[1786]: Removed session 7. Jan 17 00:06:27.926392 systemd[1]: Started sshd@5-10.200.20.22:22-10.200.16.10:51932.service - OpenSSH per-connection server daemon (10.200.16.10:51932). Jan 17 00:06:28.409497 sshd[2321]: Accepted publickey for core from 10.200.16.10 port 51932 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:28.410828 sshd[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:28.415295 systemd-logind[1786]: New session 8 of user core. Jan 17 00:06:28.422408 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 00:06:28.685617 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:06:28.685901 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:06:28.688933 sudo[2326]: pam_unix(sudo:session): session closed for user root Jan 17 00:06:28.692990 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:06:28.693455 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:06:28.705663 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:06:28.706355 auditctl[2329]: No rules Jan 17 00:06:28.706758 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:06:28.706961 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:06:28.710291 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:06:28.737117 augenrules[2348]: No rules Jan 17 00:06:28.738433 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:06:28.741396 sudo[2325]: pam_unix(sudo:session): session closed for user root Jan 17 00:06:28.816618 sshd[2321]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:28.819715 systemd[1]: sshd@5-10.200.20.22:22-10.200.16.10:51932.service: Deactivated successfully. Jan 17 00:06:28.822154 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:06:28.822748 systemd-logind[1786]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:06:28.823616 systemd-logind[1786]: Removed session 8. Jan 17 00:06:28.903402 systemd[1]: Started sshd@6-10.200.20.22:22-10.200.16.10:51948.service - OpenSSH per-connection server daemon (10.200.16.10:51948). Jan 17 00:06:29.385405 sshd[2357]: Accepted publickey for core from 10.200.16.10 port 51948 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:29.386698 sshd[2357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:29.390360 systemd-logind[1786]: New session 9 of user core. Jan 17 00:06:29.400423 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:06:29.660553 sudo[2361]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:06:29.661226 sudo[2361]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:06:30.873496 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:06:30.873548 (dockerd)[2377]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:06:31.660049 dockerd[2377]: time="2026-01-17T00:06:31.659383567Z" level=info msg="Starting up" Jan 17 00:06:32.409314 dockerd[2377]: time="2026-01-17T00:06:32.409258184Z" level=info msg="Loading containers: start." Jan 17 00:06:32.573146 kernel: Initializing XFRM netlink socket Jan 17 00:06:32.747037 systemd-networkd[1403]: docker0: Link UP Jan 17 00:06:32.769067 dockerd[2377]: time="2026-01-17T00:06:32.769030057Z" level=info msg="Loading containers: done." 
Jan 17 00:06:32.786617 dockerd[2377]: time="2026-01-17T00:06:32.786576594Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:06:32.786754 dockerd[2377]: time="2026-01-17T00:06:32.786667434Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:06:32.786780 dockerd[2377]: time="2026-01-17T00:06:32.786762714Z" level=info msg="Daemon has completed initialization" Jan 17 00:06:32.853496 dockerd[2377]: time="2026-01-17T00:06:32.853129420Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:06:32.854138 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:06:33.566941 containerd[1828]: time="2026-01-17T00:06:33.566909881Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:06:34.357696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634556758.mount: Deactivated successfully. Jan 17 00:06:35.534263 containerd[1828]: time="2026-01-17T00:06:35.534212318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:35.536826 containerd[1828]: time="2026-01-17T00:06:35.536619081Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 17 00:06:35.540209 containerd[1828]: time="2026-01-17T00:06:35.540183724Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:35.547031 containerd[1828]: time="2026-01-17T00:06:35.546667011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:35.547789 containerd[1828]: time="2026-01-17T00:06:35.547758612Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.980811971s" Jan 17 00:06:35.547848 containerd[1828]: time="2026-01-17T00:06:35.547793652Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 17 00:06:35.548492 containerd[1828]: time="2026-01-17T00:06:35.548469013Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:06:35.863896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:06:35.870296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:35.969230 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
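dockerd reports above that its API is listening on /run/docker.sock. The standard-library sketch below queries that socket's /version endpoint; it assumes the daemon is running and that the caller can read the socket (root or the docker group), and it is only an illustration of the API surface the log mentions, not part of the boot flow itself.

```python
# Minimal sketch: query the Docker Engine API over the Unix socket the daemon
# says it is listening on ("API listen on /run/docker.sock" above).
import json
import socket

DOCKER_SOCK = "/run/docker.sock"  # path taken from the log line

def docker_version(sock_path: str = DOCKER_SOCK) -> dict:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        # HTTP/1.0 so the daemon closes the connection after responding.
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        raw = b""
        while chunk := s.recv(4096):
            raw += chunk
    _headers, _, body = raw.partition(b"\r\n\r\n")
    return json.loads(body)

if __name__ == "__main__":
    info = docker_version()
    print(info.get("Version"), info.get("ApiVersion"))
```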
Jan 17 00:06:35.971958 (kubelet)[2581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:06:36.081552 kubelet[2581]: E0117 00:06:36.081482 2581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:06:36.085284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:06:36.085439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:06:37.357269 containerd[1828]: time="2026-01-17T00:06:37.357220889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:37.359872 containerd[1828]: time="2026-01-17T00:06:37.359841491Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 17 00:06:37.362968 containerd[1828]: time="2026-01-17T00:06:37.362926615Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:37.367687 containerd[1828]: time="2026-01-17T00:06:37.367644420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:37.368806 containerd[1828]: time="2026-01-17T00:06:37.368699021Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.820199488s" Jan 17 00:06:37.368806 containerd[1828]: time="2026-01-17T00:06:37.368728701Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 17 00:06:37.369572 containerd[1828]: time="2026-01-17T00:06:37.369550782Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:06:38.566639 containerd[1828]: time="2026-01-17T00:06:38.566590223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:38.569475 containerd[1828]: time="2026-01-17T00:06:38.569211546Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 17 00:06:38.573138 containerd[1828]: time="2026-01-17T00:06:38.573082870Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:38.579173 containerd[1828]: time="2026-01-17T00:06:38.579113916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 17 00:06:38.580416 containerd[1828]: time="2026-01-17T00:06:38.580091837Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.210433735s" Jan 17 00:06:38.580416 containerd[1828]: time="2026-01-17T00:06:38.580136997Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 17 00:06:38.581168 containerd[1828]: time="2026-01-17T00:06:38.581144798Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:06:39.627441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606954805.mount: Deactivated successfully. Jan 17 00:06:39.931091 containerd[1828]: time="2026-01-17T00:06:39.930961038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:39.934607 containerd[1828]: time="2026-01-17T00:06:39.934575002Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 17 00:06:39.938044 containerd[1828]: time="2026-01-17T00:06:39.937997005Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:39.943676 containerd[1828]: time="2026-01-17T00:06:39.943630131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:39.944744 containerd[1828]: time="2026-01-17T00:06:39.944357052Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.363180294s" Jan 17 00:06:39.944744 containerd[1828]: time="2026-01-17T00:06:39.944391132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 17 00:06:39.944913 containerd[1828]: time="2026-01-17T00:06:39.944884772Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:06:40.617666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176299728.mount: Deactivated successfully. 
Jan 17 00:06:41.972154 containerd[1828]: time="2026-01-17T00:06:41.971603554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:41.974196 containerd[1828]: time="2026-01-17T00:06:41.974169397Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 17 00:06:41.977299 containerd[1828]: time="2026-01-17T00:06:41.977256920Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:41.982386 containerd[1828]: time="2026-01-17T00:06:41.982343885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:41.984222 containerd[1828]: time="2026-01-17T00:06:41.983442326Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.038524594s" Jan 17 00:06:41.984222 containerd[1828]: time="2026-01-17T00:06:41.983475006Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 17 00:06:41.984222 containerd[1828]: time="2026-01-17T00:06:41.983929887Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:06:42.552110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859552978.mount: Deactivated successfully. 
Jan 17 00:06:42.572150 containerd[1828]: time="2026-01-17T00:06:42.571984697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:42.575266 containerd[1828]: time="2026-01-17T00:06:42.575229740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 17 00:06:42.578371 containerd[1828]: time="2026-01-17T00:06:42.578325623Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:42.582490 containerd[1828]: time="2026-01-17T00:06:42.582446507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:42.583302 containerd[1828]: time="2026-01-17T00:06:42.583170548Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 599.215181ms" Jan 17 00:06:42.583302 containerd[1828]: time="2026-01-17T00:06:42.583201308Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 17 00:06:42.583940 containerd[1828]: time="2026-01-17T00:06:42.583773869Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:06:43.226050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037807957.mount: Deactivated successfully. Jan 17 00:06:44.603130 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 17 00:06:46.114179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:06:46.121315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:46.235775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:46.238941 (kubelet)[2726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:06:46.371898 kubelet[2726]: E0117 00:06:46.371728 2726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:06:46.377312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:06:46.377644 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
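For reference, the three "Scheduled restart job" entries for kubelet.service so far (counters 2, 3 and 4) are spaced roughly ten seconds apart. The sketch below only measures those gaps from the timestamps in the journal; the unit's actual restart settings are not visible in this log, so no claim is made about RestartSec.

```python
# Observed spacing between the kubelet "Scheduled restart job" entries above.
# Timestamps are copied from the journal; this only measures the gaps.
from datetime import datetime

restarts = [
    ("counter 2", "00:06:25.363841"),
    ("counter 3", "00:06:35.863896"),
    ("counter 4", "00:06:46.114179"),
]

times = [datetime.strptime(t, "%H:%M:%S.%f") for _, t in restarts]
for (label, _), prev, cur in zip(restarts[1:], times, times[1:]):
    print(f"{label}: {(cur - prev).total_seconds():.1f}s after the previous attempt")
```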
Jan 17 00:06:46.826742 containerd[1828]: time="2026-01-17T00:06:46.826642543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:46.829175 containerd[1828]: time="2026-01-17T00:06:46.829144306Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 17 00:06:46.832232 containerd[1828]: time="2026-01-17T00:06:46.832192629Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:46.837189 containerd[1828]: time="2026-01-17T00:06:46.837133034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:46.839146 containerd[1828]: time="2026-01-17T00:06:46.838237715Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.254432926s" Jan 17 00:06:46.839146 containerd[1828]: time="2026-01-17T00:06:46.838270875Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 17 00:06:49.166611 update_engine[1789]: I20260117 00:06:49.166528 1789 update_attempter.cc:509] Updating boot flags... Jan 17 00:06:49.377958 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2766) Jan 17 00:06:52.578581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:52.589557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:52.624384 systemd[1]: Reloading requested from client PID 2801 ('systemctl') (unit session-9.scope)... Jan 17 00:06:52.624400 systemd[1]: Reloading... Jan 17 00:06:52.729233 zram_generator::config[2848]: No configuration found. Jan 17 00:06:52.815710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:06:52.893761 systemd[1]: Reloading finished in 269 ms. Jan 17 00:06:52.939878 (kubelet)[2909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:06:52.941302 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:52.941568 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:06:52.941790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:52.945481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:53.159828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:53.162893 (kubelet)[2924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:06:53.194858 kubelet[2924]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:06:53.194858 kubelet[2924]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:06:53.194858 kubelet[2924]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:06:53.195388 kubelet[2924]: I0117 00:06:53.195342 2924 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:06:53.955313 kubelet[2924]: I0117 00:06:53.955279 2924 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:06:53.955313 kubelet[2924]: I0117 00:06:53.955307 2924 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:06:53.955591 kubelet[2924]: I0117 00:06:53.955578 2924 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:06:53.980150 kubelet[2924]: E0117 00:06:53.979858 2924 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:06:53.980150 kubelet[2924]: I0117 00:06:53.980061 2924 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:06:53.986906 kubelet[2924]: E0117 00:06:53.986872 2924 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:06:53.987093 kubelet[2924]: I0117 00:06:53.987079 2924 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:06:53.990040 kubelet[2924]: I0117 00:06:53.990019 2924 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:06:53.991023 kubelet[2924]: I0117 00:06:53.990514 2924 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:06:53.991023 kubelet[2924]: I0117 00:06:53.990543 2924 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-93f9562822","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:06:53.991023 kubelet[2924]: I0117 00:06:53.990742 2924 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:06:53.991023 kubelet[2924]: I0117 00:06:53.990752 2924 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:06:53.991223 kubelet[2924]: I0117 00:06:53.990882 2924 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:06:53.999527 kubelet[2924]: I0117 00:06:53.999508 2924 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:06:53.999641 kubelet[2924]: I0117 00:06:53.999632 2924 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:06:53.999705 kubelet[2924]: I0117 00:06:53.999698 2924 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:06:53.999767 kubelet[2924]: I0117 00:06:53.999758 2924 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:06:54.001214 kubelet[2924]: W0117 00:06:54.001161 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-93f9562822&limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 17 00:06:54.001281 kubelet[2924]: E0117 00:06:54.001228 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-93f9562822&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:06:54.002041 
kubelet[2924]: I0117 00:06:54.002023 2924 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:06:54.002606 kubelet[2924]: I0117 00:06:54.002585 2924 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:06:54.002728 kubelet[2924]: W0117 00:06:54.002717 2924 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:06:54.003766 kubelet[2924]: I0117 00:06:54.003314 2924 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:06:54.003766 kubelet[2924]: I0117 00:06:54.003348 2924 server.go:1287] "Started kubelet" Jan 17 00:06:54.003766 kubelet[2924]: W0117 00:06:54.003452 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 17 00:06:54.003766 kubelet[2924]: E0117 00:06:54.003481 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:06:54.008051 kubelet[2924]: I0117 00:06:54.008029 2924 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:06:54.009660 kubelet[2924]: I0117 00:06:54.009621 2924 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:06:54.010426 kubelet[2924]: I0117 00:06:54.010403 2924 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:06:54.011316 kubelet[2924]: I0117 00:06:54.011265 2924 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:06:54.011492 kubelet[2924]: I0117 00:06:54.011474 2924 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:06:54.011781 kubelet[2924]: I0117 00:06:54.011748 2924 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:06:54.012481 kubelet[2924]: E0117 00:06:54.012114 2924 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-93f9562822.188b5bfb676f7e62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-93f9562822,UID:ci-4081.3.6-n-93f9562822,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-93f9562822,},FirstTimestamp:2026-01-17 00:06:54.003330658 +0000 UTC m=+0.837439230,LastTimestamp:2026-01-17 00:06:54.003330658 +0000 UTC m=+0.837439230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-93f9562822,}" Jan 17 00:06:54.014224 kubelet[2924]: I0117 00:06:54.013938 2924 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:06:54.014224 kubelet[2924]: I0117 00:06:54.014033 
2924 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:06:54.014224 kubelet[2924]: I0117 00:06:54.014095 2924 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:06:54.015048 kubelet[2924]: W0117 00:06:54.014891 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 17 00:06:54.015048 kubelet[2924]: E0117 00:06:54.014935 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:06:54.016006 kubelet[2924]: I0117 00:06:54.015301 2924 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:06:54.016006 kubelet[2924]: I0117 00:06:54.015378 2924 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:06:54.016006 kubelet[2924]: E0117 00:06:54.015831 2924 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-93f9562822\" not found" Jan 17 00:06:54.016006 kubelet[2924]: E0117 00:06:54.015898 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-93f9562822?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="200ms" Jan 17 00:06:54.017009 kubelet[2924]: E0117 00:06:54.016992 2924 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:06:54.017277 kubelet[2924]: I0117 00:06:54.017262 2924 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:06:54.044146 kubelet[2924]: I0117 00:06:54.044087 2924 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:06:54.045336 kubelet[2924]: I0117 00:06:54.045311 2924 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:06:54.045336 kubelet[2924]: I0117 00:06:54.045335 2924 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:06:54.045435 kubelet[2924]: I0117 00:06:54.045356 2924 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
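The single-line nodeConfig dump a few entries back is dense; the part that matters most for node behaviour is the hard eviction thresholds. The values below are restated verbatim from that log line (100Mi for memory.available, 10%/5% for nodefs, 15%/5% for imagefs), just laid out readably; nothing here is pulled from Kubernetes defaults elsewhere.

```python
# HardEvictionThresholds embedded in the nodeConfig line above, restated so the
# signals are readable. Quantities and percentages are copied from the journal.
hard_eviction_thresholds = [
    {"signal": "memory.available",   "operator": "LessThan", "value": "100Mi"},
    {"signal": "nodefs.available",   "operator": "LessThan", "value": "10%"},
    {"signal": "nodefs.inodesFree",  "operator": "LessThan", "value": "5%"},
    {"signal": "imagefs.available",  "operator": "LessThan", "value": "15%"},
    {"signal": "imagefs.inodesFree", "operator": "LessThan", "value": "5%"},
]

for t in hard_eviction_thresholds:
    print(f'{t["signal"]:<18} {t["operator"]} {t["value"]}')
```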
Jan 17 00:06:54.045435 kubelet[2924]: I0117 00:06:54.045362 2924 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:06:54.045435 kubelet[2924]: E0117 00:06:54.045398 2924 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:06:54.050339 kubelet[2924]: W0117 00:06:54.050296 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 17 00:06:54.050418 kubelet[2924]: E0117 00:06:54.050350 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:06:54.051807 kubelet[2924]: I0117 00:06:54.051785 2924 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:06:54.051883 kubelet[2924]: I0117 00:06:54.051821 2924 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:06:54.051883 kubelet[2924]: I0117 00:06:54.051840 2924 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:06:54.058270 kubelet[2924]: I0117 00:06:54.058247 2924 policy_none.go:49] "None policy: Start" Jan 17 00:06:54.058270 kubelet[2924]: I0117 00:06:54.058272 2924 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:06:54.058365 kubelet[2924]: I0117 00:06:54.058295 2924 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:06:54.068553 kubelet[2924]: I0117 00:06:54.067657 2924 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:06:54.068553 kubelet[2924]: I0117 00:06:54.067845 2924 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:06:54.068553 kubelet[2924]: I0117 00:06:54.067855 2924 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:06:54.068553 kubelet[2924]: I0117 00:06:54.068538 2924 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:06:54.069421 kubelet[2924]: E0117 00:06:54.069397 2924 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:06:54.069519 kubelet[2924]: E0117 00:06:54.069448 2924 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-93f9562822\" not found" Jan 17 00:06:54.151797 kubelet[2924]: E0117 00:06:54.151104 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-93f9562822\" not found" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.155235 kubelet[2924]: E0117 00:06:54.155209 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-93f9562822\" not found" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.157874 kubelet[2924]: E0117 00:06:54.157847 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-93f9562822\" not found" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.170454 kubelet[2924]: I0117 00:06:54.170431 2924 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.170801 kubelet[2924]: E0117 00:06:54.170779 2924 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.216443 kubelet[2924]: E0117 00:06:54.216361 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-93f9562822?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="400ms" Jan 17 00:06:54.316015 kubelet[2924]: I0117 00:06:54.315755 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99acd27d8d3922a52239a16a7f436482-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-93f9562822\" (UID: \"99acd27d8d3922a52239a16a7f436482\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.316015 kubelet[2924]: I0117 00:06:54.315791 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f069accbd145c6e4dfdaa498d52831d5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-93f9562822\" (UID: \"f069accbd145c6e4dfdaa498d52831d5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.316015 kubelet[2924]: I0117 00:06:54.315821 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f069accbd145c6e4dfdaa498d52831d5-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-93f9562822\" (UID: \"f069accbd145c6e4dfdaa498d52831d5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.316015 kubelet[2924]: I0117 00:06:54.315836 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99acd27d8d3922a52239a16a7f436482-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-93f9562822\" (UID: \"99acd27d8d3922a52239a16a7f436482\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.316015 kubelet[2924]: I0117 00:06:54.315857 2924 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99acd27d8d3922a52239a16a7f436482-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-93f9562822\" (UID: \"99acd27d8d3922a52239a16a7f436482\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.316247 kubelet[2924]: I0117 00:06:54.315878 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f069accbd145c6e4dfdaa498d52831d5-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-93f9562822\" (UID: \"f069accbd145c6e4dfdaa498d52831d5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.316247 kubelet[2924]: I0117 00:06:54.315893 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f069accbd145c6e4dfdaa498d52831d5-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-93f9562822\" (UID: \"f069accbd145c6e4dfdaa498d52831d5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.316247 kubelet[2924]: I0117 00:06:54.315907 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f069accbd145c6e4dfdaa498d52831d5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-93f9562822\" (UID: \"f069accbd145c6e4dfdaa498d52831d5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.316247 kubelet[2924]: I0117 00:06:54.315923 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a5e0aa5b0a74e04bd271ecce4828d04-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-93f9562822\" (UID: \"3a5e0aa5b0a74e04bd271ecce4828d04\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.372419 kubelet[2924]: I0117 00:06:54.372392 2924 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.372721 kubelet[2924]: E0117 00:06:54.372698 2924 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.453680 containerd[1828]: time="2026-01-17T00:06:54.453412390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-93f9562822,Uid:99acd27d8d3922a52239a16a7f436482,Namespace:kube-system,Attempt:0,}" Jan 17 00:06:54.456818 containerd[1828]: time="2026-01-17T00:06:54.456687632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-93f9562822,Uid:f069accbd145c6e4dfdaa498d52831d5,Namespace:kube-system,Attempt:0,}" Jan 17 00:06:54.459227 containerd[1828]: time="2026-01-17T00:06:54.459200394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-93f9562822,Uid:3a5e0aa5b0a74e04bd271ecce4828d04,Namespace:kube-system,Attempt:0,}" Jan 17 00:06:54.617664 kubelet[2924]: E0117 00:06:54.617628 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-93f9562822?timeout=10s\": dial tcp 
10.200.20.22:6443: connect: connection refused" interval="800ms" Jan 17 00:06:54.775044 kubelet[2924]: I0117 00:06:54.774759 2924 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:54.775404 kubelet[2924]: E0117 00:06:54.775381 2924 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:55.016684 kubelet[2924]: W0117 00:06:55.016579 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 17 00:06:55.016684 kubelet[2924]: E0117 00:06:55.016622 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:06:55.021602 kubelet[2924]: W0117 00:06:55.021532 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 17 00:06:55.021602 kubelet[2924]: E0117 00:06:55.021579 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:06:55.054115 kubelet[2924]: W0117 00:06:55.054068 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-93f9562822&limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 17 00:06:55.054200 kubelet[2924]: E0117 00:06:55.054143 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-93f9562822&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:06:55.079269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527198006.mount: Deactivated successfully. 
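Every "connection refused" above targets the same endpoint, https://10.200.20.22:6443, and the kubelet's lease retry interval doubles on each failure (200ms, then 400ms, then 800ms in the entries so far) while nothing is listening there yet. The probe below only illustrates that condition, with the address taken from the log; it is a bare TCP connect, not what the kubelet does internally.

```python
# Bare TCP probe of the endpoint the kubelet keeps failing to reach above.
# Host and port come from the journal; while no kube-apiserver is listening yet
# this returns the same "connection refused" the log shows. Illustrative only.
import socket

def probe(host: str = "10.200.20.22", port: int = 6443, timeout: float = 2.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "listening"
    except ConnectionRefusedError:
        return "connect: connection refused"
    except OSError as exc:
        return f"unreachable: {exc}"

if __name__ == "__main__":
    print("10.200.20.22:6443 ->", probe())
```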
Jan 17 00:06:55.096685 containerd[1828]: time="2026-01-17T00:06:55.096646231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:06:55.105420 containerd[1828]: time="2026-01-17T00:06:55.105381796Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 00:06:55.108419 containerd[1828]: time="2026-01-17T00:06:55.108373597Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:06:55.114202 containerd[1828]: time="2026-01-17T00:06:55.113582640Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:06:55.116210 containerd[1828]: time="2026-01-17T00:06:55.116179602Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:06:55.119324 containerd[1828]: time="2026-01-17T00:06:55.119050963Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:06:55.122543 containerd[1828]: time="2026-01-17T00:06:55.122508365Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:06:55.125390 containerd[1828]: time="2026-01-17T00:06:55.125351487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:06:55.126402 containerd[1828]: time="2026-01-17T00:06:55.126155087Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 669.399575ms" Jan 17 00:06:55.130064 containerd[1828]: time="2026-01-17T00:06:55.130022650Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 676.53774ms" Jan 17 00:06:55.152352 containerd[1828]: time="2026-01-17T00:06:55.152153862Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 692.896828ms" Jan 17 00:06:55.238296 kubelet[2924]: W0117 00:06:55.238204 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 17 00:06:55.238296 
kubelet[2924]: E0117 00:06:55.238263 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:06:55.418787 kubelet[2924]: E0117 00:06:55.418748 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-93f9562822?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="1.6s" Jan 17 00:06:55.577507 kubelet[2924]: I0117 00:06:55.577479 2924 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:55.578200 kubelet[2924]: E0117 00:06:55.578170 2924 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:55.783205 containerd[1828]: time="2026-01-17T00:06:55.783026656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:55.783998 containerd[1828]: time="2026-01-17T00:06:55.783080176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:55.783998 containerd[1828]: time="2026-01-17T00:06:55.783416296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:55.784854 containerd[1828]: time="2026-01-17T00:06:55.783974856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:55.786863 containerd[1828]: time="2026-01-17T00:06:55.786650418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:55.786863 containerd[1828]: time="2026-01-17T00:06:55.786692138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:55.786863 containerd[1828]: time="2026-01-17T00:06:55.786702538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:55.786863 containerd[1828]: time="2026-01-17T00:06:55.786770778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:55.791966 containerd[1828]: time="2026-01-17T00:06:55.789495579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:55.791966 containerd[1828]: time="2026-01-17T00:06:55.789594259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:55.791966 containerd[1828]: time="2026-01-17T00:06:55.789623619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:55.791966 containerd[1828]: time="2026-01-17T00:06:55.789775299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:55.854216 containerd[1828]: time="2026-01-17T00:06:55.854175316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-93f9562822,Uid:f069accbd145c6e4dfdaa498d52831d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6db6faddc599432676a3e029ed9781e9cce93bdcc54b76be0e5d9e550a91fd2\"" Jan 17 00:06:55.859187 containerd[1828]: time="2026-01-17T00:06:55.859150401Z" level=info msg="CreateContainer within sandbox \"a6db6faddc599432676a3e029ed9781e9cce93bdcc54b76be0e5d9e550a91fd2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:06:55.865899 containerd[1828]: time="2026-01-17T00:06:55.865634647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-93f9562822,Uid:99acd27d8d3922a52239a16a7f436482,Namespace:kube-system,Attempt:0,} returns sandbox id \"23268242fc9a433b26a993a237fb82b3e4e4f7deb9285e1d0237699ac501b03e\"" Jan 17 00:06:55.870222 containerd[1828]: time="2026-01-17T00:06:55.870113971Z" level=info msg="CreateContainer within sandbox \"23268242fc9a433b26a993a237fb82b3e4e4f7deb9285e1d0237699ac501b03e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:06:55.870551 containerd[1828]: time="2026-01-17T00:06:55.870522732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-93f9562822,Uid:3a5e0aa5b0a74e04bd271ecce4828d04,Namespace:kube-system,Attempt:0,} returns sandbox id \"9669492489d6cb44e131436bbb745cd62b8d9fc9eab80b8b8241457905f09b60\"" Jan 17 00:06:55.872950 containerd[1828]: time="2026-01-17T00:06:55.872774774Z" level=info msg="CreateContainer within sandbox \"9669492489d6cb44e131436bbb745cd62b8d9fc9eab80b8b8241457905f09b60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:06:55.918583 containerd[1828]: time="2026-01-17T00:06:55.918544218Z" level=info msg="CreateContainer within sandbox \"a6db6faddc599432676a3e029ed9781e9cce93bdcc54b76be0e5d9e550a91fd2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a113c41da5623c51a75b28cda95ad5aef98d3e5b0549497e382915cd3c6a8f3\"" Jan 17 00:06:55.920319 containerd[1828]: time="2026-01-17T00:06:55.919240259Z" level=info msg="StartContainer for \"0a113c41da5623c51a75b28cda95ad5aef98d3e5b0549497e382915cd3c6a8f3\"" Jan 17 00:06:55.930636 containerd[1828]: time="2026-01-17T00:06:55.930597750Z" level=info msg="CreateContainer within sandbox \"23268242fc9a433b26a993a237fb82b3e4e4f7deb9285e1d0237699ac501b03e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c2d839d107ff34335a386e3207d8afc5eb4a190f42adf7c618fba0ead048cd46\"" Jan 17 00:06:55.931351 containerd[1828]: time="2026-01-17T00:06:55.931331391Z" level=info msg="StartContainer for \"c2d839d107ff34335a386e3207d8afc5eb4a190f42adf7c618fba0ead048cd46\"" Jan 17 00:06:55.940827 containerd[1828]: time="2026-01-17T00:06:55.940682360Z" level=info msg="CreateContainer within sandbox \"9669492489d6cb44e131436bbb745cd62b8d9fc9eab80b8b8241457905f09b60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"407ba50df42cd3d51c8e595ab0a428fa321dcdfffa7297b1e1daa113233fa721\"" Jan 17 00:06:55.941440 containerd[1828]: 
time="2026-01-17T00:06:55.941410321Z" level=info msg="StartContainer for \"407ba50df42cd3d51c8e595ab0a428fa321dcdfffa7297b1e1daa113233fa721\"" Jan 17 00:06:56.026850 containerd[1828]: time="2026-01-17T00:06:56.026802603Z" level=info msg="StartContainer for \"0a113c41da5623c51a75b28cda95ad5aef98d3e5b0549497e382915cd3c6a8f3\" returns successfully" Jan 17 00:06:56.026968 containerd[1828]: time="2026-01-17T00:06:56.026822804Z" level=info msg="StartContainer for \"c2d839d107ff34335a386e3207d8afc5eb4a190f42adf7c618fba0ead048cd46\" returns successfully" Jan 17 00:06:56.026968 containerd[1828]: time="2026-01-17T00:06:56.026828884Z" level=info msg="StartContainer for \"407ba50df42cd3d51c8e595ab0a428fa321dcdfffa7297b1e1daa113233fa721\" returns successfully" Jan 17 00:06:56.063439 kubelet[2924]: E0117 00:06:56.063344 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-93f9562822\" not found" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:56.069317 kubelet[2924]: E0117 00:06:56.069289 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-93f9562822\" not found" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:56.079834 kubelet[2924]: E0117 00:06:56.079807 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-93f9562822\" not found" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:57.078150 kubelet[2924]: E0117 00:06:57.076136 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-93f9562822\" not found" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:57.078907 kubelet[2924]: E0117 00:06:57.078783 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-93f9562822\" not found" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:57.182238 kubelet[2924]: I0117 00:06:57.181610 2924 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:58.085646 kubelet[2924]: E0117 00:06:58.085613 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-93f9562822\" not found" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:58.189950 kubelet[2924]: E0117 00:06:58.189913 2924 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-93f9562822\" not found" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:58.263268 kubelet[2924]: E0117 00:06:58.262176 2924 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-n-93f9562822.188b5bfb676f7e62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-93f9562822,UID:ci-4081.3.6-n-93f9562822,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-93f9562822,},FirstTimestamp:2026-01-17 00:06:54.003330658 +0000 UTC m=+0.837439230,LastTimestamp:2026-01-17 00:06:54.003330658 +0000 UTC m=+0.837439230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-93f9562822,}" Jan 17 00:06:58.321708 kubelet[2924]: I0117 00:06:58.321457 2924 
kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-93f9562822" Jan 17 00:06:58.339163 kubelet[2924]: E0117 00:06:58.338414 2924 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-n-93f9562822.188b5bfb67b90d7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-93f9562822,UID:ci-4081.3.6-n-93f9562822,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-93f9562822,},FirstTimestamp:2026-01-17 00:06:54.008151421 +0000 UTC m=+0.842259993,LastTimestamp:2026-01-17 00:06:54.008151421 +0000 UTC m=+0.842259993,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-93f9562822,}" Jan 17 00:06:58.416886 kubelet[2924]: I0117 00:06:58.416693 2924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-93f9562822" Jan 17 00:06:58.436048 kubelet[2924]: E0117 00:06:58.435975 2924 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-93f9562822\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-93f9562822" Jan 17 00:06:58.436048 kubelet[2924]: I0117 00:06:58.436003 2924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-93f9562822" Jan 17 00:06:58.439826 kubelet[2924]: E0117 00:06:58.439452 2924 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-93f9562822\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-93f9562822" Jan 17 00:06:58.439826 kubelet[2924]: I0117 00:06:58.439476 2924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:06:58.443528 kubelet[2924]: E0117 00:06:58.443497 2924 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-93f9562822\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:06:59.009797 kubelet[2924]: I0117 00:06:59.009762 2924 apiserver.go:52] "Watching apiserver" Jan 17 00:06:59.014363 kubelet[2924]: I0117 00:06:59.014318 2924 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:07:00.711041 systemd[1]: Reloading requested from client PID 3197 ('systemctl') (unit session-9.scope)... Jan 17 00:07:00.711053 systemd[1]: Reloading... Jan 17 00:07:00.812217 zram_generator::config[3238]: No configuration found. Jan 17 00:07:00.941708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:07:01.024384 systemd[1]: Reloading finished in 313 ms. Jan 17 00:07:01.055690 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:07:01.075063 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:07:01.075419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:07:01.081737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:07:01.185285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:07:01.189661 (kubelet)[3313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:07:01.231511 kubelet[3313]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:07:01.232141 kubelet[3313]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:07:01.232141 kubelet[3313]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:07:01.232141 kubelet[3313]: I0117 00:07:01.231926 3313 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:07:01.237549 kubelet[3313]: I0117 00:07:01.237527 3313 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:07:01.237653 kubelet[3313]: I0117 00:07:01.237644 3313 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:07:01.237967 kubelet[3313]: I0117 00:07:01.237954 3313 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:07:01.240388 kubelet[3313]: I0117 00:07:01.240362 3313 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:07:01.281604 kubelet[3313]: I0117 00:07:01.281511 3313 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:07:01.285301 kubelet[3313]: E0117 00:07:01.285183 3313 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:07:01.285301 kubelet[3313]: I0117 00:07:01.285219 3313 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:07:01.291419 kubelet[3313]: I0117 00:07:01.291399 3313 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:07:01.291839 kubelet[3313]: I0117 00:07:01.291805 3313 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:07:01.291995 kubelet[3313]: I0117 00:07:01.291834 3313 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-93f9562822","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:07:01.292061 kubelet[3313]: I0117 00:07:01.292005 3313 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:07:01.292061 kubelet[3313]: I0117 00:07:01.292014 3313 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:07:01.292061 kubelet[3313]: I0117 00:07:01.292058 3313 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:07:01.292186 kubelet[3313]: I0117 00:07:01.292176 3313 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:07:01.292223 kubelet[3313]: I0117 00:07:01.292191 3313 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:07:01.292223 kubelet[3313]: I0117 00:07:01.292209 3313 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:07:01.292223 kubelet[3313]: I0117 00:07:01.292219 3313 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:07:01.295795 kubelet[3313]: I0117 00:07:01.295713 3313 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:07:01.298270 kubelet[3313]: I0117 00:07:01.298247 3313 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:07:01.298802 kubelet[3313]: I0117 00:07:01.298776 3313 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:07:01.299312 kubelet[3313]: I0117 00:07:01.298941 3313 server.go:1287] "Started kubelet" Jan 17 00:07:01.307450 kubelet[3313]: I0117 00:07:01.307398 3313 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:07:01.310008 kubelet[3313]: I0117 00:07:01.309914 3313 
server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:07:01.316269 kubelet[3313]: I0117 00:07:01.314228 3313 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:07:01.316269 kubelet[3313]: I0117 00:07:01.314441 3313 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:07:01.327115 kubelet[3313]: I0117 00:07:01.326724 3313 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:07:01.327440 kubelet[3313]: I0117 00:07:01.327422 3313 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:07:01.329323 kubelet[3313]: I0117 00:07:01.329308 3313 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:07:01.329642 kubelet[3313]: E0117 00:07:01.329587 3313 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-93f9562822\" not found" Jan 17 00:07:01.330896 kubelet[3313]: I0117 00:07:01.330874 3313 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:07:01.332854 kubelet[3313]: I0117 00:07:01.331281 3313 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:07:01.343929 kubelet[3313]: E0117 00:07:01.343904 3313 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:07:01.345417 kubelet[3313]: I0117 00:07:01.345398 3313 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:07:01.345505 kubelet[3313]: I0117 00:07:01.345497 3313 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:07:01.345626 kubelet[3313]: I0117 00:07:01.345609 3313 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:07:01.352778 kubelet[3313]: I0117 00:07:01.352751 3313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:07:01.353695 kubelet[3313]: I0117 00:07:01.353678 3313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:07:01.353781 kubelet[3313]: I0117 00:07:01.353772 3313 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:07:01.353842 kubelet[3313]: I0117 00:07:01.353834 3313 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:07:01.353883 kubelet[3313]: I0117 00:07:01.353876 3313 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:07:01.353983 kubelet[3313]: E0117 00:07:01.353968 3313 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:07:01.395065 kubelet[3313]: I0117 00:07:01.395045 3313 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:07:01.395221 kubelet[3313]: I0117 00:07:01.395210 3313 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:07:01.395295 kubelet[3313]: I0117 00:07:01.395288 3313 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:07:01.395490 kubelet[3313]: I0117 00:07:01.395477 3313 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:07:01.395556 kubelet[3313]: I0117 00:07:01.395535 3313 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:07:01.395604 kubelet[3313]: I0117 00:07:01.395597 3313 policy_none.go:49] "None policy: Start" Jan 17 00:07:01.395661 kubelet[3313]: I0117 00:07:01.395653 3313 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:07:01.395706 kubelet[3313]: I0117 00:07:01.395699 3313 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:07:01.395849 kubelet[3313]: I0117 00:07:01.395840 3313 state_mem.go:75] "Updated machine memory state" Jan 17 00:07:01.397137 kubelet[3313]: I0117 00:07:01.396873 3313 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:07:01.397137 kubelet[3313]: I0117 00:07:01.397016 3313 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:07:01.397137 kubelet[3313]: I0117 00:07:01.397026 3313 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:07:01.398181 kubelet[3313]: I0117 00:07:01.398168 3313 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:07:01.399701 kubelet[3313]: E0117 00:07:01.399685 3313 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:07:01.455512 kubelet[3313]: I0117 00:07:01.455473 3313 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.455913 kubelet[3313]: I0117 00:07:01.455880 3313 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.456357 kubelet[3313]: I0117 00:07:01.456268 3313 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.471429 kubelet[3313]: W0117 00:07:01.470905 3313 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:07:01.471429 kubelet[3313]: W0117 00:07:01.471140 3313 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:07:01.471429 kubelet[3313]: W0117 00:07:01.471225 3313 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:07:01.500225 kubelet[3313]: I0117 00:07:01.500205 3313 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.519950 kubelet[3313]: I0117 00:07:01.519757 3313 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.520363 kubelet[3313]: I0117 00:07:01.520152 3313 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.633568 kubelet[3313]: I0117 00:07:01.633459 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f069accbd145c6e4dfdaa498d52831d5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-93f9562822\" (UID: \"f069accbd145c6e4dfdaa498d52831d5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.633568 kubelet[3313]: I0117 00:07:01.633504 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f069accbd145c6e4dfdaa498d52831d5-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-93f9562822\" (UID: \"f069accbd145c6e4dfdaa498d52831d5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.633568 kubelet[3313]: I0117 00:07:01.633524 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99acd27d8d3922a52239a16a7f436482-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-93f9562822\" (UID: \"99acd27d8d3922a52239a16a7f436482\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.633568 kubelet[3313]: I0117 00:07:01.633540 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f069accbd145c6e4dfdaa498d52831d5-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-93f9562822\" (UID: \"f069accbd145c6e4dfdaa498d52831d5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.633568 kubelet[3313]: I0117 00:07:01.633556 3313 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f069accbd145c6e4dfdaa498d52831d5-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-93f9562822\" (UID: \"f069accbd145c6e4dfdaa498d52831d5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.633797 kubelet[3313]: I0117 00:07:01.633573 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f069accbd145c6e4dfdaa498d52831d5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-93f9562822\" (UID: \"f069accbd145c6e4dfdaa498d52831d5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.633797 kubelet[3313]: I0117 00:07:01.633590 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a5e0aa5b0a74e04bd271ecce4828d04-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-93f9562822\" (UID: \"3a5e0aa5b0a74e04bd271ecce4828d04\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.633797 kubelet[3313]: I0117 00:07:01.633607 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99acd27d8d3922a52239a16a7f436482-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-93f9562822\" (UID: \"99acd27d8d3922a52239a16a7f436482\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.633797 kubelet[3313]: I0117 00:07:01.633622 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99acd27d8d3922a52239a16a7f436482-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-93f9562822\" (UID: \"99acd27d8d3922a52239a16a7f436482\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-93f9562822" Jan 17 00:07:01.753748 sudo[3345]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 00:07:01.754014 sudo[3345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 00:07:02.202904 sudo[3345]: pam_unix(sudo:session): session closed for user root Jan 17 00:07:02.292948 kubelet[3313]: I0117 00:07:02.292908 3313 apiserver.go:52] "Watching apiserver" Jan 17 00:07:02.333652 kubelet[3313]: I0117 00:07:02.333620 3313 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:07:02.370363 kubelet[3313]: I0117 00:07:02.369909 3313 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-93f9562822" Jan 17 00:07:02.377536 kubelet[3313]: W0117 00:07:02.377337 3313 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:07:02.377536 kubelet[3313]: E0117 00:07:02.377386 3313 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-93f9562822\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-93f9562822" Jan 17 00:07:02.396381 kubelet[3313]: I0117 00:07:02.396246 3313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-93f9562822" podStartSLOduration=1.396234787 
podStartE2EDuration="1.396234787s" podCreationTimestamp="2026-01-17 00:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:07:02.394917986 +0000 UTC m=+1.201296720" watchObservedRunningTime="2026-01-17 00:07:02.396234787 +0000 UTC m=+1.202613521" Jan 17 00:07:02.408492 kubelet[3313]: I0117 00:07:02.408240 3313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-93f9562822" podStartSLOduration=1.408227558 podStartE2EDuration="1.408227558s" podCreationTimestamp="2026-01-17 00:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:07:02.407535837 +0000 UTC m=+1.213914571" watchObservedRunningTime="2026-01-17 00:07:02.408227558 +0000 UTC m=+1.214606292" Jan 17 00:07:02.428495 kubelet[3313]: I0117 00:07:02.428294 3313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-93f9562822" podStartSLOduration=1.428276936 podStartE2EDuration="1.428276936s" podCreationTimestamp="2026-01-17 00:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:07:02.427665975 +0000 UTC m=+1.234044709" watchObservedRunningTime="2026-01-17 00:07:02.428276936 +0000 UTC m=+1.234655670" Jan 17 00:07:05.447893 kubelet[3313]: I0117 00:07:05.447798 3313 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:07:05.448357 containerd[1828]: time="2026-01-17T00:07:05.448190169Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:07:05.448553 kubelet[3313]: I0117 00:07:05.448389 3313 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:07:05.570290 sudo[2361]: pam_unix(sudo:session): session closed for user root Jan 17 00:07:05.645169 sshd[2357]: pam_unix(sshd:session): session closed for user core Jan 17 00:07:05.648781 systemd[1]: sshd@6-10.200.20.22:22-10.200.16.10:51948.service: Deactivated successfully. Jan 17 00:07:05.648782 systemd-logind[1786]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:07:05.651677 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:07:05.655585 systemd-logind[1786]: Removed session 9. 
Jan 17 00:07:06.260891 kubelet[3313]: I0117 00:07:06.260785 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afeb84fb-1764-4dcd-9230-304363dbc24d-xtables-lock\") pod \"kube-proxy-tk9qf\" (UID: \"afeb84fb-1764-4dcd-9230-304363dbc24d\") " pod="kube-system/kube-proxy-tk9qf" Jan 17 00:07:06.260891 kubelet[3313]: I0117 00:07:06.260821 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afeb84fb-1764-4dcd-9230-304363dbc24d-lib-modules\") pod \"kube-proxy-tk9qf\" (UID: \"afeb84fb-1764-4dcd-9230-304363dbc24d\") " pod="kube-system/kube-proxy-tk9qf" Jan 17 00:07:06.260891 kubelet[3313]: I0117 00:07:06.260838 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwtv\" (UniqueName: \"kubernetes.io/projected/afeb84fb-1764-4dcd-9230-304363dbc24d-kube-api-access-xmwtv\") pod \"kube-proxy-tk9qf\" (UID: \"afeb84fb-1764-4dcd-9230-304363dbc24d\") " pod="kube-system/kube-proxy-tk9qf" Jan 17 00:07:06.260891 kubelet[3313]: I0117 00:07:06.260856 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afeb84fb-1764-4dcd-9230-304363dbc24d-kube-proxy\") pod \"kube-proxy-tk9qf\" (UID: \"afeb84fb-1764-4dcd-9230-304363dbc24d\") " pod="kube-system/kube-proxy-tk9qf" Jan 17 00:07:06.361205 kubelet[3313]: I0117 00:07:06.361171 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2356051a-97fb-43c1-816f-8d504e798ca2-hubble-tls\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361334 kubelet[3313]: I0117 00:07:06.361207 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-bpf-maps\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361334 kubelet[3313]: I0117 00:07:06.361244 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-xtables-lock\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361334 kubelet[3313]: I0117 00:07:06.361261 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2356051a-97fb-43c1-816f-8d504e798ca2-clustermesh-secrets\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361334 kubelet[3313]: I0117 00:07:06.361287 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-hostproc\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361334 kubelet[3313]: I0117 00:07:06.361304 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-etc-cni-netd\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361334 kubelet[3313]: I0117 00:07:06.361328 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-config-path\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361465 kubelet[3313]: I0117 00:07:06.361346 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-run\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361465 kubelet[3313]: I0117 00:07:06.361360 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-host-proc-sys-net\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361465 kubelet[3313]: I0117 00:07:06.361382 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-lib-modules\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361465 kubelet[3313]: I0117 00:07:06.361400 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9h9b\" (UniqueName: \"kubernetes.io/projected/2356051a-97fb-43c1-816f-8d504e798ca2-kube-api-access-c9h9b\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361465 kubelet[3313]: I0117 00:07:06.361432 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-cgroup\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361465 kubelet[3313]: I0117 00:07:06.361448 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cni-path\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.361582 kubelet[3313]: I0117 00:07:06.361464 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-host-proc-sys-kernel\") pod \"cilium-vblzn\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " pod="kube-system/cilium-vblzn" Jan 17 00:07:06.562668 containerd[1828]: time="2026-01-17T00:07:06.562274770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tk9qf,Uid:afeb84fb-1764-4dcd-9230-304363dbc24d,Namespace:kube-system,Attempt:0,}" Jan 17 00:07:06.563533 kubelet[3313]: I0117 00:07:06.563500 3313 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4275n\" (UniqueName: \"kubernetes.io/projected/2a9e5056-a64d-4a85-b2bf-927dfd0eb505-kube-api-access-4275n\") pod \"cilium-operator-6c4d7847fc-6d6gp\" (UID: \"2a9e5056-a64d-4a85-b2bf-927dfd0eb505\") " pod="kube-system/cilium-operator-6c4d7847fc-6d6gp" Jan 17 00:07:06.563757 kubelet[3313]: I0117 00:07:06.563543 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a9e5056-a64d-4a85-b2bf-927dfd0eb505-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6d6gp\" (UID: \"2a9e5056-a64d-4a85-b2bf-927dfd0eb505\") " pod="kube-system/cilium-operator-6c4d7847fc-6d6gp" Jan 17 00:07:06.567975 containerd[1828]: time="2026-01-17T00:07:06.567941535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vblzn,Uid:2356051a-97fb-43c1-816f-8d504e798ca2,Namespace:kube-system,Attempt:0,}" Jan 17 00:07:06.608469 containerd[1828]: time="2026-01-17T00:07:06.607859251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:06.608469 containerd[1828]: time="2026-01-17T00:07:06.607913531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:06.608469 containerd[1828]: time="2026-01-17T00:07:06.607925451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:06.608469 containerd[1828]: time="2026-01-17T00:07:06.608006451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:06.629985 containerd[1828]: time="2026-01-17T00:07:06.628962150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:06.629985 containerd[1828]: time="2026-01-17T00:07:06.629074110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:06.629985 containerd[1828]: time="2026-01-17T00:07:06.629136110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:06.629985 containerd[1828]: time="2026-01-17T00:07:06.629347590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:06.667462 containerd[1828]: time="2026-01-17T00:07:06.667208624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tk9qf,Uid:afeb84fb-1764-4dcd-9230-304363dbc24d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cbe8eeeb81eba268efe88eb384a9ae9d50378bb8b2aead736fbdffa9786f9c4\"" Jan 17 00:07:06.674518 containerd[1828]: time="2026-01-17T00:07:06.673605190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vblzn,Uid:2356051a-97fb-43c1-816f-8d504e798ca2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\"" Jan 17 00:07:06.677771 containerd[1828]: time="2026-01-17T00:07:06.677646114Z" level=info msg="CreateContainer within sandbox \"2cbe8eeeb81eba268efe88eb384a9ae9d50378bb8b2aead736fbdffa9786f9c4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:07:06.682000 containerd[1828]: time="2026-01-17T00:07:06.681663237Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:07:06.740346 containerd[1828]: time="2026-01-17T00:07:06.740304330Z" level=info msg="CreateContainer within sandbox \"2cbe8eeeb81eba268efe88eb384a9ae9d50378bb8b2aead736fbdffa9786f9c4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7254765aaafff0e7edb3a8f740949640c346f0673716e1ada3ceb5449534750d\"" Jan 17 00:07:06.741109 containerd[1828]: time="2026-01-17T00:07:06.741081251Z" level=info msg="StartContainer for \"7254765aaafff0e7edb3a8f740949640c346f0673716e1ada3ceb5449534750d\"" Jan 17 00:07:06.792692 containerd[1828]: time="2026-01-17T00:07:06.792580977Z" level=info msg="StartContainer for \"7254765aaafff0e7edb3a8f740949640c346f0673716e1ada3ceb5449534750d\" returns successfully" Jan 17 00:07:06.820110 containerd[1828]: time="2026-01-17T00:07:06.820009322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6d6gp,Uid:2a9e5056-a64d-4a85-b2bf-927dfd0eb505,Namespace:kube-system,Attempt:0,}" Jan 17 00:07:06.865737 containerd[1828]: time="2026-01-17T00:07:06.865251242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:06.865737 containerd[1828]: time="2026-01-17T00:07:06.865313842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:06.865737 containerd[1828]: time="2026-01-17T00:07:06.865325962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:06.865737 containerd[1828]: time="2026-01-17T00:07:06.865407763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:06.922519 containerd[1828]: time="2026-01-17T00:07:06.922346414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6d6gp,Uid:2a9e5056-a64d-4a85-b2bf-927dfd0eb505,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa39fb485aaeb57313c71801e41f559cb04f307e49cd1f0a5d4d3c09108f980b\"" Jan 17 00:07:09.342531 kubelet[3313]: I0117 00:07:09.341557 3313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tk9qf" podStartSLOduration=3.341540888 podStartE2EDuration="3.341540888s" podCreationTimestamp="2026-01-17 00:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:07:07.402694693 +0000 UTC m=+6.209073427" watchObservedRunningTime="2026-01-17 00:07:09.341540888 +0000 UTC m=+8.147919622" Jan 17 00:07:12.033377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307121192.mount: Deactivated successfully. Jan 17 00:07:14.130920 containerd[1828]: time="2026-01-17T00:07:14.130869321Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:14.134165 containerd[1828]: time="2026-01-17T00:07:14.133977484Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 17 00:07:14.138651 containerd[1828]: time="2026-01-17T00:07:14.138606728Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:14.140278 containerd[1828]: time="2026-01-17T00:07:14.140156729Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.458143971s" Jan 17 00:07:14.140278 containerd[1828]: time="2026-01-17T00:07:14.140192929Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 17 00:07:14.141896 containerd[1828]: time="2026-01-17T00:07:14.141475451Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:07:14.143378 containerd[1828]: time="2026-01-17T00:07:14.143338212Z" level=info msg="CreateContainer within sandbox \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:07:14.190087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242908164.mount: Deactivated successfully. 
Jan 17 00:07:14.202443 containerd[1828]: time="2026-01-17T00:07:14.202387387Z" level=info msg="CreateContainer within sandbox \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde\"" Jan 17 00:07:14.202929 containerd[1828]: time="2026-01-17T00:07:14.202900427Z" level=info msg="StartContainer for \"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde\"" Jan 17 00:07:14.254960 containerd[1828]: time="2026-01-17T00:07:14.254879235Z" level=info msg="StartContainer for \"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde\" returns successfully" Jan 17 00:07:15.187768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde-rootfs.mount: Deactivated successfully. Jan 17 00:07:16.130553 containerd[1828]: time="2026-01-17T00:07:16.130495154Z" level=info msg="shim disconnected" id=0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde namespace=k8s.io Jan 17 00:07:16.130553 containerd[1828]: time="2026-01-17T00:07:16.130547914Z" level=warning msg="cleaning up after shim disconnected" id=0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde namespace=k8s.io Jan 17 00:07:16.130553 containerd[1828]: time="2026-01-17T00:07:16.130557714Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:07:16.412165 containerd[1828]: time="2026-01-17T00:07:16.412047501Z" level=info msg="CreateContainer within sandbox \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:07:16.437043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188371589.mount: Deactivated successfully. Jan 17 00:07:16.444487 containerd[1828]: time="2026-01-17T00:07:16.444441932Z" level=info msg="CreateContainer within sandbox \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da\"" Jan 17 00:07:16.445067 containerd[1828]: time="2026-01-17T00:07:16.444897012Z" level=info msg="StartContainer for \"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da\"" Jan 17 00:07:16.500426 containerd[1828]: time="2026-01-17T00:07:16.500228785Z" level=info msg="StartContainer for \"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da\" returns successfully" Jan 17 00:07:16.506411 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:07:16.506703 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:07:16.506764 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:07:16.512178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:07:16.532304 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:07:16.536939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da-rootfs.mount: Deactivated successfully. 
Jan 17 00:07:16.548325 containerd[1828]: time="2026-01-17T00:07:16.548266150Z" level=info msg="shim disconnected" id=e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da namespace=k8s.io Jan 17 00:07:16.548433 containerd[1828]: time="2026-01-17T00:07:16.548326150Z" level=warning msg="cleaning up after shim disconnected" id=e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da namespace=k8s.io Jan 17 00:07:16.548433 containerd[1828]: time="2026-01-17T00:07:16.548336190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:07:17.417294 containerd[1828]: time="2026-01-17T00:07:17.417023094Z" level=info msg="CreateContainer within sandbox \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:07:17.458057 containerd[1828]: time="2026-01-17T00:07:17.458012293Z" level=info msg="CreateContainer within sandbox \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d\"" Jan 17 00:07:17.458729 containerd[1828]: time="2026-01-17T00:07:17.458656733Z" level=info msg="StartContainer for \"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d\"" Jan 17 00:07:17.490554 systemd[1]: run-containerd-runc-k8s.io-c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d-runc.wutRqk.mount: Deactivated successfully. Jan 17 00:07:17.527456 containerd[1828]: time="2026-01-17T00:07:17.527216718Z" level=info msg="StartContainer for \"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d\" returns successfully" Jan 17 00:07:17.574220 containerd[1828]: time="2026-01-17T00:07:17.573853363Z" level=info msg="shim disconnected" id=c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d namespace=k8s.io Jan 17 00:07:17.574220 containerd[1828]: time="2026-01-17T00:07:17.573901923Z" level=warning msg="cleaning up after shim disconnected" id=c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d namespace=k8s.io Jan 17 00:07:17.574220 containerd[1828]: time="2026-01-17T00:07:17.573909843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:07:18.156998 containerd[1828]: time="2026-01-17T00:07:18.156265235Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:18.159042 containerd[1828]: time="2026-01-17T00:07:18.159014397Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 17 00:07:18.164140 containerd[1828]: time="2026-01-17T00:07:18.162567721Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:18.164140 containerd[1828]: time="2026-01-17T00:07:18.163861122Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.022346671s" 
Jan 17 00:07:18.164140 containerd[1828]: time="2026-01-17T00:07:18.163891082Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 17 00:07:18.167369 containerd[1828]: time="2026-01-17T00:07:18.167343605Z" level=info msg="CreateContainer within sandbox \"aa39fb485aaeb57313c71801e41f559cb04f307e49cd1f0a5d4d3c09108f980b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:07:18.199728 containerd[1828]: time="2026-01-17T00:07:18.199679276Z" level=info msg="CreateContainer within sandbox \"aa39fb485aaeb57313c71801e41f559cb04f307e49cd1f0a5d4d3c09108f980b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\"" Jan 17 00:07:18.200343 containerd[1828]: time="2026-01-17T00:07:18.200306956Z" level=info msg="StartContainer for \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\"" Jan 17 00:07:18.250263 containerd[1828]: time="2026-01-17T00:07:18.250176044Z" level=info msg="StartContainer for \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\" returns successfully" Jan 17 00:07:18.424830 containerd[1828]: time="2026-01-17T00:07:18.424607569Z" level=info msg="CreateContainer within sandbox \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:07:18.445761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d-rootfs.mount: Deactivated successfully. Jan 17 00:07:18.470329 kubelet[3313]: I0117 00:07:18.470266 3313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6d6gp" podStartSLOduration=1.228788944 podStartE2EDuration="12.470246372s" podCreationTimestamp="2026-01-17 00:07:06 +0000 UTC" firstStartedPulling="2026-01-17 00:07:06.923428095 +0000 UTC m=+5.729806829" lastFinishedPulling="2026-01-17 00:07:18.164885523 +0000 UTC m=+16.971264257" observedRunningTime="2026-01-17 00:07:18.465323528 +0000 UTC m=+17.271702262" watchObservedRunningTime="2026-01-17 00:07:18.470246372 +0000 UTC m=+17.276625106" Jan 17 00:07:18.493614 containerd[1828]: time="2026-01-17T00:07:18.493024034Z" level=info msg="CreateContainer within sandbox \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716\"" Jan 17 00:07:18.494348 containerd[1828]: time="2026-01-17T00:07:18.494263755Z" level=info msg="StartContainer for \"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716\"" Jan 17 00:07:18.570559 containerd[1828]: time="2026-01-17T00:07:18.568789066Z" level=info msg="StartContainer for \"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716\" returns successfully" Jan 17 00:07:18.931147 containerd[1828]: time="2026-01-17T00:07:18.930929249Z" level=info msg="shim disconnected" id=603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716 namespace=k8s.io Jan 17 00:07:18.931147 containerd[1828]: time="2026-01-17T00:07:18.930980249Z" level=warning msg="cleaning up after shim disconnected" id=603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716 namespace=k8s.io Jan 17 
00:07:18.931147 containerd[1828]: time="2026-01-17T00:07:18.930990169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:07:19.426244 containerd[1828]: time="2026-01-17T00:07:19.426076679Z" level=info msg="CreateContainer within sandbox \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:07:19.443871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716-rootfs.mount: Deactivated successfully. Jan 17 00:07:19.462343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271360891.mount: Deactivated successfully. Jan 17 00:07:19.473235 containerd[1828]: time="2026-01-17T00:07:19.473193403Z" level=info msg="CreateContainer within sandbox \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\"" Jan 17 00:07:19.474513 containerd[1828]: time="2026-01-17T00:07:19.473653444Z" level=info msg="StartContainer for \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\"" Jan 17 00:07:19.525336 containerd[1828]: time="2026-01-17T00:07:19.525297133Z" level=info msg="StartContainer for \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\" returns successfully" Jan 17 00:07:19.701083 kubelet[3313]: I0117 00:07:19.700712 3313 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:07:19.847546 kubelet[3313]: I0117 00:07:19.847383 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f441f013-7e03-4797-98f8-4b9558f88d2d-config-volume\") pod \"coredns-668d6bf9bc-8sqwz\" (UID: \"f441f013-7e03-4797-98f8-4b9558f88d2d\") " pod="kube-system/coredns-668d6bf9bc-8sqwz" Jan 17 00:07:19.847546 kubelet[3313]: I0117 00:07:19.847425 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqjl4\" (UniqueName: \"kubernetes.io/projected/b2711b50-c672-42d4-bc22-fd86e861acbb-kube-api-access-wqjl4\") pod \"coredns-668d6bf9bc-vf6sd\" (UID: \"b2711b50-c672-42d4-bc22-fd86e861acbb\") " pod="kube-system/coredns-668d6bf9bc-vf6sd" Jan 17 00:07:19.847546 kubelet[3313]: I0117 00:07:19.847447 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnf4n\" (UniqueName: \"kubernetes.io/projected/f441f013-7e03-4797-98f8-4b9558f88d2d-kube-api-access-hnf4n\") pod \"coredns-668d6bf9bc-8sqwz\" (UID: \"f441f013-7e03-4797-98f8-4b9558f88d2d\") " pod="kube-system/coredns-668d6bf9bc-8sqwz" Jan 17 00:07:19.847546 kubelet[3313]: I0117 00:07:19.847462 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2711b50-c672-42d4-bc22-fd86e861acbb-config-volume\") pod \"coredns-668d6bf9bc-vf6sd\" (UID: \"b2711b50-c672-42d4-bc22-fd86e861acbb\") " pod="kube-system/coredns-668d6bf9bc-vf6sd" Jan 17 00:07:20.050476 containerd[1828]: time="2026-01-17T00:07:20.049972230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8sqwz,Uid:f441f013-7e03-4797-98f8-4b9558f88d2d,Namespace:kube-system,Attempt:0,}" Jan 17 00:07:20.053362 containerd[1828]: time="2026-01-17T00:07:20.053329353Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-vf6sd,Uid:b2711b50-c672-42d4-bc22-fd86e861acbb,Namespace:kube-system,Attempt:0,}" Jan 17 00:07:20.451734 kubelet[3313]: I0117 00:07:20.451632 3313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vblzn" podStartSLOduration=6.989624196 podStartE2EDuration="14.451616451s" podCreationTimestamp="2026-01-17 00:07:06 +0000 UTC" firstStartedPulling="2026-01-17 00:07:06.679354915 +0000 UTC m=+5.485733649" lastFinishedPulling="2026-01-17 00:07:14.14134717 +0000 UTC m=+12.947725904" observedRunningTime="2026-01-17 00:07:20.449426969 +0000 UTC m=+19.255805703" watchObservedRunningTime="2026-01-17 00:07:20.451616451 +0000 UTC m=+19.257995185" Jan 17 00:07:22.502326 systemd-networkd[1403]: cilium_host: Link UP Jan 17 00:07:22.502444 systemd-networkd[1403]: cilium_net: Link UP Jan 17 00:07:22.503868 systemd-networkd[1403]: cilium_net: Gained carrier Jan 17 00:07:22.503995 systemd-networkd[1403]: cilium_host: Gained carrier Jan 17 00:07:22.504075 systemd-networkd[1403]: cilium_net: Gained IPv6LL Jan 17 00:07:22.504239 systemd-networkd[1403]: cilium_host: Gained IPv6LL Jan 17 00:07:22.682904 systemd-networkd[1403]: cilium_vxlan: Link UP Jan 17 00:07:22.682910 systemd-networkd[1403]: cilium_vxlan: Gained carrier Jan 17 00:07:22.954150 kernel: NET: Registered PF_ALG protocol family Jan 17 00:07:23.839210 systemd-networkd[1403]: lxc_health: Link UP Jan 17 00:07:23.851259 systemd-networkd[1403]: lxc_health: Gained carrier Jan 17 00:07:24.124400 systemd-networkd[1403]: lxc507e2d0a3b69: Link UP Jan 17 00:07:24.141167 kernel: eth0: renamed from tmpb40c4 Jan 17 00:07:24.146021 systemd-networkd[1403]: lxc507e2d0a3b69: Gained carrier Jan 17 00:07:24.154309 systemd-networkd[1403]: lxc4b514cb9243a: Link UP Jan 17 00:07:24.168188 kernel: eth0: renamed from tmp24c42 Jan 17 00:07:24.171794 systemd-networkd[1403]: lxc4b514cb9243a: Gained carrier Jan 17 00:07:24.421400 systemd-networkd[1403]: cilium_vxlan: Gained IPv6LL Jan 17 00:07:25.508333 systemd-networkd[1403]: lxc_health: Gained IPv6LL Jan 17 00:07:25.892318 systemd-networkd[1403]: lxc4b514cb9243a: Gained IPv6LL Jan 17 00:07:26.148289 systemd-networkd[1403]: lxc507e2d0a3b69: Gained IPv6LL Jan 17 00:07:27.690074 containerd[1828]: time="2026-01-17T00:07:27.689881907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:27.690074 containerd[1828]: time="2026-01-17T00:07:27.689940627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:27.690074 containerd[1828]: time="2026-01-17T00:07:27.689963147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:27.691091 containerd[1828]: time="2026-01-17T00:07:27.690627308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:27.730167 containerd[1828]: time="2026-01-17T00:07:27.727417477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:27.730167 containerd[1828]: time="2026-01-17T00:07:27.727462157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:27.730167 containerd[1828]: time="2026-01-17T00:07:27.727480357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:27.730167 containerd[1828]: time="2026-01-17T00:07:27.727558598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:27.758269 containerd[1828]: time="2026-01-17T00:07:27.758168999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vf6sd,Uid:b2711b50-c672-42d4-bc22-fd86e861acbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b40c47fa0c8ecf0aa7809a7329f19cb27a8b204facd907e1038d46db60b4965b\"" Jan 17 00:07:27.763875 containerd[1828]: time="2026-01-17T00:07:27.763700126Z" level=info msg="CreateContainer within sandbox \"b40c47fa0c8ecf0aa7809a7329f19cb27a8b204facd907e1038d46db60b4965b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:07:27.805392 containerd[1828]: time="2026-01-17T00:07:27.805327223Z" level=info msg="CreateContainer within sandbox \"b40c47fa0c8ecf0aa7809a7329f19cb27a8b204facd907e1038d46db60b4965b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19b468868e83e90b4fdee5fb16c910b9d71436a8a69051746388c980db7ed24a\"" Jan 17 00:07:27.806334 containerd[1828]: time="2026-01-17T00:07:27.806306784Z" level=info msg="StartContainer for \"19b468868e83e90b4fdee5fb16c910b9d71436a8a69051746388c980db7ed24a\"" Jan 17 00:07:27.823421 containerd[1828]: time="2026-01-17T00:07:27.823360447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8sqwz,Uid:f441f013-7e03-4797-98f8-4b9558f88d2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24c42a9677b293c12fd8cddb8fe1f195e8c03967c89e99dc1b413d156a3bc9fe\"" Jan 17 00:07:27.827546 containerd[1828]: time="2026-01-17T00:07:27.827393412Z" level=info msg="CreateContainer within sandbox \"24c42a9677b293c12fd8cddb8fe1f195e8c03967c89e99dc1b413d156a3bc9fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:07:27.864675 containerd[1828]: time="2026-01-17T00:07:27.864542223Z" level=info msg="CreateContainer within sandbox \"24c42a9677b293c12fd8cddb8fe1f195e8c03967c89e99dc1b413d156a3bc9fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a058178fe19c67eed813ec9f6971229f8398521a5e71a5d53bf19743385208b\"" Jan 17 00:07:27.864675 containerd[1828]: time="2026-01-17T00:07:27.864558863Z" level=info msg="StartContainer for \"19b468868e83e90b4fdee5fb16c910b9d71436a8a69051746388c980db7ed24a\" returns successfully" Jan 17 00:07:27.864675 containerd[1828]: time="2026-01-17T00:07:27.865309184Z" level=info msg="StartContainer for \"6a058178fe19c67eed813ec9f6971229f8398521a5e71a5d53bf19743385208b\"" Jan 17 00:07:27.936248 containerd[1828]: time="2026-01-17T00:07:27.936205399Z" level=info msg="StartContainer for \"6a058178fe19c67eed813ec9f6971229f8398521a5e71a5d53bf19743385208b\" returns successfully" Jan 17 00:07:28.461130 kubelet[3313]: I0117 00:07:28.460882 3313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vf6sd" podStartSLOduration=22.460864788 podStartE2EDuration="22.460864788s" podCreationTimestamp="2026-01-17 00:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:07:28.458887026 +0000 UTC 
m=+27.265265760" watchObservedRunningTime="2026-01-17 00:07:28.460864788 +0000 UTC m=+27.267243522" Jan 17 00:07:28.500702 kubelet[3313]: I0117 00:07:28.500452 3313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8sqwz" podStartSLOduration=22.500434362 podStartE2EDuration="22.500434362s" podCreationTimestamp="2026-01-17 00:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:07:28.478512452 +0000 UTC m=+27.284891186" watchObservedRunningTime="2026-01-17 00:07:28.500434362 +0000 UTC m=+27.306813096" Jan 17 00:08:35.845391 systemd[1]: Started sshd@7-10.200.20.22:22-10.200.16.10:36652.service - OpenSSH per-connection server daemon (10.200.16.10:36652). Jan 17 00:08:36.290470 sshd[4686]: Accepted publickey for core from 10.200.16.10 port 36652 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:36.291810 sshd[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:36.296014 systemd-logind[1786]: New session 10 of user core. Jan 17 00:08:36.304364 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:08:36.710045 sshd[4686]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:36.714103 systemd[1]: sshd@7-10.200.20.22:22-10.200.16.10:36652.service: Deactivated successfully. Jan 17 00:08:36.718493 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:08:36.719378 systemd-logind[1786]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:08:36.720172 systemd-logind[1786]: Removed session 10. Jan 17 00:08:41.790326 systemd[1]: Started sshd@8-10.200.20.22:22-10.200.16.10:60142.service - OpenSSH per-connection server daemon (10.200.16.10:60142). Jan 17 00:08:42.242017 sshd[4703]: Accepted publickey for core from 10.200.16.10 port 60142 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:42.243379 sshd[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:42.247136 systemd-logind[1786]: New session 11 of user core. Jan 17 00:08:42.252422 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:08:42.632951 sshd[4703]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:42.635563 systemd-logind[1786]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:08:42.635804 systemd[1]: sshd@8-10.200.20.22:22-10.200.16.10:60142.service: Deactivated successfully. Jan 17 00:08:42.639492 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:08:42.641884 systemd-logind[1786]: Removed session 11. Jan 17 00:08:47.716341 systemd[1]: Started sshd@9-10.200.20.22:22-10.200.16.10:60152.service - OpenSSH per-connection server daemon (10.200.16.10:60152). Jan 17 00:08:48.162423 sshd[4717]: Accepted publickey for core from 10.200.16.10 port 60152 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:48.163740 sshd[4717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:48.168693 systemd-logind[1786]: New session 12 of user core. Jan 17 00:08:48.174363 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:08:48.554335 sshd[4717]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:48.557590 systemd[1]: sshd@9-10.200.20.22:22-10.200.16.10:60152.service: Deactivated successfully. 
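The "Observed pod startup duration" entries above report two figures per pod: podStartE2EDuration (observed running time minus pod creation time) and podStartSLOduration, which for the cilium-vblzn pod is exactly the end-to-end figure minus the image-pull window bounded by firstStartedPulling and lastFinishedPulling. A quick check of that arithmetic against the timestamps in the log (illustrative Python, not kubelet code; the timestamps are copied from the entries above):

```python
from datetime import datetime, timezone

def parse(ts: str) -> datetime:
    # Log timestamps look like "2026-01-17 00:07:06.679354915 +0000 UTC".
    # %f accepts at most microseconds, so trim any fraction to six digits.
    date, clock = ts.split()[:2]
    if "." in clock:
        whole, frac = clock.split(".")
        clock = f"{whole}.{frac[:6]}"
        fmt = "%Y-%m-%d %H:%M:%S.%f"
    else:
        fmt = "%Y-%m-%d %H:%M:%S"
    return datetime.strptime(f"{date} {clock}", fmt).replace(tzinfo=timezone.utc)

created  = parse("2026-01-17 00:07:06 +0000 UTC")            # podCreationTimestamp
pull_beg = parse("2026-01-17 00:07:06.679354915 +0000 UTC")  # firstStartedPulling
pull_end = parse("2026-01-17 00:07:14.14134717 +0000 UTC")   # lastFinishedPulling
running  = parse("2026-01-17 00:07:20.451616451 +0000 UTC")  # watchObservedRunningTime

e2e = (running - created).total_seconds()          # ~14.45 s, matches podStartE2EDuration
slo = e2e - (pull_end - pull_beg).total_seconds()  # ~6.99 s, matches podStartSLOduration
print(f"e2e={e2e:.6f}s  slo={slo:.6f}s")
```

The coredns pods show firstStartedPulling/lastFinishedPulling of 0001-01-01, i.e. no pull was recorded for them, which is why their SLO and E2E durations are identical (22.46 s and 22.50 s).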
Jan 17 00:08:48.560779 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:08:48.560895 systemd-logind[1786]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:08:48.562625 systemd-logind[1786]: Removed session 12. Jan 17 00:08:53.635372 systemd[1]: Started sshd@10-10.200.20.22:22-10.200.16.10:35880.service - OpenSSH per-connection server daemon (10.200.16.10:35880). Jan 17 00:08:54.080591 sshd[4732]: Accepted publickey for core from 10.200.16.10 port 35880 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:54.081726 sshd[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:54.085683 systemd-logind[1786]: New session 13 of user core. Jan 17 00:08:54.091367 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:08:54.472080 sshd[4732]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:54.475445 systemd[1]: sshd@10-10.200.20.22:22-10.200.16.10:35880.service: Deactivated successfully. Jan 17 00:08:54.478225 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:08:54.479035 systemd-logind[1786]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:08:54.480070 systemd-logind[1786]: Removed session 13. Jan 17 00:08:54.548341 systemd[1]: Started sshd@11-10.200.20.22:22-10.200.16.10:35882.service - OpenSSH per-connection server daemon (10.200.16.10:35882). Jan 17 00:08:54.997578 sshd[4746]: Accepted publickey for core from 10.200.16.10 port 35882 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:54.998845 sshd[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:55.002611 systemd-logind[1786]: New session 14 of user core. Jan 17 00:08:55.011413 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:08:55.150687 update_engine[1789]: I20260117 00:08:55.150233 1789 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 00:08:55.150687 update_engine[1789]: I20260117 00:08:55.150295 1789 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 00:08:55.150687 update_engine[1789]: I20260117 00:08:55.150481 1789 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 00:08:55.151349 update_engine[1789]: I20260117 00:08:55.150856 1789 omaha_request_params.cc:62] Current group set to lts Jan 17 00:08:55.151349 update_engine[1789]: I20260117 00:08:55.150944 1789 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 00:08:55.151349 update_engine[1789]: I20260117 00:08:55.150954 1789 update_attempter.cc:643] Scheduling an action processor start. 
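Each SSH login above follows the same sequence: sshd accepts the public key, pam_unix opens a session for user core, systemd-logind registers "New session N", and systemd runs it as a session-N.scope unit that is deactivated again on logout. On a live host those sessions can be inspected with loginctl; a minimal sketch (the session id "10" is simply the first one seen in this log, substitute a current one):

```python
import subprocess

def loginctl(*args: str) -> str:
    # Thin wrapper around the systemd-logind CLI; assumes loginctl is on PATH.
    return subprocess.run(["loginctl", *args], capture_output=True, text=True, check=True).stdout

print(loginctl("list-sessions"))        # one row per active session (id, user, seat, ...)
print(loginctl("show-session", "10"))   # properties of one session, e.g. its scope unit and remote host
```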
Jan 17 00:08:55.151349 update_engine[1789]: I20260117 00:08:55.150969 1789 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:08:55.151349 update_engine[1789]: I20260117 00:08:55.150997 1789 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 00:08:55.151349 update_engine[1789]: I20260117 00:08:55.151042 1789 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:08:55.151349 update_engine[1789]: I20260117 00:08:55.151049 1789 omaha_request_action.cc:272] Request: Jan 17 00:08:55.151349 update_engine[1789]: Jan 17 00:08:55.151349 update_engine[1789]: Jan 17 00:08:55.151349 update_engine[1789]: Jan 17 00:08:55.151349 update_engine[1789]: Jan 17 00:08:55.151349 update_engine[1789]: Jan 17 00:08:55.151349 update_engine[1789]: Jan 17 00:08:55.151349 update_engine[1789]: Jan 17 00:08:55.151349 update_engine[1789]: Jan 17 00:08:55.151349 update_engine[1789]: I20260117 00:08:55.151055 1789 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:08:55.151629 locksmithd[1873]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 00:08:55.152033 update_engine[1789]: I20260117 00:08:55.152006 1789 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:08:55.152315 update_engine[1789]: I20260117 00:08:55.152288 1789 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:08:55.180422 update_engine[1789]: E20260117 00:08:55.180368 1789 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:08:55.180540 update_engine[1789]: I20260117 00:08:55.180474 1789 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 00:08:55.424065 sshd[4746]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:55.426794 systemd-logind[1786]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:08:55.427660 systemd[1]: sshd@11-10.200.20.22:22-10.200.16.10:35882.service: Deactivated successfully. Jan 17 00:08:55.431962 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:08:55.434996 systemd-logind[1786]: Removed session 14. Jan 17 00:08:55.516323 systemd[1]: Started sshd@12-10.200.20.22:22-10.200.16.10:35894.service - OpenSSH per-connection server daemon (10.200.16.10:35894). Jan 17 00:08:55.965017 sshd[4757]: Accepted publickey for core from 10.200.16.10 port 35894 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:55.966373 sshd[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:55.970162 systemd-logind[1786]: New session 15 of user core. Jan 17 00:08:55.976349 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:08:56.353948 sshd[4757]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:56.357349 systemd[1]: sshd@12-10.200.20.22:22-10.200.16.10:35894.service: Deactivated successfully. Jan 17 00:08:56.360549 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:08:56.361564 systemd-logind[1786]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:08:56.362327 systemd-logind[1786]: Removed session 15. Jan 17 00:09:01.425343 systemd[1]: Started sshd@13-10.200.20.22:22-10.200.16.10:51394.service - OpenSSH per-connection server daemon (10.200.16.10:51394). 
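update_engine above posts its Omaha request to the literal URL "disabled" (presumably how update checks were switched off on this image), so curl's "Unable to get http response code: Could not resolve host: disabled" is the expected outcome rather than a network fault. The same resolution failure can be reproduced directly, assuming only that no host named "disabled" is resolvable on the machine:

```python
import socket

try:
    # "disabled" is the literal host name update_engine tries to contact above.
    socket.getaddrinfo("disabled", 443)
except socket.gaierror as err:
    print(f"name resolution failed as expected: {err}")
```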
Jan 17 00:09:01.869037 sshd[4773]: Accepted publickey for core from 10.200.16.10 port 51394 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:01.870346 sshd[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:01.875442 systemd-logind[1786]: New session 16 of user core. Jan 17 00:09:01.880367 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:09:02.252145 sshd[4773]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:02.255649 systemd[1]: sshd@13-10.200.20.22:22-10.200.16.10:51394.service: Deactivated successfully. Jan 17 00:09:02.258800 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:09:02.259944 systemd-logind[1786]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:09:02.261093 systemd-logind[1786]: Removed session 16. Jan 17 00:09:04.433961 waagent[2012]: 2026-01-17T00:09:04.433170Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 17 00:09:04.441148 waagent[2012]: 2026-01-17T00:09:04.440408Z INFO ExtHandler Jan 17 00:09:04.441148 waagent[2012]: 2026-01-17T00:09:04.440512Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ec2fe8bd-f3b5-41ae-9734-a2343a44366f eTag: 18007008571781598533 source: Fabric] Jan 17 00:09:04.441148 waagent[2012]: 2026-01-17T00:09:04.440828Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 17 00:09:04.441460 waagent[2012]: 2026-01-17T00:09:04.441411Z INFO ExtHandler Jan 17 00:09:04.441521 waagent[2012]: 2026-01-17T00:09:04.441493Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 17 00:09:04.510845 waagent[2012]: 2026-01-17T00:09:04.510801Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:09:04.581758 waagent[2012]: 2026-01-17T00:09:04.581668Z INFO ExtHandler Downloaded certificate {'thumbprint': '531DAA31404402BB8AF0A0B50E07B1ED4919B511', 'hasPrivateKey': True} Jan 17 00:09:04.582259 waagent[2012]: 2026-01-17T00:09:04.582215Z INFO ExtHandler Fetch goal state completed Jan 17 00:09:04.582602 waagent[2012]: 2026-01-17T00:09:04.582566Z INFO ExtHandler ExtHandler Jan 17 00:09:04.582668 waagent[2012]: 2026-01-17T00:09:04.582641Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: fd541280-d97c-4fef-8756-a03438abb21d correlation ccd189bd-016f-4687-8941-69552d1579d1 created: 2026-01-17T00:08:56.410240Z] Jan 17 00:09:04.582945 waagent[2012]: 2026-01-17T00:09:04.582910Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:09:04.583467 waagent[2012]: 2026-01-17T00:09:04.583432Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 17 00:09:05.149931 update_engine[1789]: I20260117 00:09:05.149430 1789 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:09:05.149931 update_engine[1789]: I20260117 00:09:05.149652 1789 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:09:05.149931 update_engine[1789]: I20260117 00:09:05.149859 1789 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 00:09:05.251572 update_engine[1789]: E20260117 00:09:05.251452 1789 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:09:05.251572 update_engine[1789]: I20260117 00:09:05.251535 1789 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 00:09:07.346335 systemd[1]: Started sshd@14-10.200.20.22:22-10.200.16.10:51400.service - OpenSSH per-connection server daemon (10.200.16.10:51400). Jan 17 00:09:07.827707 sshd[4793]: Accepted publickey for core from 10.200.16.10 port 51400 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:07.829095 sshd[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:07.833406 systemd-logind[1786]: New session 17 of user core. Jan 17 00:09:07.838399 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:09:08.236337 sshd[4793]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:08.239366 systemd[1]: sshd@14-10.200.20.22:22-10.200.16.10:51400.service: Deactivated successfully. Jan 17 00:09:08.242674 systemd-logind[1786]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:09:08.243199 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:09:08.244351 systemd-logind[1786]: Removed session 17. Jan 17 00:09:08.302343 systemd[1]: Started sshd@15-10.200.20.22:22-10.200.16.10:51402.service - OpenSSH per-connection server daemon (10.200.16.10:51402). Jan 17 00:09:08.712967 sshd[4807]: Accepted publickey for core from 10.200.16.10 port 51402 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:08.714302 sshd[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:08.718420 systemd-logind[1786]: New session 18 of user core. Jan 17 00:09:08.728394 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:09:09.129191 sshd[4807]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:09.132264 systemd[1]: sshd@15-10.200.20.22:22-10.200.16.10:51402.service: Deactivated successfully. Jan 17 00:09:09.135773 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:09:09.137021 systemd-logind[1786]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:09:09.138055 systemd-logind[1786]: Removed session 18. Jan 17 00:09:09.202346 systemd[1]: Started sshd@16-10.200.20.22:22-10.200.16.10:51416.service - OpenSSH per-connection server daemon (10.200.16.10:51416). Jan 17 00:09:09.608717 sshd[4819]: Accepted publickey for core from 10.200.16.10 port 51416 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:09.612235 sshd[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:09.616210 systemd-logind[1786]: New session 19 of user core. Jan 17 00:09:09.620453 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:09:10.507327 sshd[4819]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:10.510717 systemd[1]: sshd@16-10.200.20.22:22-10.200.16.10:51416.service: Deactivated successfully. Jan 17 00:09:10.515176 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:09:10.517284 systemd-logind[1786]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:09:10.518771 systemd-logind[1786]: Removed session 19. Jan 17 00:09:10.610511 systemd[1]: Started sshd@17-10.200.20.22:22-10.200.16.10:57430.service - OpenSSH per-connection server daemon (10.200.16.10:57430). 
Jan 17 00:09:11.099368 sshd[4838]: Accepted publickey for core from 10.200.16.10 port 57430 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:11.100644 sshd[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:11.104195 systemd-logind[1786]: New session 20 of user core. Jan 17 00:09:11.109453 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:09:11.611408 sshd[4838]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:11.615251 systemd[1]: sshd@17-10.200.20.22:22-10.200.16.10:57430.service: Deactivated successfully. Jan 17 00:09:11.617839 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:09:11.618770 systemd-logind[1786]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:09:11.620006 systemd-logind[1786]: Removed session 20. Jan 17 00:09:11.699352 systemd[1]: Started sshd@18-10.200.20.22:22-10.200.16.10:57446.service - OpenSSH per-connection server daemon (10.200.16.10:57446). Jan 17 00:09:12.179714 sshd[4850]: Accepted publickey for core from 10.200.16.10 port 57446 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:12.181001 sshd[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:12.185785 systemd-logind[1786]: New session 21 of user core. Jan 17 00:09:12.198352 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:09:12.579166 sshd[4850]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:12.583456 systemd[1]: sshd@18-10.200.20.22:22-10.200.16.10:57446.service: Deactivated successfully. Jan 17 00:09:12.585643 systemd-logind[1786]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:09:12.586078 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:09:12.587858 systemd-logind[1786]: Removed session 21. Jan 17 00:09:15.151697 update_engine[1789]: I20260117 00:09:15.151633 1789 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:09:15.152063 update_engine[1789]: I20260117 00:09:15.151846 1789 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:09:15.152089 update_engine[1789]: I20260117 00:09:15.152057 1789 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:09:15.192242 update_engine[1789]: E20260117 00:09:15.192179 1789 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:09:15.192399 update_engine[1789]: I20260117 00:09:15.192330 1789 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 00:09:17.667345 systemd[1]: Started sshd@19-10.200.20.22:22-10.200.16.10:57462.service - OpenSSH per-connection server daemon (10.200.16.10:57462). Jan 17 00:09:18.149503 sshd[4867]: Accepted publickey for core from 10.200.16.10 port 57462 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:18.150727 sshd[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:18.154445 systemd-logind[1786]: New session 22 of user core. Jan 17 00:09:18.160338 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:09:18.556224 sshd[4867]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:18.558850 systemd-logind[1786]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:09:18.559205 systemd[1]: sshd@19-10.200.20.22:22-10.200.16.10:57462.service: Deactivated successfully. 
Jan 17 00:09:18.562894 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:09:18.564089 systemd-logind[1786]: Removed session 22. Jan 17 00:09:23.633411 systemd[1]: Started sshd@20-10.200.20.22:22-10.200.16.10:54252.service - OpenSSH per-connection server daemon (10.200.16.10:54252). Jan 17 00:09:24.079078 sshd[4880]: Accepted publickey for core from 10.200.16.10 port 54252 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:24.080314 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:24.084392 systemd-logind[1786]: New session 23 of user core. Jan 17 00:09:24.088349 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:09:24.466295 sshd[4880]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:24.469322 systemd[1]: sshd@20-10.200.20.22:22-10.200.16.10:54252.service: Deactivated successfully. Jan 17 00:09:24.469464 systemd-logind[1786]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:09:24.473073 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:09:24.475741 systemd-logind[1786]: Removed session 23. Jan 17 00:09:25.150612 update_engine[1789]: I20260117 00:09:25.150527 1789 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:09:25.151010 update_engine[1789]: I20260117 00:09:25.150801 1789 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:09:25.151036 update_engine[1789]: I20260117 00:09:25.151008 1789 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:09:25.160115 update_engine[1789]: E20260117 00:09:25.160089 1789 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:09:25.160176 update_engine[1789]: I20260117 00:09:25.160145 1789 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:09:25.160176 update_engine[1789]: I20260117 00:09:25.160156 1789 omaha_request_action.cc:617] Omaha request response: Jan 17 00:09:25.160245 update_engine[1789]: E20260117 00:09:25.160228 1789 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 00:09:25.160270 update_engine[1789]: I20260117 00:09:25.160248 1789 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 00:09:25.160270 update_engine[1789]: I20260117 00:09:25.160254 1789 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:09:25.160270 update_engine[1789]: I20260117 00:09:25.160259 1789 update_attempter.cc:306] Processing Done. Jan 17 00:09:25.160327 update_engine[1789]: E20260117 00:09:25.160272 1789 update_attempter.cc:619] Update failed. Jan 17 00:09:25.160327 update_engine[1789]: I20260117 00:09:25.160279 1789 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 00:09:25.160327 update_engine[1789]: I20260117 00:09:25.160282 1789 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 00:09:25.160327 update_engine[1789]: I20260117 00:09:25.160287 1789 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 17 00:09:25.160410 update_engine[1789]: I20260117 00:09:25.160348 1789 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:09:25.160410 update_engine[1789]: I20260117 00:09:25.160368 1789 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:09:25.160410 update_engine[1789]: I20260117 00:09:25.160374 1789 omaha_request_action.cc:272] Request: Jan 17 00:09:25.160410 update_engine[1789]: Jan 17 00:09:25.160410 update_engine[1789]: Jan 17 00:09:25.160410 update_engine[1789]: Jan 17 00:09:25.160410 update_engine[1789]: Jan 17 00:09:25.160410 update_engine[1789]: Jan 17 00:09:25.160410 update_engine[1789]: Jan 17 00:09:25.160410 update_engine[1789]: I20260117 00:09:25.160380 1789 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:09:25.160581 update_engine[1789]: I20260117 00:09:25.160488 1789 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:09:25.160859 update_engine[1789]: I20260117 00:09:25.160621 1789 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:09:25.160912 locksmithd[1873]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 00:09:25.441752 update_engine[1789]: E20260117 00:09:25.441598 1789 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:09:25.441752 update_engine[1789]: I20260117 00:09:25.441706 1789 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:09:25.441752 update_engine[1789]: I20260117 00:09:25.441721 1789 omaha_request_action.cc:617] Omaha request response: Jan 17 00:09:25.441752 update_engine[1789]: I20260117 00:09:25.441733 1789 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:09:25.441752 update_engine[1789]: I20260117 00:09:25.441742 1789 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:09:25.441752 update_engine[1789]: I20260117 00:09:25.441749 1789 update_attempter.cc:306] Processing Done. Jan 17 00:09:25.441752 update_engine[1789]: I20260117 00:09:25.441756 1789 update_attempter.cc:310] Error event sent. Jan 17 00:09:25.442722 update_engine[1789]: I20260117 00:09:25.441767 1789 update_check_scheduler.cc:74] Next update check in 45m25s Jan 17 00:09:25.442750 locksmithd[1873]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 00:09:29.533501 systemd[1]: Started sshd@21-10.200.20.22:22-10.200.16.10:58458.service - OpenSSH per-connection server daemon (10.200.16.10:58458). Jan 17 00:09:29.940572 sshd[4894]: Accepted publickey for core from 10.200.16.10 port 58458 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:29.941879 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:29.945579 systemd-logind[1786]: New session 24 of user core. Jan 17 00:09:29.949511 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:09:30.301598 sshd[4894]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:30.305588 systemd[1]: sshd@21-10.200.20.22:22-10.200.16.10:58458.service: Deactivated successfully. Jan 17 00:09:30.308515 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:09:30.308527 systemd-logind[1786]: Session 24 logged out. Waiting for processes to exit. 
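After three fetch retries roughly ten seconds apart (00:08:55, 00:09:05, 00:09:15, with a final attempt at 00:09:25), update_engine converts the transfer failure into kActionCodeOmahaErrorInHTTPResponse, sends an error event (which also fails to resolve "disabled"), and returns to idle with the next check scheduled in 45m25s. That interval puts the next attempt at about 00:54:50; a trivial check of the arithmetic:

```python
from datetime import datetime, timedelta

# "Next update check in 45m25s" was logged at 00:09:25 on Jan 17.
last_attempt = datetime(2026, 1, 17, 0, 9, 25)
next_check = last_attempt + timedelta(minutes=45, seconds=25)
print(next_check.strftime("%H:%M:%S"))  # 00:54:50
```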
Jan 17 00:09:30.310365 systemd-logind[1786]: Removed session 24. Jan 17 00:09:30.393319 systemd[1]: Started sshd@22-10.200.20.22:22-10.200.16.10:58462.service - OpenSSH per-connection server daemon (10.200.16.10:58462). Jan 17 00:09:30.877368 sshd[4907]: Accepted publickey for core from 10.200.16.10 port 58462 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:30.878802 sshd[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:30.882314 systemd-logind[1786]: New session 25 of user core. Jan 17 00:09:30.890366 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:09:33.132745 containerd[1828]: time="2026-01-17T00:09:33.132605611Z" level=info msg="StopContainer for \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\" with timeout 30 (s)" Jan 17 00:09:33.135141 containerd[1828]: time="2026-01-17T00:09:33.134273012Z" level=info msg="Stop container \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\" with signal terminated" Jan 17 00:09:33.157110 containerd[1828]: time="2026-01-17T00:09:33.156097111Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:09:33.166512 containerd[1828]: time="2026-01-17T00:09:33.166473960Z" level=info msg="StopContainer for \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\" with timeout 2 (s)" Jan 17 00:09:33.167018 containerd[1828]: time="2026-01-17T00:09:33.166749680Z" level=info msg="Stop container \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\" with signal terminated" Jan 17 00:09:33.178694 systemd-networkd[1403]: lxc_health: Link DOWN Jan 17 00:09:33.178700 systemd-networkd[1403]: lxc_health: Lost carrier Jan 17 00:09:33.187496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b-rootfs.mount: Deactivated successfully. Jan 17 00:09:33.212528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97-rootfs.mount: Deactivated successfully. 
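The "StopContainer ... with timeout 30 (s)" / "with timeout 2 (s)" entries above are graceful stops: the runtime delivers SIGTERM ("Stop container ... with signal terminated") and only escalates if the process outlives the grace period. A generic sketch of that pattern (illustrative only, not containerd's implementation; the pid is a placeholder):

```python
import os, signal, time

def stop_with_grace(pid: int, timeout: float) -> None:
    """Send SIGTERM, wait up to `timeout` seconds for exit, then SIGKILL."""
    os.kill(pid, signal.SIGTERM)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)              # probe; raises ProcessLookupError once the pid is gone
        except ProcessLookupError:
            return                       # exited within the grace period
        time.sleep(0.1)
    try:
        os.kill(pid, signal.SIGKILL)     # grace period expired, force-kill
    except ProcessLookupError:
        pass                             # raced with a late exit
```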
Jan 17 00:09:33.277556 containerd[1828]: time="2026-01-17T00:09:33.277498296Z" level=info msg="shim disconnected" id=c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b namespace=k8s.io Jan 17 00:09:33.277556 containerd[1828]: time="2026-01-17T00:09:33.277551696Z" level=warning msg="cleaning up after shim disconnected" id=c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b namespace=k8s.io Jan 17 00:09:33.277556 containerd[1828]: time="2026-01-17T00:09:33.277559856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:09:33.277849 containerd[1828]: time="2026-01-17T00:09:33.277817736Z" level=info msg="shim disconnected" id=2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97 namespace=k8s.io Jan 17 00:09:33.279684 containerd[1828]: time="2026-01-17T00:09:33.277908616Z" level=warning msg="cleaning up after shim disconnected" id=2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97 namespace=k8s.io Jan 17 00:09:33.279684 containerd[1828]: time="2026-01-17T00:09:33.277933616Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:09:33.290698 containerd[1828]: time="2026-01-17T00:09:33.290653427Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:09:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:09:33.297989 containerd[1828]: time="2026-01-17T00:09:33.297934953Z" level=info msg="StopContainer for \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\" returns successfully" Jan 17 00:09:33.298693 containerd[1828]: time="2026-01-17T00:09:33.298667074Z" level=info msg="StopPodSandbox for \"aa39fb485aaeb57313c71801e41f559cb04f307e49cd1f0a5d4d3c09108f980b\"" Jan 17 00:09:33.298817 containerd[1828]: time="2026-01-17T00:09:33.298801594Z" level=info msg="Container to stop \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:09:33.301045 containerd[1828]: time="2026-01-17T00:09:33.299661115Z" level=info msg="StopContainer for \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\" returns successfully" Jan 17 00:09:33.301208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa39fb485aaeb57313c71801e41f559cb04f307e49cd1f0a5d4d3c09108f980b-shm.mount: Deactivated successfully. 
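Once the containers have exited, the kubelet asks containerd to stop the pod sandboxes ("StopPodSandbox for \"aa39fb48...\"" above) and tear down their network namespaces. The same CRI calls can be issued by hand with crictl when debugging; a sketch, assuming crictl is installed and pointed at containerd's CRI socket (the sandbox id is the one from the log and would normally come from `crictl pods`):

```python
import subprocess

SANDBOX = "aa39fb485aaeb57313c71801e41f559cb04f307e49cd1f0a5d4d3c09108f980b"  # id seen in the log

def crictl(*args: str) -> str:
    # Thin wrapper; raises if the command fails.
    return subprocess.run(["crictl", *args], capture_output=True, text=True, check=True).stdout

print(crictl("inspectp", SANDBOX))   # sandbox metadata and state
crictl("stopp", SANDBOX)             # CRI StopPodSandbox
crictl("rmp", SANDBOX)               # CRI RemovePodSandbox
```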
Jan 17 00:09:33.301907 containerd[1828]: time="2026-01-17T00:09:33.301472996Z" level=info msg="StopPodSandbox for \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\"" Jan 17 00:09:33.301907 containerd[1828]: time="2026-01-17T00:09:33.301516396Z" level=info msg="Container to stop \"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:09:33.301907 containerd[1828]: time="2026-01-17T00:09:33.301529076Z" level=info msg="Container to stop \"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:09:33.301907 containerd[1828]: time="2026-01-17T00:09:33.301538636Z" level=info msg="Container to stop \"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:09:33.301907 containerd[1828]: time="2026-01-17T00:09:33.301548636Z" level=info msg="Container to stop \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:09:33.301907 containerd[1828]: time="2026-01-17T00:09:33.301558076Z" level=info msg="Container to stop \"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:09:33.306013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7-shm.mount: Deactivated successfully. Jan 17 00:09:33.350358 containerd[1828]: time="2026-01-17T00:09:33.350035558Z" level=info msg="shim disconnected" id=aa39fb485aaeb57313c71801e41f559cb04f307e49cd1f0a5d4d3c09108f980b namespace=k8s.io Jan 17 00:09:33.351166 containerd[1828]: time="2026-01-17T00:09:33.351144719Z" level=warning msg="cleaning up after shim disconnected" id=aa39fb485aaeb57313c71801e41f559cb04f307e49cd1f0a5d4d3c09108f980b namespace=k8s.io Jan 17 00:09:33.351248 containerd[1828]: time="2026-01-17T00:09:33.351236039Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:09:33.352070 containerd[1828]: time="2026-01-17T00:09:33.350561918Z" level=info msg="shim disconnected" id=f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7 namespace=k8s.io Jan 17 00:09:33.352171 containerd[1828]: time="2026-01-17T00:09:33.352157000Z" level=warning msg="cleaning up after shim disconnected" id=f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7 namespace=k8s.io Jan 17 00:09:33.352239 containerd[1828]: time="2026-01-17T00:09:33.352226800Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:09:33.364243 containerd[1828]: time="2026-01-17T00:09:33.364205250Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:09:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:09:33.365352 containerd[1828]: time="2026-01-17T00:09:33.365325371Z" level=info msg="TearDown network for sandbox \"aa39fb485aaeb57313c71801e41f559cb04f307e49cd1f0a5d4d3c09108f980b\" successfully" Jan 17 00:09:33.365352 containerd[1828]: time="2026-01-17T00:09:33.365349131Z" level=info msg="StopPodSandbox for \"aa39fb485aaeb57313c71801e41f559cb04f307e49cd1f0a5d4d3c09108f980b\" returns successfully" Jan 17 00:09:33.366641 containerd[1828]: 
time="2026-01-17T00:09:33.366557732Z" level=info msg="TearDown network for sandbox \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" successfully" Jan 17 00:09:33.366641 containerd[1828]: time="2026-01-17T00:09:33.366580572Z" level=info msg="StopPodSandbox for \"f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7\" returns successfully" Jan 17 00:09:33.492931 kubelet[3313]: I0117 00:09:33.492816 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2356051a-97fb-43c1-816f-8d504e798ca2-clustermesh-secrets\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.492931 kubelet[3313]: I0117 00:09:33.492885 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a9e5056-a64d-4a85-b2bf-927dfd0eb505-cilium-config-path\") pod \"2a9e5056-a64d-4a85-b2bf-927dfd0eb505\" (UID: \"2a9e5056-a64d-4a85-b2bf-927dfd0eb505\") " Jan 17 00:09:33.492931 kubelet[3313]: I0117 00:09:33.492904 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-host-proc-sys-kernel\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.492931 kubelet[3313]: I0117 00:09:33.492918 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-hostproc\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.492931 kubelet[3313]: I0117 00:09:33.492933 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-cgroup\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.493420 kubelet[3313]: I0117 00:09:33.492953 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9h9b\" (UniqueName: \"kubernetes.io/projected/2356051a-97fb-43c1-816f-8d504e798ca2-kube-api-access-c9h9b\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.493420 kubelet[3313]: I0117 00:09:33.492972 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-run\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.493420 kubelet[3313]: I0117 00:09:33.492985 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-host-proc-sys-net\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.493420 kubelet[3313]: I0117 00:09:33.493000 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cni-path\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 
00:09:33.493420 kubelet[3313]: I0117 00:09:33.493015 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-bpf-maps\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.493420 kubelet[3313]: I0117 00:09:33.493028 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-etc-cni-netd\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.496462 kubelet[3313]: I0117 00:09:33.493044 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-config-path\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.496462 kubelet[3313]: I0117 00:09:33.493060 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2356051a-97fb-43c1-816f-8d504e798ca2-hubble-tls\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.496462 kubelet[3313]: I0117 00:09:33.493074 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-lib-modules\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.496462 kubelet[3313]: I0117 00:09:33.493090 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4275n\" (UniqueName: \"kubernetes.io/projected/2a9e5056-a64d-4a85-b2bf-927dfd0eb505-kube-api-access-4275n\") pod \"2a9e5056-a64d-4a85-b2bf-927dfd0eb505\" (UID: \"2a9e5056-a64d-4a85-b2bf-927dfd0eb505\") " Jan 17 00:09:33.496462 kubelet[3313]: I0117 00:09:33.493107 3313 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-xtables-lock\") pod \"2356051a-97fb-43c1-816f-8d504e798ca2\" (UID: \"2356051a-97fb-43c1-816f-8d504e798ca2\") " Jan 17 00:09:33.496462 kubelet[3313]: I0117 00:09:33.493188 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:09:33.496595 kubelet[3313]: I0117 00:09:33.494411 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:09:33.496595 kubelet[3313]: I0117 00:09:33.495294 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2356051a-97fb-43c1-816f-8d504e798ca2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:09:33.496595 kubelet[3313]: I0117 00:09:33.495335 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cni-path" (OuterVolumeSpecName: "cni-path") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:09:33.496595 kubelet[3313]: I0117 00:09:33.495353 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:09:33.496595 kubelet[3313]: I0117 00:09:33.495367 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:09:33.497011 kubelet[3313]: I0117 00:09:33.496987 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a9e5056-a64d-4a85-b2bf-927dfd0eb505-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a9e5056-a64d-4a85-b2bf-927dfd0eb505" (UID: "2a9e5056-a64d-4a85-b2bf-927dfd0eb505"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:09:33.497146 kubelet[3313]: I0117 00:09:33.497115 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:09:33.497236 kubelet[3313]: I0117 00:09:33.497211 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:09:33.497236 kubelet[3313]: I0117 00:09:33.497215 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-hostproc" (OuterVolumeSpecName: "hostproc") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:09:33.497311 kubelet[3313]: I0117 00:09:33.497298 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:09:33.499241 kubelet[3313]: I0117 00:09:33.499214 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2356051a-97fb-43c1-816f-8d504e798ca2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:09:33.499316 kubelet[3313]: I0117 00:09:33.499256 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:09:33.499583 kubelet[3313]: I0117 00:09:33.499562 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2356051a-97fb-43c1-816f-8d504e798ca2-kube-api-access-c9h9b" (OuterVolumeSpecName: "kube-api-access-c9h9b") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "kube-api-access-c9h9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:09:33.499677 kubelet[3313]: I0117 00:09:33.499665 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2356051a-97fb-43c1-816f-8d504e798ca2" (UID: "2356051a-97fb-43c1-816f-8d504e798ca2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:09:33.501001 kubelet[3313]: I0117 00:09:33.500977 3313 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a9e5056-a64d-4a85-b2bf-927dfd0eb505-kube-api-access-4275n" (OuterVolumeSpecName: "kube-api-access-4275n") pod "2a9e5056-a64d-4a85-b2bf-927dfd0eb505" (UID: "2a9e5056-a64d-4a85-b2bf-927dfd0eb505"). InnerVolumeSpecName "kube-api-access-4275n". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:09:33.593297 kubelet[3313]: I0117 00:09:33.593263 3313 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-host-proc-sys-net\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.593456 kubelet[3313]: I0117 00:09:33.593442 3313 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cni-path\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.593527 kubelet[3313]: I0117 00:09:33.593517 3313 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-run\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.593578 kubelet[3313]: I0117 00:09:33.593570 3313 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-bpf-maps\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.593630 kubelet[3313]: I0117 00:09:33.593622 3313 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-etc-cni-netd\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.593698 kubelet[3313]: I0117 00:09:33.593688 3313 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-config-path\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.593753 kubelet[3313]: I0117 00:09:33.593743 3313 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2356051a-97fb-43c1-816f-8d504e798ca2-hubble-tls\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.593804 kubelet[3313]: I0117 00:09:33.593795 3313 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-lib-modules\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.593852 kubelet[3313]: I0117 00:09:33.593843 3313 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4275n\" (UniqueName: \"kubernetes.io/projected/2a9e5056-a64d-4a85-b2bf-927dfd0eb505-kube-api-access-4275n\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.593904 kubelet[3313]: I0117 00:09:33.593896 3313 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-xtables-lock\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.593961 kubelet[3313]: I0117 00:09:33.593952 3313 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2356051a-97fb-43c1-816f-8d504e798ca2-clustermesh-secrets\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.594016 kubelet[3313]: I0117 00:09:33.594006 3313 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a9e5056-a64d-4a85-b2bf-927dfd0eb505-cilium-config-path\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.594071 kubelet[3313]: I0117 00:09:33.594061 3313 
reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-hostproc\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.594159 kubelet[3313]: I0117 00:09:33.594115 3313 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-cilium-cgroup\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.594227 kubelet[3313]: I0117 00:09:33.594217 3313 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2356051a-97fb-43c1-816f-8d504e798ca2-host-proc-sys-kernel\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.594280 kubelet[3313]: I0117 00:09:33.594271 3313 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c9h9b\" (UniqueName: \"kubernetes.io/projected/2356051a-97fb-43c1-816f-8d504e798ca2-kube-api-access-c9h9b\") on node \"ci-4081.3.6-n-93f9562822\" DevicePath \"\"" Jan 17 00:09:33.672023 kubelet[3313]: I0117 00:09:33.671993 3313 scope.go:117] "RemoveContainer" containerID="c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b" Jan 17 00:09:33.679235 containerd[1828]: time="2026-01-17T00:09:33.678857441Z" level=info msg="RemoveContainer for \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\"" Jan 17 00:09:33.691993 containerd[1828]: time="2026-01-17T00:09:33.691911732Z" level=info msg="RemoveContainer for \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\" returns successfully" Jan 17 00:09:33.692651 kubelet[3313]: I0117 00:09:33.692431 3313 scope.go:117] "RemoveContainer" containerID="c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b" Jan 17 00:09:33.693341 containerd[1828]: time="2026-01-17T00:09:33.693222813Z" level=error msg="ContainerStatus for \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\": not found" Jan 17 00:09:33.693417 kubelet[3313]: E0117 00:09:33.693384 3313 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\": not found" containerID="c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b" Jan 17 00:09:33.693819 kubelet[3313]: I0117 00:09:33.693416 3313 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b"} err="failed to get container status \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\": rpc error: code = NotFound desc = an error occurred when try to find container \"c20931f867bb10deef9ece30e967cf42f8b0fd63b548277c8ba7023b7bbee69b\": not found" Jan 17 00:09:33.693819 kubelet[3313]: I0117 00:09:33.693518 3313 scope.go:117] "RemoveContainer" containerID="2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97" Jan 17 00:09:33.695706 containerd[1828]: time="2026-01-17T00:09:33.695454935Z" level=info msg="RemoveContainer for \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\"" Jan 17 00:09:33.702827 containerd[1828]: time="2026-01-17T00:09:33.702572181Z" level=info msg="RemoveContainer for 
\"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\" returns successfully" Jan 17 00:09:33.702902 kubelet[3313]: I0117 00:09:33.702751 3313 scope.go:117] "RemoveContainer" containerID="603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716" Jan 17 00:09:33.703784 containerd[1828]: time="2026-01-17T00:09:33.703760102Z" level=info msg="RemoveContainer for \"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716\"" Jan 17 00:09:33.711174 containerd[1828]: time="2026-01-17T00:09:33.710619388Z" level=info msg="RemoveContainer for \"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716\" returns successfully" Jan 17 00:09:33.712085 kubelet[3313]: I0117 00:09:33.712016 3313 scope.go:117] "RemoveContainer" containerID="c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d" Jan 17 00:09:33.714337 containerd[1828]: time="2026-01-17T00:09:33.713831671Z" level=info msg="RemoveContainer for \"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d\"" Jan 17 00:09:33.720862 containerd[1828]: time="2026-01-17T00:09:33.720833157Z" level=info msg="RemoveContainer for \"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d\" returns successfully" Jan 17 00:09:33.721116 kubelet[3313]: I0117 00:09:33.721095 3313 scope.go:117] "RemoveContainer" containerID="e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da" Jan 17 00:09:33.722459 containerd[1828]: time="2026-01-17T00:09:33.722255718Z" level=info msg="RemoveContainer for \"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da\"" Jan 17 00:09:33.730386 containerd[1828]: time="2026-01-17T00:09:33.730328085Z" level=info msg="RemoveContainer for \"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da\" returns successfully" Jan 17 00:09:33.730552 kubelet[3313]: I0117 00:09:33.730507 3313 scope.go:117] "RemoveContainer" containerID="0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde" Jan 17 00:09:33.731733 containerd[1828]: time="2026-01-17T00:09:33.731506686Z" level=info msg="RemoveContainer for \"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde\"" Jan 17 00:09:33.738305 containerd[1828]: time="2026-01-17T00:09:33.738277452Z" level=info msg="RemoveContainer for \"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde\" returns successfully" Jan 17 00:09:33.738538 kubelet[3313]: I0117 00:09:33.738512 3313 scope.go:117] "RemoveContainer" containerID="2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97" Jan 17 00:09:33.738813 containerd[1828]: time="2026-01-17T00:09:33.738731252Z" level=error msg="ContainerStatus for \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\": not found" Jan 17 00:09:33.738934 kubelet[3313]: E0117 00:09:33.738863 3313 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\": not found" containerID="2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97" Jan 17 00:09:33.738934 kubelet[3313]: I0117 00:09:33.738887 3313 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97"} err="failed to get container status 
\"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b06cadd2df8b4b2fb030b04d212e2ebee841c0b0454bce0267699213d9b8c97\": not found" Jan 17 00:09:33.738934 kubelet[3313]: I0117 00:09:33.738919 3313 scope.go:117] "RemoveContainer" containerID="603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716" Jan 17 00:09:33.739132 containerd[1828]: time="2026-01-17T00:09:33.739095292Z" level=error msg="ContainerStatus for \"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716\": not found" Jan 17 00:09:33.739317 kubelet[3313]: E0117 00:09:33.739240 3313 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716\": not found" containerID="603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716" Jan 17 00:09:33.739317 kubelet[3313]: I0117 00:09:33.739269 3313 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716"} err="failed to get container status \"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716\": rpc error: code = NotFound desc = an error occurred when try to find container \"603da8a7ab82eb6bca1a58df6a4f8aea78b49ddeb5a03ebb6cd98afc4619d716\": not found" Jan 17 00:09:33.739317 kubelet[3313]: I0117 00:09:33.739289 3313 scope.go:117] "RemoveContainer" containerID="c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d" Jan 17 00:09:33.739782 containerd[1828]: time="2026-01-17T00:09:33.739529133Z" level=error msg="ContainerStatus for \"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d\": not found" Jan 17 00:09:33.739843 kubelet[3313]: E0117 00:09:33.739669 3313 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d\": not found" containerID="c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d" Jan 17 00:09:33.739843 kubelet[3313]: I0117 00:09:33.739693 3313 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d"} err="failed to get container status \"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c848c707547aefa2ef6cfe5b636926ff020ad2272c108f65c04694038201308d\": not found" Jan 17 00:09:33.739843 kubelet[3313]: I0117 00:09:33.739711 3313 scope.go:117] "RemoveContainer" containerID="e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da" Jan 17 00:09:33.740152 containerd[1828]: time="2026-01-17T00:09:33.740068773Z" level=error msg="ContainerStatus for \"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da\": not found" Jan 17 00:09:33.740200 kubelet[3313]: E0117 00:09:33.740171 3313 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da\": not found" containerID="e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da" Jan 17 00:09:33.740243 kubelet[3313]: I0117 00:09:33.740191 3313 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da"} err="failed to get container status \"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da\": rpc error: code = NotFound desc = an error occurred when try to find container \"e15555ed9bf6f4612eb585b247d27bf749e496d22d151a1226bc16891e3d94da\": not found" Jan 17 00:09:33.740243 kubelet[3313]: I0117 00:09:33.740207 3313 scope.go:117] "RemoveContainer" containerID="0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde" Jan 17 00:09:33.740502 kubelet[3313]: E0117 00:09:33.740425 3313 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde\": not found" containerID="0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde" Jan 17 00:09:33.740502 kubelet[3313]: I0117 00:09:33.740442 3313 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde"} err="failed to get container status \"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde\": rpc error: code = NotFound desc = an error occurred when try to find container \"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde\": not found" Jan 17 00:09:33.740582 containerd[1828]: time="2026-01-17T00:09:33.740337214Z" level=error msg="ContainerStatus for \"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0bf88918bb3475a5189ed00d0b8f94ed8542fb8dd74f0d9d13b799fc1d0cecde\": not found" Jan 17 00:09:34.138558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa39fb485aaeb57313c71801e41f559cb04f307e49cd1f0a5d4d3c09108f980b-rootfs.mount: Deactivated successfully. Jan 17 00:09:34.138704 systemd[1]: var-lib-kubelet-pods-2a9e5056\x2da64d\x2d4a85\x2db2bf\x2d927dfd0eb505-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4275n.mount: Deactivated successfully. Jan 17 00:09:34.138788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f96db44cd6d37279f8096baa01e1cc02443d376d1f4d5c9ac119c5df69cd2df7-rootfs.mount: Deactivated successfully. Jan 17 00:09:34.138883 systemd[1]: var-lib-kubelet-pods-2356051a\x2d97fb\x2d43c1\x2d816f\x2d8d504e798ca2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc9h9b.mount: Deactivated successfully. Jan 17 00:09:34.138978 systemd[1]: var-lib-kubelet-pods-2356051a\x2d97fb\x2d43c1\x2d816f\x2d8d504e798ca2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:09:34.139053 systemd[1]: var-lib-kubelet-pods-2356051a\x2d97fb\x2d43c1\x2d816f\x2d8d504e798ca2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 17 00:09:35.144711 sshd[4907]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:35.147761 systemd-logind[1786]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:09:35.147990 systemd[1]: sshd@22-10.200.20.22:22-10.200.16.10:58462.service: Deactivated successfully. Jan 17 00:09:35.151189 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:09:35.153387 systemd-logind[1786]: Removed session 25. Jan 17 00:09:35.221327 systemd[1]: Started sshd@23-10.200.20.22:22-10.200.16.10:58468.service - OpenSSH per-connection server daemon (10.200.16.10:58468). Jan 17 00:09:35.356885 kubelet[3313]: I0117 00:09:35.356155 3313 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2356051a-97fb-43c1-816f-8d504e798ca2" path="/var/lib/kubelet/pods/2356051a-97fb-43c1-816f-8d504e798ca2/volumes" Jan 17 00:09:35.356885 kubelet[3313]: I0117 00:09:35.356665 3313 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a9e5056-a64d-4a85-b2bf-927dfd0eb505" path="/var/lib/kubelet/pods/2a9e5056-a64d-4a85-b2bf-927dfd0eb505/volumes" Jan 17 00:09:35.666642 sshd[5070]: Accepted publickey for core from 10.200.16.10 port 58468 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:35.667955 sshd[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:35.672287 systemd-logind[1786]: New session 26 of user core. Jan 17 00:09:35.675715 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 00:09:36.435153 kubelet[3313]: E0117 00:09:36.434525 3313 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:09:37.023012 kubelet[3313]: I0117 00:09:37.020576 3313 memory_manager.go:355] "RemoveStaleState removing state" podUID="2a9e5056-a64d-4a85-b2bf-927dfd0eb505" containerName="cilium-operator" Jan 17 00:09:37.023012 kubelet[3313]: I0117 00:09:37.020607 3313 memory_manager.go:355] "RemoveStaleState removing state" podUID="2356051a-97fb-43c1-816f-8d504e798ca2" containerName="cilium-agent" Jan 17 00:09:37.053968 sshd[5070]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:37.058960 systemd[1]: sshd@23-10.200.20.22:22-10.200.16.10:58468.service: Deactivated successfully. Jan 17 00:09:37.067178 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:09:37.067553 systemd-logind[1786]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:09:37.069571 systemd-logind[1786]: Removed session 26. 
Jan 17 00:09:37.113957 kubelet[3313]: I0117 00:09:37.113615 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-cilium-config-path\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.113957 kubelet[3313]: I0117 00:09:37.113654 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-hostproc\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.113957 kubelet[3313]: I0117 00:09:37.113671 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-lib-modules\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.113957 kubelet[3313]: I0117 00:09:37.113687 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-xtables-lock\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.113957 kubelet[3313]: I0117 00:09:37.113702 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-clustermesh-secrets\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.113957 kubelet[3313]: I0117 00:09:37.113718 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-hubble-tls\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.114220 kubelet[3313]: I0117 00:09:37.113733 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-cni-path\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.114220 kubelet[3313]: I0117 00:09:37.113750 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2b5\" (UniqueName: \"kubernetes.io/projected/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-kube-api-access-rj2b5\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.114220 kubelet[3313]: I0117 00:09:37.113766 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-etc-cni-netd\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.114220 kubelet[3313]: I0117 00:09:37.113784 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-cilium-run\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.114220 kubelet[3313]: I0117 00:09:37.113801 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-bpf-maps\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.114220 kubelet[3313]: I0117 00:09:37.113816 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-cilium-cgroup\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.114350 kubelet[3313]: I0117 00:09:37.113831 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-host-proc-sys-net\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.114350 kubelet[3313]: I0117 00:09:37.113846 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-cilium-ipsec-secrets\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.114350 kubelet[3313]: I0117 00:09:37.113860 3313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a3b853e-15d5-42aa-b5d5-e8e06df993c7-host-proc-sys-kernel\") pod \"cilium-qmscj\" (UID: \"1a3b853e-15d5-42aa-b5d5-e8e06df993c7\") " pod="kube-system/cilium-qmscj" Jan 17 00:09:37.140405 systemd[1]: Started sshd@24-10.200.20.22:22-10.200.16.10:58476.service - OpenSSH per-connection server daemon (10.200.16.10:58476). Jan 17 00:09:37.326189 containerd[1828]: time="2026-01-17T00:09:37.325476736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qmscj,Uid:1a3b853e-15d5-42aa-b5d5-e8e06df993c7,Namespace:kube-system,Attempt:0,}" Jan 17 00:09:37.359027 containerd[1828]: time="2026-01-17T00:09:37.358914845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:09:37.359027 containerd[1828]: time="2026-01-17T00:09:37.358972285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:09:37.359027 containerd[1828]: time="2026-01-17T00:09:37.358987165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:09:37.359373 containerd[1828]: time="2026-01-17T00:09:37.359070525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:09:37.388156 containerd[1828]: time="2026-01-17T00:09:37.388111670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qmscj,Uid:1a3b853e-15d5-42aa-b5d5-e8e06df993c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b13e6fb42ec4314de204df12bdc09b0e53f6e72e9b51ae7f8ade2c69e7a32cff\"" Jan 17 00:09:37.392094 containerd[1828]: time="2026-01-17T00:09:37.391983594Z" level=info msg="CreateContainer within sandbox \"b13e6fb42ec4314de204df12bdc09b0e53f6e72e9b51ae7f8ade2c69e7a32cff\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:09:37.423617 containerd[1828]: time="2026-01-17T00:09:37.423570221Z" level=info msg="CreateContainer within sandbox \"b13e6fb42ec4314de204df12bdc09b0e53f6e72e9b51ae7f8ade2c69e7a32cff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7442157feadabfe844cf04394a2b323068505f920ce0147af042954fe204ae01\"" Jan 17 00:09:37.424711 containerd[1828]: time="2026-01-17T00:09:37.424642942Z" level=info msg="StartContainer for \"7442157feadabfe844cf04394a2b323068505f920ce0147af042954fe204ae01\"" Jan 17 00:09:37.474624 containerd[1828]: time="2026-01-17T00:09:37.474342224Z" level=info msg="StartContainer for \"7442157feadabfe844cf04394a2b323068505f920ce0147af042954fe204ae01\" returns successfully" Jan 17 00:09:37.540575 containerd[1828]: time="2026-01-17T00:09:37.540417001Z" level=info msg="shim disconnected" id=7442157feadabfe844cf04394a2b323068505f920ce0147af042954fe204ae01 namespace=k8s.io Jan 17 00:09:37.540575 containerd[1828]: time="2026-01-17T00:09:37.540471521Z" level=warning msg="cleaning up after shim disconnected" id=7442157feadabfe844cf04394a2b323068505f920ce0147af042954fe204ae01 namespace=k8s.io Jan 17 00:09:37.540575 containerd[1828]: time="2026-01-17T00:09:37.540480681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:09:37.621615 sshd[5085]: Accepted publickey for core from 10.200.16.10 port 58476 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:37.622923 sshd[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:37.626740 systemd-logind[1786]: New session 27 of user core. Jan 17 00:09:37.637789 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 17 00:09:37.694701 containerd[1828]: time="2026-01-17T00:09:37.694659134Z" level=info msg="CreateContainer within sandbox \"b13e6fb42ec4314de204df12bdc09b0e53f6e72e9b51ae7f8ade2c69e7a32cff\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:09:37.720220 containerd[1828]: time="2026-01-17T00:09:37.720179516Z" level=info msg="CreateContainer within sandbox \"b13e6fb42ec4314de204df12bdc09b0e53f6e72e9b51ae7f8ade2c69e7a32cff\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aa9d149ad451a6ffce53b119d754e2a903cd8880c45146955e20980dd4adcba8\"" Jan 17 00:09:37.720718 containerd[1828]: time="2026-01-17T00:09:37.720695196Z" level=info msg="StartContainer for \"aa9d149ad451a6ffce53b119d754e2a903cd8880c45146955e20980dd4adcba8\"" Jan 17 00:09:37.768235 containerd[1828]: time="2026-01-17T00:09:37.767831717Z" level=info msg="StartContainer for \"aa9d149ad451a6ffce53b119d754e2a903cd8880c45146955e20980dd4adcba8\" returns successfully" Jan 17 00:09:37.798086 containerd[1828]: time="2026-01-17T00:09:37.797880383Z" level=info msg="shim disconnected" id=aa9d149ad451a6ffce53b119d754e2a903cd8880c45146955e20980dd4adcba8 namespace=k8s.io Jan 17 00:09:37.798086 containerd[1828]: time="2026-01-17T00:09:37.798080503Z" level=warning msg="cleaning up after shim disconnected" id=aa9d149ad451a6ffce53b119d754e2a903cd8880c45146955e20980dd4adcba8 namespace=k8s.io Jan 17 00:09:37.798086 containerd[1828]: time="2026-01-17T00:09:37.798090423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:09:37.807602 containerd[1828]: time="2026-01-17T00:09:37.807556391Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:09:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:09:37.969903 sshd[5085]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:37.973588 systemd[1]: sshd@24-10.200.20.22:22-10.200.16.10:58476.service: Deactivated successfully. Jan 17 00:09:37.975670 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:09:37.977109 systemd-logind[1786]: Session 27 logged out. Waiting for processes to exit. Jan 17 00:09:37.978012 systemd-logind[1786]: Removed session 27. Jan 17 00:09:38.048347 systemd[1]: Started sshd@25-10.200.20.22:22-10.200.16.10:58486.service - OpenSSH per-connection server daemon (10.200.16.10:58486). Jan 17 00:09:38.494679 sshd[5262]: Accepted publickey for core from 10.200.16.10 port 58486 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:38.495977 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:38.500954 systemd-logind[1786]: New session 28 of user core. Jan 17 00:09:38.504040 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 00:09:38.696529 containerd[1828]: time="2026-01-17T00:09:38.696407955Z" level=info msg="CreateContainer within sandbox \"b13e6fb42ec4314de204df12bdc09b0e53f6e72e9b51ae7f8ade2c69e7a32cff\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:09:38.735762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3401056874.mount: Deactivated successfully. Jan 17 00:09:38.743150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316753550.mount: Deactivated successfully. 
Jan 17 00:09:38.752208 containerd[1828]: time="2026-01-17T00:09:38.749931401Z" level=info msg="CreateContainer within sandbox \"b13e6fb42ec4314de204df12bdc09b0e53f6e72e9b51ae7f8ade2c69e7a32cff\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dbedf40e8473867c870a8d2ba511d2e20cd2281482c184aeea96b380aee3d45d\"" Jan 17 00:09:38.753328 containerd[1828]: time="2026-01-17T00:09:38.753152004Z" level=info msg="StartContainer for \"dbedf40e8473867c870a8d2ba511d2e20cd2281482c184aeea96b380aee3d45d\"" Jan 17 00:09:38.837715 containerd[1828]: time="2026-01-17T00:09:38.837595277Z" level=info msg="StartContainer for \"dbedf40e8473867c870a8d2ba511d2e20cd2281482c184aeea96b380aee3d45d\" returns successfully" Jan 17 00:09:38.879298 containerd[1828]: time="2026-01-17T00:09:38.879242073Z" level=info msg="shim disconnected" id=dbedf40e8473867c870a8d2ba511d2e20cd2281482c184aeea96b380aee3d45d namespace=k8s.io Jan 17 00:09:38.879298 containerd[1828]: time="2026-01-17T00:09:38.879295193Z" level=warning msg="cleaning up after shim disconnected" id=dbedf40e8473867c870a8d2ba511d2e20cd2281482c184aeea96b380aee3d45d namespace=k8s.io Jan 17 00:09:38.879491 containerd[1828]: time="2026-01-17T00:09:38.879304033Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:09:39.220582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbedf40e8473867c870a8d2ba511d2e20cd2281482c184aeea96b380aee3d45d-rootfs.mount: Deactivated successfully. Jan 17 00:09:39.699271 containerd[1828]: time="2026-01-17T00:09:39.698837073Z" level=info msg="CreateContainer within sandbox \"b13e6fb42ec4314de204df12bdc09b0e53f6e72e9b51ae7f8ade2c69e7a32cff\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:09:39.736584 containerd[1828]: time="2026-01-17T00:09:39.736501707Z" level=info msg="CreateContainer within sandbox \"b13e6fb42ec4314de204df12bdc09b0e53f6e72e9b51ae7f8ade2c69e7a32cff\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fbae6abfd5337fe767fc467f9a8fec86ad40cc47deb21a70e5b4b5c208f24fc4\"" Jan 17 00:09:39.737018 containerd[1828]: time="2026-01-17T00:09:39.736934107Z" level=info msg="StartContainer for \"fbae6abfd5337fe767fc467f9a8fec86ad40cc47deb21a70e5b4b5c208f24fc4\"" Jan 17 00:09:39.785718 containerd[1828]: time="2026-01-17T00:09:39.785637591Z" level=info msg="StartContainer for \"fbae6abfd5337fe767fc467f9a8fec86ad40cc47deb21a70e5b4b5c208f24fc4\" returns successfully" Jan 17 00:09:39.801663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbae6abfd5337fe767fc467f9a8fec86ad40cc47deb21a70e5b4b5c208f24fc4-rootfs.mount: Deactivated successfully. 
Jan 17 00:09:39.813210 containerd[1828]: time="2026-01-17T00:09:39.812981776Z" level=info msg="shim disconnected" id=fbae6abfd5337fe767fc467f9a8fec86ad40cc47deb21a70e5b4b5c208f24fc4 namespace=k8s.io Jan 17 00:09:39.813210 containerd[1828]: time="2026-01-17T00:09:39.813062216Z" level=warning msg="cleaning up after shim disconnected" id=fbae6abfd5337fe767fc467f9a8fec86ad40cc47deb21a70e5b4b5c208f24fc4 namespace=k8s.io Jan 17 00:09:39.813210 containerd[1828]: time="2026-01-17T00:09:39.813071056Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:09:40.704199 containerd[1828]: time="2026-01-17T00:09:40.704046457Z" level=info msg="CreateContainer within sandbox \"b13e6fb42ec4314de204df12bdc09b0e53f6e72e9b51ae7f8ade2c69e7a32cff\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:09:40.736820 containerd[1828]: time="2026-01-17T00:09:40.736713327Z" level=info msg="CreateContainer within sandbox \"b13e6fb42ec4314de204df12bdc09b0e53f6e72e9b51ae7f8ade2c69e7a32cff\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d19d055e5a80196cb91e0e1955fcd999d7c91b7927e8d46930773aaba75bcb64\"" Jan 17 00:09:40.737675 containerd[1828]: time="2026-01-17T00:09:40.737085447Z" level=info msg="StartContainer for \"d19d055e5a80196cb91e0e1955fcd999d7c91b7927e8d46930773aaba75bcb64\"" Jan 17 00:09:40.791209 containerd[1828]: time="2026-01-17T00:09:40.791171856Z" level=info msg="StartContainer for \"d19d055e5a80196cb91e0e1955fcd999d7c91b7927e8d46930773aaba75bcb64\" returns successfully" Jan 17 00:09:41.314184 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 17 00:09:41.725794 kubelet[3313]: I0117 00:09:41.724947 3313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qmscj" podStartSLOduration=5.724931296 podStartE2EDuration="5.724931296s" podCreationTimestamp="2026-01-17 00:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:09:41.724185456 +0000 UTC m=+160.530564190" watchObservedRunningTime="2026-01-17 00:09:41.724931296 +0000 UTC m=+160.531310030" Jan 17 00:09:43.077946 systemd[1]: run-containerd-runc-k8s.io-d19d055e5a80196cb91e0e1955fcd999d7c91b7927e8d46930773aaba75bcb64-runc.kYJhkh.mount: Deactivated successfully. Jan 17 00:09:43.998221 systemd-networkd[1403]: lxc_health: Link UP Jan 17 00:09:44.005355 systemd-networkd[1403]: lxc_health: Gained carrier Jan 17 00:09:45.220324 systemd-networkd[1403]: lxc_health: Gained IPv6LL Jan 17 00:09:49.638340 sshd[5262]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:49.640949 systemd[1]: sshd@25-10.200.20.22:22-10.200.16.10:58486.service: Deactivated successfully. Jan 17 00:09:49.643807 systemd-logind[1786]: Session 28 logged out. Waiting for processes to exit. Jan 17 00:09:49.645018 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 00:09:49.646965 systemd-logind[1786]: Removed session 28.