Jan 13 23:36:36.544313 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jan 13 23:36:36.544346 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 13 21:43:11 -00 2026 Jan 13 23:36:36.544353 kernel: KASLR enabled Jan 13 23:36:36.544358 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 13 23:36:36.544363 kernel: printk: legacy bootconsole [pl11] enabled Jan 13 23:36:36.544367 kernel: efi: EFI v2.7 by EDK II Jan 13 23:36:36.544372 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598 Jan 13 23:36:36.544377 kernel: random: crng init done Jan 13 23:36:36.544381 kernel: secureboot: Secure boot disabled Jan 13 23:36:36.544385 kernel: ACPI: Early table checksum verification disabled Jan 13 23:36:36.544389 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Jan 13 23:36:36.544393 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 13 23:36:36.544398 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 13 23:36:36.544403 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 13 23:36:36.544408 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 13 23:36:36.544413 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 13 23:36:36.544417 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 13 23:36:36.544423 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 13 23:36:36.544427 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 13 23:36:36.544432 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 13 23:36:36.544436 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 13 23:36:36.544440 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 13 23:36:36.544445 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 13 23:36:36.544449 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 13 23:36:36.544453 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 13 23:36:36.544458 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jan 13 23:36:36.544462 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jan 13 23:36:36.544468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 13 23:36:36.544472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 13 23:36:36.544477 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 13 23:36:36.544481 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 13 23:36:36.544485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 13 23:36:36.544490 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 13 23:36:36.544494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 13 23:36:36.544498 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 13 23:36:36.544503 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 13 23:36:36.544507 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jan 13 23:36:36.544512 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Jan 13 23:36:36.544517 kernel: Zone ranges: Jan 13 23:36:36.544522 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 13 23:36:36.544529 kernel: DMA32 empty Jan 13 23:36:36.544533 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 13 23:36:36.544538 kernel: Device empty Jan 13 23:36:36.544544 kernel: Movable zone start for each node Jan 13 23:36:36.544548 kernel: Early memory node ranges Jan 13 23:36:36.544553 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 13 23:36:36.544557 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Jan 13 23:36:36.544562 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Jan 13 23:36:36.544567 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Jan 13 23:36:36.544572 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Jan 13 23:36:36.544576 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Jan 13 23:36:36.544581 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 13 23:36:36.544586 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 13 23:36:36.544591 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 13 23:36:36.544596 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Jan 13 23:36:36.544600 kernel: psci: probing for conduit method from ACPI. Jan 13 23:36:36.544605 kernel: psci: PSCIv1.3 detected in firmware. Jan 13 23:36:36.544610 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 23:36:36.544614 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 13 23:36:36.544619 kernel: psci: SMC Calling Convention v1.4 Jan 13 23:36:36.544624 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 13 23:36:36.544628 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 13 23:36:36.544633 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 13 23:36:36.544638 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 13 23:36:36.544643 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 13 23:36:36.544648 kernel: Detected PIPT I-cache on CPU0 Jan 13 23:36:36.544653 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jan 13 23:36:36.544657 kernel: CPU features: detected: GIC system register CPU interface Jan 13 23:36:36.544662 kernel: CPU features: detected: Spectre-v4 Jan 13 23:36:36.544667 kernel: CPU features: detected: Spectre-BHB Jan 13 23:36:36.544671 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 13 23:36:36.544676 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 13 23:36:36.544681 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jan 13 23:36:36.544686 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 13 23:36:36.544691 kernel: alternatives: applying boot alternatives Jan 13 23:36:36.544697 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a2e92265a189403c21ae2a2ae9e6d4fed0782e0e430fbcb369a7bb0db156274f Jan 13 23:36:36.544702 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 23:36:36.544707 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 23:36:36.544711 kernel: Fallback order for Node 0: 0 Jan 13 23:36:36.544716 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jan 13 23:36:36.544721 kernel: Policy zone: Normal Jan 13 23:36:36.544725 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 23:36:36.544730 kernel: software IO TLB: area num 2. Jan 13 23:36:36.544735 kernel: software IO TLB: mapped [mem 0x0000000037360000-0x000000003b360000] (64MB) Jan 13 23:36:36.544739 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 23:36:36.544745 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 23:36:36.544751 kernel: rcu: RCU event tracing is enabled. Jan 13 23:36:36.544755 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 23:36:36.544760 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 23:36:36.544765 kernel: Tracing variant of Tasks RCU enabled. Jan 13 23:36:36.544769 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 23:36:36.544774 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 23:36:36.544779 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 23:36:36.544784 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 13 23:36:36.544788 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 23:36:36.544793 kernel: GICv3: 960 SPIs implemented Jan 13 23:36:36.544799 kernel: GICv3: 0 Extended SPIs implemented Jan 13 23:36:36.544803 kernel: Root IRQ handler: gic_handle_irq Jan 13 23:36:36.544808 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 13 23:36:36.544813 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jan 13 23:36:36.544817 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 13 23:36:36.544822 kernel: ITS: No ITS available, not enabling LPIs Jan 13 23:36:36.544827 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 23:36:36.544832 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jan 13 23:36:36.544837 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 23:36:36.544842 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jan 13 23:36:36.544846 kernel: Console: colour dummy device 80x25 Jan 13 23:36:36.544852 kernel: printk: legacy console [tty1] enabled Jan 13 23:36:36.544857 kernel: ACPI: Core revision 20240827 Jan 13 23:36:36.544862 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jan 13 23:36:36.544867 kernel: pid_max: default: 32768 minimum: 301 Jan 13 23:36:36.544872 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 13 23:36:36.544877 kernel: landlock: Up and running. Jan 13 23:36:36.544882 kernel: SELinux: Initializing. Jan 13 23:36:36.544888 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 23:36:36.544893 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 23:36:36.544898 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Jan 13 23:36:36.544903 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Jan 13 23:36:36.544912 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 13 23:36:36.544918 kernel: rcu: Hierarchical SRCU implementation. Jan 13 23:36:36.544923 kernel: rcu: Max phase no-delay instances is 400. Jan 13 23:36:36.544929 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 13 23:36:36.544934 kernel: Remapping and enabling EFI services. Jan 13 23:36:36.544940 kernel: smp: Bringing up secondary CPUs ... Jan 13 23:36:36.544945 kernel: Detected PIPT I-cache on CPU1 Jan 13 23:36:36.544950 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 13 23:36:36.544955 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jan 13 23:36:36.544961 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 23:36:36.544966 kernel: SMP: Total of 2 processors activated. 
Jan 13 23:36:36.544972 kernel: CPU: All CPU(s) started at EL1 Jan 13 23:36:36.544977 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 23:36:36.544982 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 13 23:36:36.544988 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 13 23:36:36.544993 kernel: CPU features: detected: Common not Private translations Jan 13 23:36:36.544999 kernel: CPU features: detected: CRC32 instructions Jan 13 23:36:36.545004 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jan 13 23:36:36.545009 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 13 23:36:36.545015 kernel: CPU features: detected: LSE atomic instructions Jan 13 23:36:36.545020 kernel: CPU features: detected: Privileged Access Never Jan 13 23:36:36.545025 kernel: CPU features: detected: Speculation barrier (SB) Jan 13 23:36:36.545030 kernel: CPU features: detected: TLB range maintenance instructions Jan 13 23:36:36.545036 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 13 23:36:36.545041 kernel: CPU features: detected: Scalable Vector Extension Jan 13 23:36:36.545046 kernel: alternatives: applying system-wide alternatives Jan 13 23:36:36.545052 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 13 23:36:36.545057 kernel: SVE: maximum available vector length 16 bytes per vector Jan 13 23:36:36.545062 kernel: SVE: default vector length 16 bytes per vector Jan 13 23:36:36.545067 kernel: Memory: 3979836K/4194160K available (11200K kernel code, 2458K rwdata, 9092K rodata, 12480K init, 1038K bss, 193136K reserved, 16384K cma-reserved) Jan 13 23:36:36.545074 kernel: devtmpfs: initialized Jan 13 23:36:36.545079 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 23:36:36.545084 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 23:36:36.545089 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 13 23:36:36.545094 kernel: 0 pages in range for non-PLT usage Jan 13 23:36:36.545100 kernel: 515152 pages in range for PLT usage Jan 13 23:36:36.545105 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 23:36:36.545111 kernel: SMBIOS 3.1.0 present. Jan 13 23:36:36.545116 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 13 23:36:36.545121 kernel: DMI: Memory slots populated: 2/2 Jan 13 23:36:36.545126 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 23:36:36.545132 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 23:36:36.545137 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 23:36:36.545142 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 23:36:36.545147 kernel: audit: initializing netlink subsys (disabled) Jan 13 23:36:36.545154 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jan 13 23:36:36.545159 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 23:36:36.545164 kernel: cpuidle: using governor menu Jan 13 23:36:36.545169 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 13 23:36:36.545174 kernel: ASID allocator initialised with 32768 entries Jan 13 23:36:36.545179 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 23:36:36.545185 kernel: Serial: AMBA PL011 UART driver Jan 13 23:36:36.545190 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 23:36:36.545196 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 23:36:36.545201 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 23:36:36.545206 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 23:36:36.545211 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 23:36:36.545217 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 23:36:36.545222 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 23:36:36.545228 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 23:36:36.545233 kernel: ACPI: Added _OSI(Module Device) Jan 13 23:36:36.545238 kernel: ACPI: Added _OSI(Processor Device) Jan 13 23:36:36.545244 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 23:36:36.545249 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 23:36:36.545254 kernel: ACPI: Interpreter enabled Jan 13 23:36:36.545259 kernel: ACPI: Using GIC for interrupt routing Jan 13 23:36:36.545265 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 13 23:36:36.545270 kernel: printk: legacy console [ttyAMA0] enabled Jan 13 23:36:36.545275 kernel: printk: legacy bootconsole [pl11] disabled Jan 13 23:36:36.545280 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 13 23:36:36.545286 kernel: ACPI: CPU0 has been hot-added Jan 13 23:36:36.545291 kernel: ACPI: CPU1 has been hot-added Jan 13 23:36:36.545296 kernel: iommu: Default domain type: Translated Jan 13 23:36:36.545302 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 23:36:36.545307 kernel: efivars: Registered efivars operations Jan 13 23:36:36.545312 kernel: vgaarb: loaded Jan 13 23:36:36.545318 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 23:36:36.545323 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 23:36:36.545328 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 23:36:36.545338 kernel: pnp: PnP ACPI init Jan 13 23:36:36.545344 kernel: pnp: PnP ACPI: found 0 devices Jan 13 23:36:36.545350 kernel: NET: Registered PF_INET protocol family Jan 13 23:36:36.545355 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 23:36:36.545360 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 23:36:36.545365 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 23:36:36.545371 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 23:36:36.545376 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 23:36:36.545382 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 23:36:36.545387 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 23:36:36.545393 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 23:36:36.545398 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 23:36:36.545403 kernel: PCI: CLS 0 bytes, default 64 Jan 13 23:36:36.545408 kernel: kvm [1]: HYP mode not available Jan 
13 23:36:36.545413 kernel: Initialise system trusted keyrings Jan 13 23:36:36.545418 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 23:36:36.545425 kernel: Key type asymmetric registered Jan 13 23:36:36.545430 kernel: Asymmetric key parser 'x509' registered Jan 13 23:36:36.545435 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 13 23:36:36.545440 kernel: io scheduler mq-deadline registered Jan 13 23:36:36.545445 kernel: io scheduler kyber registered Jan 13 23:36:36.545450 kernel: io scheduler bfq registered Jan 13 23:36:36.545455 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 23:36:36.545461 kernel: thunder_xcv, ver 1.0 Jan 13 23:36:36.545466 kernel: thunder_bgx, ver 1.0 Jan 13 23:36:36.545471 kernel: nicpf, ver 1.0 Jan 13 23:36:36.545476 kernel: nicvf, ver 1.0 Jan 13 23:36:36.545629 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 23:36:36.545700 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-13T23:36:32 UTC (1768347392) Jan 13 23:36:36.545708 kernel: efifb: probing for efifb Jan 13 23:36:36.545714 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 13 23:36:36.545719 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 13 23:36:36.545724 kernel: efifb: scrolling: redraw Jan 13 23:36:36.545729 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 13 23:36:36.545735 kernel: Console: switching to colour frame buffer device 128x48 Jan 13 23:36:36.545740 kernel: fb0: EFI VGA frame buffer device Jan 13 23:36:36.545746 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 13 23:36:36.545751 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 23:36:36.545757 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 13 23:36:36.545762 kernel: watchdog: NMI not fully supported Jan 13 23:36:36.545767 kernel: NET: Registered PF_INET6 protocol family Jan 13 23:36:36.545772 kernel: watchdog: Hard watchdog permanently disabled Jan 13 23:36:36.545777 kernel: Segment Routing with IPv6 Jan 13 23:36:36.545784 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 23:36:36.545789 kernel: NET: Registered PF_PACKET protocol family Jan 13 23:36:36.545795 kernel: Key type dns_resolver registered Jan 13 23:36:36.545800 kernel: registered taskstats version 1 Jan 13 23:36:36.545805 kernel: Loading compiled-in X.509 certificates Jan 13 23:36:36.545810 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 61f104a5e4017e43c6bf0c9744e6a522053d7383' Jan 13 23:36:36.545815 kernel: Demotion targets for Node 0: null Jan 13 23:36:36.545822 kernel: Key type .fscrypt registered Jan 13 23:36:36.545827 kernel: Key type fscrypt-provisioning registered Jan 13 23:36:36.545832 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 23:36:36.545837 kernel: ima: Allocated hash algorithm: sha1 Jan 13 23:36:36.545842 kernel: ima: No architecture policies found Jan 13 23:36:36.545847 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 23:36:36.545852 kernel: clk: Disabling unused clocks Jan 13 23:36:36.545858 kernel: PM: genpd: Disabling unused power domains Jan 13 23:36:36.545864 kernel: Freeing unused kernel memory: 12480K Jan 13 23:36:36.545869 kernel: Run /init as init process Jan 13 23:36:36.545874 kernel: with arguments: Jan 13 23:36:36.545879 kernel: /init Jan 13 23:36:36.545884 kernel: with environment: Jan 13 23:36:36.545889 kernel: HOME=/ Jan 13 23:36:36.545894 kernel: TERM=linux Jan 13 23:36:36.545901 kernel: hv_vmbus: Vmbus version:5.3 Jan 13 23:36:36.545906 kernel: SCSI subsystem initialized Jan 13 23:36:36.545911 kernel: hv_vmbus: registering driver hid_hyperv Jan 13 23:36:36.545917 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 13 23:36:36.546004 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 13 23:36:36.546011 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 13 23:36:36.546019 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 13 23:36:36.546025 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 13 23:36:36.546030 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 13 23:36:36.546035 kernel: PTP clock support registered Jan 13 23:36:36.546041 kernel: hv_utils: Registering HyperV Utility Driver Jan 13 23:36:36.546046 kernel: hv_vmbus: registering driver hv_utils Jan 13 23:36:36.546051 kernel: hv_utils: Heartbeat IC version 3.0 Jan 13 23:36:36.546057 kernel: hv_utils: Shutdown IC version 3.2 Jan 13 23:36:36.546062 kernel: hv_utils: TimeSync IC version 4.0 Jan 13 23:36:36.546067 kernel: hv_vmbus: registering driver hv_storvsc Jan 13 23:36:36.546164 kernel: scsi host0: storvsc_host_t Jan 13 23:36:36.546249 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 13 23:36:36.546586 kernel: scsi host1: storvsc_host_t Jan 13 23:36:36.546738 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 13 23:36:36.546821 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 13 23:36:36.546895 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 13 23:36:36.546968 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 13 23:36:36.547040 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 13 23:36:36.547112 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 13 23:36:36.547197 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 13 23:36:36.547265 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 13 23:36:36.547272 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 23:36:36.547360 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 13 23:36:36.547434 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 13 23:36:36.547443 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 23:36:36.547514 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 13 23:36:36.547521 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 23:36:36.547526 kernel: device-mapper: uevent: version 1.0.3 Jan 13 23:36:36.547531 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 13 23:36:36.547536 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 13 23:36:36.547560 kernel: raid6: neonx8 gen() 18555 MB/s Jan 13 23:36:36.547568 kernel: raid6: neonx4 gen() 18581 MB/s Jan 13 23:36:36.547574 kernel: raid6: neonx2 gen() 17081 MB/s Jan 13 23:36:36.547579 kernel: raid6: neonx1 gen() 15067 MB/s Jan 13 23:36:36.547584 kernel: raid6: int64x8 gen() 10558 MB/s Jan 13 23:36:36.547589 kernel: raid6: int64x4 gen() 10624 MB/s Jan 13 23:36:36.547594 kernel: raid6: int64x2 gen() 8979 MB/s Jan 13 23:36:36.547600 kernel: raid6: int64x1 gen() 7018 MB/s Jan 13 23:36:36.547606 kernel: raid6: using algorithm neonx4 gen() 18581 MB/s Jan 13 23:36:36.547612 kernel: raid6: .... xor() 15137 MB/s, rmw enabled Jan 13 23:36:36.547617 kernel: raid6: using neon recovery algorithm Jan 13 23:36:36.547622 kernel: xor: measuring software checksum speed Jan 13 23:36:36.547627 kernel: 8regs : 28619 MB/sec Jan 13 23:36:36.547632 kernel: 32regs : 28783 MB/sec Jan 13 23:36:36.547638 kernel: arm64_neon : 37157 MB/sec Jan 13 23:36:36.547643 kernel: xor: using function: arm64_neon (37157 MB/sec) Jan 13 23:36:36.547649 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 23:36:36.547655 kernel: BTRFS: device fsid 96ce121f-260d-446f-a0e2-a59fdf56d58c devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (363) Jan 13 23:36:36.547660 kernel: BTRFS info (device dm-0): first mount of filesystem 96ce121f-260d-446f-a0e2-a59fdf56d58c Jan 13 23:36:36.547666 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:36:36.547671 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 23:36:36.547676 kernel: BTRFS info (device dm-0): enabling free space tree Jan 13 23:36:36.547682 kernel: loop: module loaded Jan 13 23:36:36.547688 kernel: loop0: detected capacity change from 0 to 91840 Jan 13 23:36:36.547693 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 23:36:36.547699 systemd[1]: Successfully made /usr/ read-only. Jan 13 23:36:36.547707 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 13 23:36:36.547713 systemd[1]: Detected virtualization microsoft. Jan 13 23:36:36.547720 systemd[1]: Detected architecture arm64. Jan 13 23:36:36.547725 systemd[1]: Running in initrd. Jan 13 23:36:36.547730 systemd[1]: No hostname configured, using default hostname. Jan 13 23:36:36.547736 systemd[1]: Hostname set to . Jan 13 23:36:36.547742 systemd[1]: Initializing machine ID from random generator. Jan 13 23:36:36.547747 systemd[1]: Queued start job for default target initrd.target. Jan 13 23:36:36.547753 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 13 23:36:36.547760 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:36:36.547766 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 23:36:36.547772 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 23:36:36.547778 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 23:36:36.547784 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 23:36:36.547790 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 23:36:36.547797 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:36:36.547803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:36:36.547809 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 13 23:36:36.547814 systemd[1]: Reached target paths.target - Path Units. Jan 13 23:36:36.547820 systemd[1]: Reached target slices.target - Slice Units. Jan 13 23:36:36.547826 systemd[1]: Reached target swap.target - Swaps. Jan 13 23:36:36.547833 systemd[1]: Reached target timers.target - Timer Units. Jan 13 23:36:36.547838 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 23:36:36.547844 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 23:36:36.547850 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:36:36.547855 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 23:36:36.547861 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 13 23:36:36.547867 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:36:36.547878 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 23:36:36.547885 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:36:36.547891 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 23:36:36.547897 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 23:36:36.547903 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 23:36:36.547910 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 23:36:36.547916 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 23:36:36.547922 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 13 23:36:36.547928 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 23:36:36.547934 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 23:36:36.547940 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 23:36:36.547947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:36:36.547953 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 23:36:36.547959 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:36:36.547984 systemd-journald[501]: Collecting audit messages is enabled. Jan 13 23:36:36.547999 kernel: audit: type=1130 audit(1768347396.531:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:36:36.548005 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 23:36:36.548012 systemd-journald[501]: Journal started Jan 13 23:36:36.548028 systemd-journald[501]: Runtime Journal (/run/log/journal/4520e8595f84425caffbd57e14a40af8) is 8M, max 78.3M, 70.3M free. Jan 13 23:36:36.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.562342 kernel: audit: type=1130 audit(1768347396.549:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.562362 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 23:36:36.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.587911 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 23:36:36.598622 kernel: audit: type=1130 audit(1768347396.570:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.598643 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 23:36:36.607002 systemd-modules-load[504]: Inserted module 'br_netfilter' Jan 13 23:36:36.610763 kernel: Bridge firewalling registered Jan 13 23:36:36.612182 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 23:36:36.630596 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 23:36:36.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.648741 systemd-tmpfiles[515]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 13 23:36:36.658710 kernel: audit: type=1130 audit(1768347396.639:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.658876 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:36:36.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.681567 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:36:36.703165 kernel: audit: type=1130 audit(1768347396.663:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:36:36.703190 kernel: audit: type=1130 audit(1768347396.686:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.704107 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:36:36.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.716489 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 23:36:36.730445 kernel: audit: type=1130 audit(1768347396.711:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.735936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 23:36:36.740989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 23:36:36.767321 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:36:36.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.791078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:36:36.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.811522 kernel: audit: type=1130 audit(1768347396.773:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.811582 kernel: audit: type=1130 audit(1768347396.796:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.812274 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 23:36:36.798000 audit: BPF prog-id=6 op=LOAD Jan 13 23:36:36.820888 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 23:36:36.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:36.829480 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 13 23:36:36.858684 dracut-cmdline[544]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a2e92265a189403c21ae2a2ae9e6d4fed0782e0e430fbcb369a7bb0db156274f Jan 13 23:36:36.936050 systemd-resolved[539]: Positive Trust Anchors: Jan 13 23:36:36.936063 systemd-resolved[539]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 23:36:36.936065 systemd-resolved[539]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 13 23:36:36.936085 systemd-resolved[539]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 23:36:36.993328 systemd-resolved[539]: Defaulting to hostname 'linux'. Jan 13 23:36:36.994203 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 23:36:37.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.007674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:36:37.088362 kernel: Loading iSCSI transport class v2.0-870. Jan 13 23:36:37.128373 kernel: iscsi: registered transport (tcp) Jan 13 23:36:37.160795 kernel: iscsi: registered transport (qla4xxx) Jan 13 23:36:37.160820 kernel: QLogic iSCSI HBA Driver Jan 13 23:36:37.210062 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 23:36:37.232471 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:36:37.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.244264 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 23:36:37.290157 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 23:36:37.302593 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 13 23:36:37.302615 kernel: audit: type=1130 audit(1768347397.293:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.295528 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 13 23:36:37.324299 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 23:36:37.351518 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 23:36:37.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.371000 audit: BPF prog-id=7 op=LOAD Jan 13 23:36:37.373584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:36:37.390390 kernel: audit: type=1130 audit(1768347397.360:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.390416 kernel: audit: type=1334 audit(1768347397.371:17): prog-id=7 op=LOAD Jan 13 23:36:37.390424 kernel: audit: type=1334 audit(1768347397.371:18): prog-id=8 op=LOAD Jan 13 23:36:37.371000 audit: BPF prog-id=8 op=LOAD Jan 13 23:36:37.458170 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 23:36:37.472201 systemd-udevd[775]: Using default interface naming scheme 'v257'. Jan 13 23:36:37.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.491343 kernel: audit: type=1130 audit(1768347397.472:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.494405 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:36:37.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.518348 kernel: audit: type=1130 audit(1768347397.505:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.519487 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 23:36:37.532000 audit: BPF prog-id=9 op=LOAD Jan 13 23:36:37.539718 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 23:36:37.549757 kernel: audit: type=1334 audit(1768347397.532:21): prog-id=9 op=LOAD Jan 13 23:36:37.552566 dracut-pre-trigger[884]: rd.md=0: removing MD RAID activation Jan 13 23:36:37.579560 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 23:36:37.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.602454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 23:36:37.615289 kernel: audit: type=1130 audit(1768347397.584:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:36:37.612301 systemd-networkd[885]: lo: Link UP Jan 13 23:36:37.612306 systemd-networkd[885]: lo: Gained carrier Jan 13 23:36:37.621296 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 23:36:37.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.630326 systemd[1]: Reached target network.target - Network. Jan 13 23:36:37.648409 kernel: audit: type=1130 audit(1768347397.629:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.669711 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:36:37.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.677226 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 23:36:37.708014 kernel: audit: type=1130 audit(1768347397.674:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.754355 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#177 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 13 23:36:37.798361 kernel: hv_vmbus: registering driver hv_netvsc Jan 13 23:36:37.842963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 23:36:37.844485 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:36:37.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.852183 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:36:37.861327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:36:37.888619 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:36:37.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:37.911343 kernel: hv_netvsc 002248bb-d71a-0022-48bb-d71a002248bb eth0: VF slot 1 added Jan 13 23:36:37.931743 systemd-networkd[885]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:36:37.931756 systemd-networkd[885]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 13 23:36:37.940558 systemd-networkd[885]: eth0: Link UP Jan 13 23:36:37.940699 systemd-networkd[885]: eth0: Gained carrier Jan 13 23:36:37.959767 kernel: hv_vmbus: registering driver hv_pci Jan 13 23:36:37.959788 kernel: hv_pci f8133990-aeba-4f4a-9b46-8e771c99af80: PCI VMBus probing: Using version 0x10004 Jan 13 23:36:37.940714 systemd-networkd[885]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:36:37.994394 systemd-networkd[885]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 13 23:36:38.010137 kernel: hv_pci f8133990-aeba-4f4a-9b46-8e771c99af80: PCI host bridge to bus aeba:00 Jan 13 23:36:38.010419 kernel: pci_bus aeba:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 13 23:36:38.010573 kernel: pci_bus aeba:00: No busn resource found for root bus, will use [bus 00-ff] Jan 13 23:36:38.150510 kernel: pci aeba:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 13 23:36:38.156406 kernel: pci aeba:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 13 23:36:38.160409 kernel: pci aeba:00:02.0: enabling Extended Tags Jan 13 23:36:38.175432 kernel: pci aeba:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at aeba:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 13 23:36:38.184719 kernel: pci_bus aeba:00: busn_res: [bus 00-ff] end is updated to 00 Jan 13 23:36:38.184936 kernel: pci aeba:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 13 23:36:38.240079 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 13 23:36:38.247161 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 23:36:38.279670 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 13 23:36:38.304743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 13 23:36:38.327690 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 13 23:36:38.403256 kernel: mlx5_core aeba:00:02.0: enabling device (0000 -> 0002) Jan 13 23:36:38.411952 kernel: mlx5_core aeba:00:02.0: PTM is not supported by PCIe Jan 13 23:36:38.412155 kernel: mlx5_core aeba:00:02.0: firmware version: 16.30.5026 Jan 13 23:36:38.636147 kernel: hv_netvsc 002248bb-d71a-0022-48bb-d71a002248bb eth0: VF registering: eth1 Jan 13 23:36:38.638409 kernel: mlx5_core aeba:00:02.0 eth1: joined to eth0 Jan 13 23:36:38.657133 kernel: mlx5_core aeba:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 13 23:36:38.663514 systemd-networkd[885]: eth1: Interface name change detected, renamed to enP44730s1. Jan 13 23:36:38.670413 kernel: mlx5_core aeba:00:02.0 enP44730s1: renamed from eth1 Jan 13 23:36:38.695401 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 23:36:38.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:38.700350 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 23:36:38.709474 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:36:38.718967 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jan 13 23:36:38.728705 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 23:36:38.810827 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 23:36:38.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:38.834351 kernel: mlx5_core aeba:00:02.0 enP44730s1: Link up Jan 13 23:36:38.866745 systemd-networkd[885]: enP44730s1: Link UP Jan 13 23:36:38.869942 kernel: hv_netvsc 002248bb-d71a-0022-48bb-d71a002248bb eth0: Data path switched to VF: enP44730s1 Jan 13 23:36:38.937460 systemd-networkd[885]: enP44730s1: Gained carrier Jan 13 23:36:39.588011 disk-uuid[1010]: Warning: The kernel is still using the old partition table. Jan 13 23:36:39.588011 disk-uuid[1010]: The new table will be used at the next reboot or after you Jan 13 23:36:39.588011 disk-uuid[1010]: run partprobe(8) or kpartx(8) Jan 13 23:36:39.588011 disk-uuid[1010]: The operation has completed successfully. Jan 13 23:36:39.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:39.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:39.598117 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 23:36:39.598444 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 23:36:39.607176 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 23:36:39.627296 systemd-networkd[885]: eth0: Gained IPv6LL Jan 13 23:36:39.672482 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1155) Jan 13 23:36:39.672537 kernel: BTRFS info (device sda6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:36:39.677164 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:36:39.702809 kernel: BTRFS info (device sda6): turning on async discard Jan 13 23:36:39.702847 kernel: BTRFS info (device sda6): enabling free space tree Jan 13 23:36:39.713360 kernel: BTRFS info (device sda6): last unmount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:36:39.713990 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 23:36:39.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:39.720485 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 23:36:40.896235 ignition[1174]: Ignition 2.24.0 Jan 13 23:36:40.896249 ignition[1174]: Stage: fetch-offline Jan 13 23:36:40.900417 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 23:36:40.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:40.897203 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:36:40.908695 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 13 23:36:40.897216 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 13 23:36:40.897315 ignition[1174]: parsed url from cmdline: "" Jan 13 23:36:40.897317 ignition[1174]: no config URL provided Jan 13 23:36:40.897393 ignition[1174]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 23:36:40.897401 ignition[1174]: no config at "/usr/lib/ignition/user.ign" Jan 13 23:36:40.897405 ignition[1174]: failed to fetch config: resource requires networking Jan 13 23:36:40.897963 ignition[1174]: Ignition finished successfully Jan 13 23:36:40.942753 ignition[1180]: Ignition 2.24.0 Jan 13 23:36:40.942758 ignition[1180]: Stage: fetch Jan 13 23:36:40.942981 ignition[1180]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:36:40.942987 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 13 23:36:40.943064 ignition[1180]: parsed url from cmdline: "" Jan 13 23:36:40.943067 ignition[1180]: no config URL provided Jan 13 23:36:40.943071 ignition[1180]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 23:36:40.943076 ignition[1180]: no config at "/usr/lib/ignition/user.ign" Jan 13 23:36:40.943091 ignition[1180]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 13 23:36:41.020159 ignition[1180]: GET result: OK Jan 13 23:36:41.020220 ignition[1180]: config has been read from IMDS userdata Jan 13 23:36:41.020234 ignition[1180]: parsing config with SHA512: 2f8f7e2ee0371adff873081f4863988a74bd17a9ec9a046d8715edb5ccc686f5ef574133ef805596f1c309c59fb8f630d21bc858b362bde8f25b0030feb59743 Jan 13 23:36:41.026680 unknown[1180]: fetched base config from "system" Jan 13 23:36:41.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:41.026905 ignition[1180]: fetch: fetch complete Jan 13 23:36:41.026686 unknown[1180]: fetched base config from "system" Jan 13 23:36:41.026910 ignition[1180]: fetch: fetch passed Jan 13 23:36:41.026690 unknown[1180]: fetched user config from "azure" Jan 13 23:36:41.026940 ignition[1180]: Ignition finished successfully Jan 13 23:36:41.028620 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 23:36:41.034217 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 23:36:41.062948 ignition[1186]: Ignition 2.24.0 Jan 13 23:36:41.068614 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 23:36:41.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:41.062953 ignition[1186]: Stage: kargs Jan 13 23:36:41.076274 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 23:36:41.063139 ignition[1186]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:36:41.063145 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 13 23:36:41.063628 ignition[1186]: kargs: kargs passed Jan 13 23:36:41.063668 ignition[1186]: Ignition finished successfully Jan 13 23:36:41.109203 ignition[1193]: Ignition 2.24.0 Jan 13 23:36:41.109221 ignition[1193]: Stage: disks Jan 13 23:36:41.113443 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 13 23:36:41.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:41.109460 ignition[1193]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:36:41.120557 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 23:36:41.109467 ignition[1193]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 13 23:36:41.129111 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 23:36:41.110104 ignition[1193]: disks: disks passed Jan 13 23:36:41.138629 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 23:36:41.110150 ignition[1193]: Ignition finished successfully Jan 13 23:36:41.147680 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 23:36:41.156365 systemd[1]: Reached target basic.target - Basic System. Jan 13 23:36:41.166230 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 23:36:41.293298 systemd-fsck[1201]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 13 23:36:41.302113 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 23:36:41.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:41.310448 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 23:36:41.624350 kernel: EXT4-fs (sda9): mounted filesystem b1eb7e1a-01a1-41b0-9b3c-5a37b4853d4d r/w with ordered data mode. Quota mode: none. Jan 13 23:36:41.624661 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 23:36:41.628740 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 23:36:41.669894 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 23:36:41.683921 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 23:36:41.692169 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 13 23:36:41.703528 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 23:36:41.703565 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 23:36:41.710294 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 23:36:41.736123 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 23:36:41.755085 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1215) Jan 13 23:36:41.755104 kernel: BTRFS info (device sda6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:36:41.760570 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:36:41.770936 kernel: BTRFS info (device sda6): turning on async discard Jan 13 23:36:41.771082 kernel: BTRFS info (device sda6): enabling free space tree Jan 13 23:36:41.772161 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 23:36:42.265168 coreos-metadata[1217]: Jan 13 23:36:42.264 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 13 23:36:42.271437 coreos-metadata[1217]: Jan 13 23:36:42.271 INFO Fetch successful Jan 13 23:36:42.275172 coreos-metadata[1217]: Jan 13 23:36:42.271 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 13 23:36:42.283617 coreos-metadata[1217]: Jan 13 23:36:42.283 INFO Fetch successful Jan 13 23:36:42.298401 coreos-metadata[1217]: Jan 13 23:36:42.298 INFO wrote hostname ci-4578.0.0-p-c34b1ae5c8 to /sysroot/etc/hostname Jan 13 23:36:42.306197 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 23:36:42.331889 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 13 23:36:42.331916 kernel: audit: type=1130 audit(1768347402.310:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:42.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:43.616999 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 23:36:43.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:43.622874 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 23:36:43.646242 kernel: audit: type=1130 audit(1768347403.621:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:43.654915 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 23:36:43.684009 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 23:36:43.694450 kernel: BTRFS info (device sda6): last unmount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:36:43.703645 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 23:36:43.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:43.723354 ignition[1321]: INFO : Ignition 2.24.0 Jan 13 23:36:43.723354 ignition[1321]: INFO : Stage: mount Jan 13 23:36:43.723354 ignition[1321]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:36:43.723354 ignition[1321]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 13 23:36:43.758116 kernel: audit: type=1130 audit(1768347403.710:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:43.758138 kernel: audit: type=1130 audit(1768347403.733:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:43.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:36:43.729904 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 23:36:43.762420 ignition[1321]: INFO : mount: mount passed Jan 13 23:36:43.762420 ignition[1321]: INFO : Ignition finished successfully Jan 13 23:36:43.735776 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 23:36:43.778570 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 23:36:43.799357 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1330) Jan 13 23:36:43.810197 kernel: BTRFS info (device sda6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:36:43.810210 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:36:43.820955 kernel: BTRFS info (device sda6): turning on async discard Jan 13 23:36:43.820969 kernel: BTRFS info (device sda6): enabling free space tree Jan 13 23:36:43.822473 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 23:36:43.851540 ignition[1347]: INFO : Ignition 2.24.0 Jan 13 23:36:43.851540 ignition[1347]: INFO : Stage: files Jan 13 23:36:43.857827 ignition[1347]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:36:43.857827 ignition[1347]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 13 23:36:43.857827 ignition[1347]: DEBUG : files: compiled without relabeling support, skipping Jan 13 23:36:43.872027 ignition[1347]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 23:36:43.872027 ignition[1347]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 23:36:43.968303 ignition[1347]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 23:36:43.973731 ignition[1347]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 23:36:43.973731 ignition[1347]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 23:36:43.968768 unknown[1347]: wrote ssh authorized keys file for user: core Jan 13 23:36:43.999468 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 13 23:36:44.007394 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 13 23:36:44.065790 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 23:36:44.417510 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 13 23:36:44.426283 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 23:36:44.426283 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 23:36:44.426283 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 23:36:44.426283 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 23:36:44.426283 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 23:36:44.426283 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(6): 
[finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 23:36:44.426283 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 23:36:44.426283 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 23:36:44.426283 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 23:36:44.426283 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 23:36:44.426283 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:36:44.515422 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:36:44.515422 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:36:44.515422 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 13 23:36:44.901849 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 23:36:46.528357 ignition[1347]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:36:46.528357 ignition[1347]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 23:36:46.677407 ignition[1347]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 23:36:46.928872 ignition[1347]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 23:36:46.928872 ignition[1347]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 23:36:46.928872 ignition[1347]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 23:36:46.974175 kernel: audit: type=1130 audit(1768347406.941:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:46.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:36:46.974248 ignition[1347]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 23:36:46.974248 ignition[1347]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 23:36:46.974248 ignition[1347]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 23:36:46.974248 ignition[1347]: INFO : files: files passed Jan 13 23:36:46.974248 ignition[1347]: INFO : Ignition finished successfully Jan 13 23:36:47.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:46.930929 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 23:36:47.034976 kernel: audit: type=1130 audit(1768347407.002:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.035001 kernel: audit: type=1131 audit(1768347407.002:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:46.942968 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 23:36:46.963726 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 23:36:46.988871 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 23:36:47.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.066019 initrd-setup-root-after-ignition[1380]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:36:46.988970 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 23:36:47.081963 kernel: audit: type=1130 audit(1768347407.050:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.081987 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:36:47.081987 initrd-setup-root-after-ignition[1377]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:36:47.043765 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 23:36:47.066888 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 23:36:47.082506 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 23:36:47.148019 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 23:36:47.148134 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 13 23:36:47.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.158051 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 23:36:47.201833 kernel: audit: type=1130 audit(1768347407.156:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.201861 kernel: audit: type=1131 audit(1768347407.156:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.190686 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 23:36:47.195211 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 23:36:47.196115 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 23:36:47.232608 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 23:36:47.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.239305 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 23:36:47.264115 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 13 23:36:47.264289 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:36:47.274251 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:36:47.283943 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 23:36:47.292434 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 23:36:47.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.292569 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 23:36:47.304292 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 23:36:47.308616 systemd[1]: Stopped target basic.target - Basic System. Jan 13 23:36:47.316683 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 23:36:47.325310 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 23:36:47.333575 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 23:36:47.342587 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 13 23:36:47.351775 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 23:36:47.360401 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 23:36:47.369633 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 23:36:47.377795 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 13 23:36:47.386790 systemd[1]: Stopped target swap.target - Swaps. Jan 13 23:36:47.424607 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 13 23:36:47.424631 kernel: audit: type=1131 audit(1768347407.401:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.394113 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 23:36:47.394229 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 23:36:47.405490 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:36:47.432729 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:36:47.442107 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 23:36:47.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.442354 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:36:47.478010 kernel: audit: type=1131 audit(1768347407.459:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.451937 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 23:36:47.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.452056 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 23:36:47.517595 kernel: audit: type=1131 audit(1768347407.482:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.517618 kernel: audit: type=1131 audit(1768347407.502:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.473463 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 23:36:47.539063 kernel: audit: type=1131 audit(1768347407.522:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.473633 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jan 13 23:36:47.483371 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 23:36:47.483459 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 23:36:47.503071 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 13 23:36:47.503164 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 23:36:47.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.546553 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 23:36:47.616298 kernel: audit: type=1131 audit(1768347407.579:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.616325 kernel: audit: type=1131 audit(1768347407.599:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.616410 ignition[1402]: INFO : Ignition 2.24.0 Jan 13 23:36:47.616410 ignition[1402]: INFO : Stage: umount Jan 13 23:36:47.616410 ignition[1402]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:36:47.616410 ignition[1402]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 13 23:36:47.616410 ignition[1402]: INFO : umount: umount passed Jan 13 23:36:47.616410 ignition[1402]: INFO : Ignition finished successfully Jan 13 23:36:47.695753 kernel: audit: type=1131 audit(1768347407.620:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.695779 kernel: audit: type=1131 audit(1768347407.643:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.695797 kernel: audit: type=1131 audit(1768347407.668:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:36:47.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.557543 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 23:36:47.564956 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 23:36:47.565122 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:36:47.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.580740 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 23:36:47.580851 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:36:47.600563 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 23:36:47.600670 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 23:36:47.636688 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 23:36:47.636778 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 23:36:47.644950 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 23:36:47.645055 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 23:36:47.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.669538 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 23:36:47.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.669618 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 23:36:47.687868 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 23:36:47.687934 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 23:36:47.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.692144 systemd[1]: Stopped target network.target - Network. Jan 13 23:36:47.699481 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 23:36:47.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.699546 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 23:36:47.713286 systemd[1]: Stopped target paths.target - Path Units. Jan 13 23:36:47.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.722006 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 13 23:36:47.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.857000 audit: BPF prog-id=9 op=UNLOAD Jan 13 23:36:47.857000 audit: BPF prog-id=6 op=UNLOAD Jan 13 23:36:47.726034 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:36:47.731032 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 23:36:47.739199 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 23:36:47.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.746987 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 23:36:47.747041 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 23:36:47.755414 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 23:36:47.755442 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 23:36:47.763547 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 13 23:36:47.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.763564 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:36:47.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.772013 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 23:36:47.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.772069 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 23:36:47.779752 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 23:36:47.779782 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 23:36:47.787846 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 23:36:47.796049 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 23:36:47.810562 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 23:36:47.811209 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 23:36:47.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.812371 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 23:36:47.822495 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 23:36:47.822594 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 23:36:48.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.835817 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 13 23:36:47.835934 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 23:36:48.044251 kernel: hv_netvsc 002248bb-d71a-0022-48bb-d71a002248bb eth0: Data path switched from VF: enP44730s1 Jan 13 23:36:48.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.846132 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 23:36:48.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.846219 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 23:36:47.856212 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 13 23:36:48.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.863531 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 23:36:48.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.863596 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:36:48.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.874689 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 23:36:48.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.874748 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 23:36:48.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.883657 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 23:36:48.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:48.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.896693 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 23:36:47.896775 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 23:36:47.914461 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 23:36:48.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:47.914523 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 23:36:47.923912 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 23:36:47.923960 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 23:36:47.933218 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:36:47.971551 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 23:36:47.971739 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:36:47.983603 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 23:36:47.983641 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 23:36:47.995783 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 23:36:47.995820 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:36:48.006843 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 23:36:48.006899 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 23:36:48.029362 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 23:36:48.029429 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 23:36:48.044320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 23:36:48.044405 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 23:36:48.056589 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 23:36:48.069080 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 13 23:36:48.069183 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:36:48.076849 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 23:36:48.260407 systemd-journald[501]: Received SIGTERM from PID 1 (systemd). Jan 13 23:36:48.076913 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:36:48.087602 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 23:36:48.087656 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:36:48.098318 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 23:36:48.098381 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:36:48.109918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 23:36:48.109973 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:36:48.119084 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 23:36:48.119182 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 23:36:48.137875 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 23:36:48.138014 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 23:36:48.147605 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 23:36:48.156243 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 23:36:48.182045 systemd[1]: Switching root. 
Jan 13 23:36:48.327341 systemd-journald[501]: Journal stopped Jan 13 23:36:52.938435 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 23:36:52.938460 kernel: SELinux: policy capability open_perms=1 Jan 13 23:36:52.938471 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 23:36:52.938478 kernel: SELinux: policy capability always_check_network=0 Jan 13 23:36:52.938488 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 23:36:52.938496 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 23:36:52.938505 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 23:36:52.938512 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 23:36:52.938519 kernel: SELinux: policy capability userspace_initial_context=0 Jan 13 23:36:52.938530 systemd[1]: Successfully loaded SELinux policy in 151.060ms. Jan 13 23:36:52.938539 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.563ms. Jan 13 23:36:52.938546 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 13 23:36:52.938552 systemd[1]: Detected virtualization microsoft. Jan 13 23:36:52.938558 systemd[1]: Detected architecture arm64. Jan 13 23:36:52.938566 systemd[1]: Detected first boot. Jan 13 23:36:52.938573 systemd[1]: Hostname set to <ci-4578.0.0-p-c34b1ae5c8>. Jan 13 23:36:52.938579 systemd[1]: Initializing machine ID from random generator. Jan 13 23:36:52.938586 zram_generator::config[1444]: No configuration found. Jan 13 23:36:52.938592 kernel: NET: Registered PF_VSOCK protocol family Jan 13 23:36:52.938600 systemd[1]: Populated /etc with preset unit settings. Jan 13 23:36:52.938606 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 23:36:52.938613 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 23:36:52.938619 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 23:36:52.938626 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 23:36:52.938632 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 23:36:52.938640 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 23:36:52.938646 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 23:36:52.938653 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 23:36:52.938660 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 23:36:52.938666 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 23:36:52.938673 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 23:36:52.938680 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:36:52.938687 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:36:52.938694 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 23:36:52.938700 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 23:36:52.938707 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 23:36:52.938713 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 23:36:52.938720 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 23:36:52.938727 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:36:52.938733 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:36:52.938742 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 23:36:52.938748 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 23:36:52.938755 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 23:36:52.938761 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 23:36:52.938769 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:36:52.938775 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 23:36:52.938782 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 13 23:36:52.938788 systemd[1]: Reached target slices.target - Slice Units. Jan 13 23:36:52.938795 systemd[1]: Reached target swap.target - Swaps. Jan 13 23:36:52.938801 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 23:36:52.938808 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 23:36:52.938817 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 13 23:36:52.938823 kernel: kauditd_printk_skb: 44 callbacks suppressed Jan 13 23:36:52.938830 kernel: audit: type=1335 audit(1768347412.410:103): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 13 23:36:52.938837 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:36:52.938844 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 13 23:36:52.938850 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:36:52.938857 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 13 23:36:52.938864 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 13 23:36:52.938870 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 23:36:52.938877 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:36:52.938884 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 23:36:52.938891 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 23:36:52.938897 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 23:36:52.938904 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 23:36:52.938911 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 23:36:52.938917 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 23:36:52.938925 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 13 23:36:52.938932 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 23:36:52.938939 systemd[1]: Reached target machines.target - Containers. Jan 13 23:36:52.938945 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 23:36:52.938952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:36:52.938959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 23:36:52.938966 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 23:36:52.938974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:36:52.938980 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 23:36:52.938987 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:36:52.938993 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 23:36:52.939000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:36:52.939007 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 23:36:52.939013 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 23:36:52.939021 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 23:36:52.939028 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 23:36:52.939035 kernel: audit: type=1131 audit(1768347412.779:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:52.939041 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 23:36:52.939048 kernel: audit: type=1131 audit(1768347412.804:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:52.939056 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:36:52.939062 kernel: audit: type=1334 audit(1768347412.829:106): prog-id=14 op=UNLOAD Jan 13 23:36:52.939068 kernel: audit: type=1334 audit(1768347412.829:107): prog-id=13 op=UNLOAD Jan 13 23:36:52.939074 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 23:36:52.939081 kernel: audit: type=1334 audit(1768347412.834:108): prog-id=15 op=LOAD Jan 13 23:36:52.939087 kernel: audit: type=1334 audit(1768347412.834:109): prog-id=16 op=LOAD Jan 13 23:36:52.939093 kernel: audit: type=1334 audit(1768347412.834:110): prog-id=17 op=LOAD Jan 13 23:36:52.939100 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 23:36:52.939107 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 23:36:52.939114 kernel: fuse: init (API version 7.41) Jan 13 23:36:52.939120 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 13 23:36:52.939139 systemd-journald[1527]: Collecting audit messages is enabled. Jan 13 23:36:52.939154 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 13 23:36:52.939161 kernel: ACPI: bus type drm_connector registered Jan 13 23:36:52.939169 systemd-journald[1527]: Journal started Jan 13 23:36:52.939184 systemd-journald[1527]: Runtime Journal (/run/log/journal/49a0362114814fb5bd0c96278401eb49) is 8M, max 78.3M, 70.3M free. Jan 13 23:36:52.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:52.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:52.829000 audit: BPF prog-id=14 op=UNLOAD Jan 13 23:36:52.829000 audit: BPF prog-id=13 op=UNLOAD Jan 13 23:36:52.834000 audit: BPF prog-id=15 op=LOAD Jan 13 23:36:52.834000 audit: BPF prog-id=16 op=LOAD Jan 13 23:36:52.834000 audit: BPF prog-id=17 op=LOAD Jan 13 23:36:52.099111 systemd[1]: Queued start job for default target multi-user.target. Jan 13 23:36:52.106873 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 13 23:36:52.107316 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 23:36:52.107622 systemd[1]: systemd-journald.service: Consumed 2.481s CPU time. Jan 13 23:36:52.944370 kernel: audit: type=1305 audit(1768347412.936:111): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 13 23:36:52.944434 kernel: audit: type=1300 audit(1768347412.936:111): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffed451680 a2=4000 a3=0 items=0 ppid=1 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:36:52.936000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 13 23:36:52.936000 audit[1527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffed451680 a2=4000 a3=0 items=0 ppid=1 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:36:52.936000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 13 23:36:52.987420 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 23:36:52.998302 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 23:36:52.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:52.999211 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 23:36:53.004539 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 23:36:53.009236 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 23:36:53.013387 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 13 23:36:53.018190 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 23:36:53.022858 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 23:36:53.028689 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:36:53.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.034265 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 23:36:53.034602 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 23:36:53.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.042285 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:36:53.042454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:36:53.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.049104 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 23:36:53.049256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 23:36:53.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.054153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:36:53.055432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:36:53.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.061665 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 23:36:53.061808 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 13 23:36:53.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.069453 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:36:53.069581 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 23:36:53.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.074538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 23:36:53.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.079619 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:36:53.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.085906 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 23:36:53.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.092441 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 13 23:36:53.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.098991 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 23:36:53.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.112374 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:36:53.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.120361 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 23:36:53.125630 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. 
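Each modprobe@<name>.service instance above loads one kernel module and then deactivates. A small sketch, assuming the usual /proc/modules format, that checks which of the modules requested by those units ended up resident (modules built into the kernel will not appear there, so "not listed" is not necessarily a failure):

# Compare the modules named by the modprobe@ units above against the
# kernel's standard /proc/modules listing.
wanted = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}
with open("/proc/modules") as f:
    loaded = {line.split()[0] for line in f}
for name in sorted(wanted):
    print(name, "loaded" if name in loaded else "not listed (maybe builtin)")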
Jan 13 23:36:53.132094 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 23:36:53.144357 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 23:36:53.148993 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 23:36:53.149025 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 23:36:53.155612 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 13 23:36:53.161128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:36:53.161228 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:36:53.162264 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 23:36:53.167587 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 23:36:53.172380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 23:36:53.173875 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 23:36:53.178576 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 23:36:53.179579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 23:36:53.200488 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 23:36:53.211471 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 23:36:53.217727 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 23:36:53.224277 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 23:36:53.231159 systemd-journald[1527]: Time spent on flushing to /var/log/journal/49a0362114814fb5bd0c96278401eb49 is 13.330ms for 1085 entries. Jan 13 23:36:53.231159 systemd-journald[1527]: System Journal (/var/log/journal/49a0362114814fb5bd0c96278401eb49) is 8M, max 2.2G, 2.2G free. Jan 13 23:36:53.277588 systemd-journald[1527]: Received client request to flush runtime journal. Jan 13 23:36:53.277640 kernel: loop1: detected capacity change from 0 to 207008 Jan 13 23:36:53.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.231586 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 23:36:53.243070 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 23:36:53.248952 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 13 23:36:53.259751 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
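systemd-journald reports 13.330 ms spent flushing 1085 entries to /var/log/journal; the per-entry cost implied by those two figures is straightforward arithmetic:

# Recompute the average flush cost per journal entry from the numbers
# journald printed above (13.330 ms for 1085 entries).
flush_ms = 13.330
entries = 1085
print(f"{flush_ms / entries * 1000:.1f} µs per entry")   # ≈ 12.3 µs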
Jan 13 23:36:53.279680 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 23:36:53.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.313583 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 23:36:53.317729 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 13 23:36:53.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.346355 kernel: loop2: detected capacity change from 0 to 45344 Jan 13 23:36:53.354645 systemd-tmpfiles[1587]: ACLs are not supported, ignoring. Jan 13 23:36:53.354657 systemd-tmpfiles[1587]: ACLs are not supported, ignoring. Jan 13 23:36:53.359474 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:36:53.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.368637 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 23:36:53.499425 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 23:36:53.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.504000 audit: BPF prog-id=18 op=LOAD Jan 13 23:36:53.504000 audit: BPF prog-id=19 op=LOAD Jan 13 23:36:53.504000 audit: BPF prog-id=20 op=LOAD Jan 13 23:36:53.509523 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 13 23:36:53.515000 audit: BPF prog-id=21 op=LOAD Jan 13 23:36:53.519500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 23:36:53.529150 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 23:36:53.540000 audit: BPF prog-id=22 op=LOAD Jan 13 23:36:53.540000 audit: BPF prog-id=23 op=LOAD Jan 13 23:36:53.540000 audit: BPF prog-id=24 op=LOAD Jan 13 23:36:53.544453 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 13 23:36:53.553000 audit: BPF prog-id=25 op=LOAD Jan 13 23:36:53.555000 audit: BPF prog-id=26 op=LOAD Jan 13 23:36:53.555000 audit: BPF prog-id=27 op=LOAD Jan 13 23:36:53.557042 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. Jan 13 23:36:53.557056 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. Jan 13 23:36:53.558523 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 23:36:53.565485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:36:53.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:36:53.605522 systemd-nsresourced[1607]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 13 23:36:53.607062 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 13 23:36:53.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.623685 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 23:36:53.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.676223 systemd-oomd[1604]: No swap; memory pressure usage will be degraded Jan 13 23:36:53.676778 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 13 23:36:53.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.707705 systemd-resolved[1605]: Positive Trust Anchors: Jan 13 23:36:53.707723 systemd-resolved[1605]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 23:36:53.707726 systemd-resolved[1605]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 13 23:36:53.707745 systemd-resolved[1605]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 23:36:53.715358 kernel: loop3: detected capacity change from 0 to 100192 Jan 13 23:36:53.742353 systemd-resolved[1605]: Using system hostname 'ci-4578.0.0-p-c34b1ae5c8'. Jan 13 23:36:53.743545 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 23:36:53.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.748572 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:36:53.848128 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 23:36:53.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:53.852000 audit: BPF prog-id=8 op=UNLOAD Jan 13 23:36:53.852000 audit: BPF prog-id=7 op=UNLOAD Jan 13 23:36:53.852000 audit: BPF prog-id=28 op=LOAD Jan 13 23:36:53.852000 audit: BPF prog-id=29 op=LOAD Jan 13 23:36:53.854854 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:36:53.880923 systemd-udevd[1627]: Using default interface naming scheme 'v257'. 
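systemd-oomd warns above that, with no swap configured, its memory-pressure tracking is degraded. The pressure signal it consumes comes from the kernel's PSI interface; a sketch that reads the same /proc/pressure/memory file (its presence assumes a PSI-enabled kernel, which this Flatcar kernel is):

# Print the kernel PSI memory-pressure figures that systemd-oomd consumes.
# /proc/pressure/memory has two lines, "some ..." and "full ...", each with
# avg10/avg60/avg300 percentages and a cumulative total in microseconds.
with open("/proc/pressure/memory") as f:
    for line in f:
        kind, *fields = line.split()
        stats = dict(field.split("=") for field in fields)
        print(kind, stats["avg10"], "% averaged over the last 10s")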
Jan 13 23:36:54.084922 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:36:54.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:54.092000 audit: BPF prog-id=30 op=LOAD Jan 13 23:36:54.094866 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 23:36:54.160200 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 23:36:54.167378 kernel: loop4: detected capacity change from 0 to 48424 Jan 13 23:36:54.200361 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 23:36:54.208240 systemd-networkd[1637]: lo: Link UP Jan 13 23:36:54.208247 systemd-networkd[1637]: lo: Gained carrier Jan 13 23:36:54.210307 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 23:36:54.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:54.222128 systemd[1]: Reached target network.target - Network. Jan 13 23:36:54.229416 kernel: hv_vmbus: registering driver hv_balloon Jan 13 23:36:54.229495 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 13 23:36:54.229909 systemd-networkd[1637]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:36:54.230362 systemd-networkd[1637]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 23:36:54.235336 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 13 23:36:54.232477 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 13 23:36:54.252989 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
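With udevd and networkd now running, the loopback device is the first link to gain carrier. The same per-link state can be read straight from sysfs; the paths below are the standard /sys/class/net attributes, while the interface names are simply whatever this VM exposes:

import os

# List every network interface known to the kernel and its operational
# state via /sys/class/net/<iface>/operstate.
for iface in sorted(os.listdir("/sys/class/net")):
    with open(f"/sys/class/net/{iface}/operstate") as f:
        print(iface, f.read().strip())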
Jan 13 23:36:54.257659 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#171 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 13 23:36:54.292357 kernel: hv_vmbus: registering driver hyperv_fb Jan 13 23:36:54.292457 kernel: mlx5_core aeba:00:02.0 enP44730s1: Link up Jan 13 23:36:54.292687 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 13 23:36:54.293378 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 13 23:36:54.324352 kernel: hv_netvsc 002248bb-d71a-0022-48bb-d71a002248bb eth0: Data path switched to VF: enP44730s1 Jan 13 23:36:54.325317 systemd-networkd[1637]: enP44730s1: Link UP Jan 13 23:36:54.325463 systemd-networkd[1637]: eth0: Link UP Jan 13 23:36:54.325466 systemd-networkd[1637]: eth0: Gained carrier Jan 13 23:36:54.325485 systemd-networkd[1637]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:36:54.334639 systemd-networkd[1637]: enP44730s1: Gained carrier Jan 13 23:36:54.346638 kernel: Console: switching to colour dummy device 80x25 Jan 13 23:36:54.346764 systemd-networkd[1637]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 13 23:36:54.349369 kernel: Console: switching to colour frame buffer device 128x48 Jan 13 23:36:54.358587 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 13 23:36:54.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:54.372362 kernel: MACsec IEEE 802.1AE Jan 13 23:36:54.387148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:36:54.400090 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 23:36:54.400308 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:36:54.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:54.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:54.407696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:36:54.575384 kernel: loop5: detected capacity change from 0 to 207008 Jan 13 23:36:54.598425 kernel: loop6: detected capacity change from 0 to 45344 Jan 13 23:36:54.615358 kernel: loop7: detected capacity change from 0 to 100192 Jan 13 23:36:54.628482 kernel: loop1: detected capacity change from 0 to 48424 Jan 13 23:36:54.635018 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 13 23:36:54.642501 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 23:36:54.651882 (sd-merge)[1724]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 13 23:36:54.654741 (sd-merge)[1724]: Merged extensions into '/usr'. 
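eth0 comes up with the DHCPv4 address 10.200.20.17/24. Its reverse-DNS name falls under 10.in-addr.arpa, one of the negative trust anchors systemd-resolved listed a few entries earlier, so resolved does not demand DNSSEC proofs for reverse lookups in that zone. A quick check with Python's ipaddress module, using only the address from the log:

import ipaddress

# The address DHCP handed to eth0 in the log above.
addr = ipaddress.ip_address("10.200.20.17")
print(addr.reverse_pointer)   # 17.20.200.10.in-addr.arpa
print(addr.is_private)        # True: RFC 1918 space, hence covered by the
                              # 10.in-addr.arpa negative trust anchor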
Jan 13 23:36:54.658522 systemd[1]: Reload requested from client PID 1585 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 23:36:54.658646 systemd[1]: Reloading... Jan 13 23:36:54.724357 zram_generator::config[1794]: No configuration found. Jan 13 23:36:54.895017 systemd[1]: Reloading finished in 235 ms. Jan 13 23:36:54.923718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:36:54.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:54.930083 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 23:36:54.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:54.935809 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 23:36:54.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:54.951479 systemd[1]: Starting ensure-sysext.service... Jan 13 23:36:54.958517 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 23:36:54.970000 audit: BPF prog-id=31 op=LOAD Jan 13 23:36:54.970000 audit: BPF prog-id=25 op=UNLOAD Jan 13 23:36:54.970000 audit: BPF prog-id=32 op=LOAD Jan 13 23:36:54.970000 audit: BPF prog-id=33 op=LOAD Jan 13 23:36:54.970000 audit: BPF prog-id=26 op=UNLOAD Jan 13 23:36:54.970000 audit: BPF prog-id=27 op=UNLOAD Jan 13 23:36:54.971000 audit: BPF prog-id=34 op=LOAD Jan 13 23:36:54.971000 audit: BPF prog-id=18 op=UNLOAD Jan 13 23:36:54.971000 audit: BPF prog-id=35 op=LOAD Jan 13 23:36:54.971000 audit: BPF prog-id=36 op=LOAD Jan 13 23:36:54.971000 audit: BPF prog-id=19 op=UNLOAD Jan 13 23:36:54.971000 audit: BPF prog-id=20 op=UNLOAD Jan 13 23:36:54.971000 audit: BPF prog-id=37 op=LOAD Jan 13 23:36:54.971000 audit: BPF prog-id=22 op=UNLOAD Jan 13 23:36:54.972000 audit: BPF prog-id=38 op=LOAD Jan 13 23:36:54.972000 audit: BPF prog-id=39 op=LOAD Jan 13 23:36:54.972000 audit: BPF prog-id=23 op=UNLOAD Jan 13 23:36:54.972000 audit: BPF prog-id=24 op=UNLOAD Jan 13 23:36:54.973000 audit: BPF prog-id=40 op=LOAD Jan 13 23:36:54.973000 audit: BPF prog-id=21 op=UNLOAD Jan 13 23:36:54.973000 audit: BPF prog-id=41 op=LOAD Jan 13 23:36:54.973000 audit: BPF prog-id=15 op=UNLOAD Jan 13 23:36:54.973000 audit: BPF prog-id=42 op=LOAD Jan 13 23:36:54.973000 audit: BPF prog-id=43 op=LOAD Jan 13 23:36:54.973000 audit: BPF prog-id=16 op=UNLOAD Jan 13 23:36:54.973000 audit: BPF prog-id=17 op=UNLOAD Jan 13 23:36:54.973000 audit: BPF prog-id=44 op=LOAD Jan 13 23:36:54.973000 audit: BPF prog-id=45 op=LOAD Jan 13 23:36:54.973000 audit: BPF prog-id=28 op=UNLOAD Jan 13 23:36:54.973000 audit: BPF prog-id=29 op=UNLOAD Jan 13 23:36:54.974000 audit: BPF prog-id=46 op=LOAD Jan 13 23:36:54.974000 audit: BPF prog-id=30 op=UNLOAD Jan 13 23:36:54.980489 systemd[1]: Reload requested from client PID 1851 ('systemctl') (unit ensure-sysext.service)... Jan 13 23:36:54.980503 systemd[1]: Reloading... 
Jan 13 23:36:54.982273 systemd-tmpfiles[1852]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 13 23:36:54.982774 systemd-tmpfiles[1852]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 13 23:36:54.983188 systemd-tmpfiles[1852]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 23:36:54.984626 systemd-tmpfiles[1852]: ACLs are not supported, ignoring. Jan 13 23:36:54.984785 systemd-tmpfiles[1852]: ACLs are not supported, ignoring. Jan 13 23:36:55.004998 systemd-tmpfiles[1852]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 23:36:55.005394 systemd-tmpfiles[1852]: Skipping /boot Jan 13 23:36:55.012709 systemd-tmpfiles[1852]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 23:36:55.012722 systemd-tmpfiles[1852]: Skipping /boot Jan 13 23:36:55.047588 zram_generator::config[1884]: No configuration found. Jan 13 23:36:55.204922 systemd[1]: Reloading finished in 224 ms. Jan 13 23:36:55.215000 audit: BPF prog-id=47 op=LOAD Jan 13 23:36:55.215000 audit: BPF prog-id=48 op=LOAD Jan 13 23:36:55.215000 audit: BPF prog-id=44 op=UNLOAD Jan 13 23:36:55.215000 audit: BPF prog-id=45 op=UNLOAD Jan 13 23:36:55.216000 audit: BPF prog-id=49 op=LOAD Jan 13 23:36:55.216000 audit: BPF prog-id=41 op=UNLOAD Jan 13 23:36:55.216000 audit: BPF prog-id=50 op=LOAD Jan 13 23:36:55.216000 audit: BPF prog-id=51 op=LOAD Jan 13 23:36:55.216000 audit: BPF prog-id=42 op=UNLOAD Jan 13 23:36:55.216000 audit: BPF prog-id=43 op=UNLOAD Jan 13 23:36:55.216000 audit: BPF prog-id=52 op=LOAD Jan 13 23:36:55.216000 audit: BPF prog-id=37 op=UNLOAD Jan 13 23:36:55.217000 audit: BPF prog-id=53 op=LOAD Jan 13 23:36:55.217000 audit: BPF prog-id=54 op=LOAD Jan 13 23:36:55.217000 audit: BPF prog-id=38 op=UNLOAD Jan 13 23:36:55.217000 audit: BPF prog-id=39 op=UNLOAD Jan 13 23:36:55.217000 audit: BPF prog-id=55 op=LOAD Jan 13 23:36:55.217000 audit: BPF prog-id=34 op=UNLOAD Jan 13 23:36:55.217000 audit: BPF prog-id=56 op=LOAD Jan 13 23:36:55.217000 audit: BPF prog-id=57 op=LOAD Jan 13 23:36:55.217000 audit: BPF prog-id=35 op=UNLOAD Jan 13 23:36:55.217000 audit: BPF prog-id=36 op=UNLOAD Jan 13 23:36:55.218000 audit: BPF prog-id=58 op=LOAD Jan 13 23:36:55.218000 audit: BPF prog-id=31 op=UNLOAD Jan 13 23:36:55.218000 audit: BPF prog-id=59 op=LOAD Jan 13 23:36:55.218000 audit: BPF prog-id=60 op=LOAD Jan 13 23:36:55.218000 audit: BPF prog-id=32 op=UNLOAD Jan 13 23:36:55.218000 audit: BPF prog-id=33 op=UNLOAD Jan 13 23:36:55.218000 audit: BPF prog-id=61 op=LOAD Jan 13 23:36:55.218000 audit: BPF prog-id=46 op=UNLOAD Jan 13 23:36:55.218000 audit: BPF prog-id=62 op=LOAD Jan 13 23:36:55.218000 audit: BPF prog-id=40 op=UNLOAD Jan 13 23:36:55.227408 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:36:55.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.239557 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 23:36:55.250138 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 23:36:55.256551 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
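systemd-tmpfiles flags duplicate entries for /var/lib/nfs/sm, /var/lib/nfs/sm.bak and /root across the tmpfiles.d fragments and ignores the later ones. A rough sketch of spotting such duplicates, simplified in that it keys on the path field only and ignores tmpfiles.d's real override-by-filename and specifier rules:

import glob

# Report paths that appear on more than one non-comment tmpfiles.d line,
# roughly mirroring the "Duplicate line for path ..." warnings above.
seen = {}
for conf in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf") +
                   glob.glob("/etc/tmpfiles.d/*.conf")):
    with open(conf) as f:
        for lineno, line in enumerate(f, start=1):
            parts = line.split()
            if len(parts) < 2 or parts[0].startswith("#"):
                continue
            path = parts[1]
            if path in seen:
                print(f"{conf}:{lineno}: duplicate line for path {path!r} "
                      f"(first seen at {seen[path]})")
            else:
                seen[path] = f"{conf}:{lineno}"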
Jan 13 23:36:55.262550 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 23:36:55.276605 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 23:36:55.286740 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:36:55.285000 audit[1947]: SYSTEM_BOOT pid=1947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.288053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:36:55.294528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:36:55.303072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:36:55.307665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:36:55.307907 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:36:55.308036 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:36:55.309689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:36:55.310216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:36:55.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.316566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:36:55.316737 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:36:55.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.322144 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:36:55.322309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 23:36:55.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:36:55.336625 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:36:55.340576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:36:55.348452 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:36:55.356543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:36:55.360966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:36:55.361123 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:36:55.361195 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:36:55.362212 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 23:36:55.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.368126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:36:55.368304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:36:55.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.375001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:36:55.375703 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:36:55.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.382521 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:36:55.383717 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 23:36:55.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.398048 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 13 23:36:55.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.407040 systemd[1]: Finished ensure-sysext.service. Jan 13 23:36:55.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.412227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:36:55.413281 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:36:55.420789 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 23:36:55.427041 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:36:55.433510 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:36:55.437946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:36:55.438034 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:36:55.438066 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:36:55.438106 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 23:36:55.442990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:36:55.444403 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:36:55.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.449533 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 23:36:55.449695 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 23:36:55.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.454420 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:36:55.454567 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:36:55.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:36:55.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.459927 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:36:55.460088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 23:36:55.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:36:55.467517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 23:36:55.467611 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 23:36:55.504284 augenrules[1990]: No rules Jan 13 23:36:55.502000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 13 23:36:55.502000 audit[1990]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdae476d0 a2=420 a3=0 items=0 ppid=1943 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:36:55.502000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:36:55.505701 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 23:36:55.505965 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 23:36:56.062609 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 23:36:56.067994 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 23:36:56.201539 systemd-networkd[1637]: eth0: Gained IPv6LL Jan 13 23:36:56.203523 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 23:36:56.208934 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 23:37:00.555458 ldconfig[1945]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 23:37:00.574483 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 23:37:00.582006 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 23:37:00.616670 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 23:37:00.621632 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 23:37:00.626029 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 23:37:00.631073 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
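The audit PROCTITLE record a few entries back carries the rule-loading command as NUL-separated hex. Decoding it (pure string manipulation on the value shown in the log) recovers the auditctl invocation that audit-rules.service ran:

# Decode the hex-encoded, NUL-separated proctitle from the audit record above.
hexstr = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = bytes.fromhex(hexstr).split(b"\x00")
print([a.decode() for a in argv])
# ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']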
Jan 13 23:37:00.636404 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 23:37:00.640832 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 23:37:00.646002 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 13 23:37:00.651116 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 13 23:37:00.655652 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 23:37:00.660641 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 23:37:00.660672 systemd[1]: Reached target paths.target - Path Units. Jan 13 23:37:00.664429 systemd[1]: Reached target timers.target - Timer Units. Jan 13 23:37:00.684471 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 23:37:00.690442 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 23:37:00.695937 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 13 23:37:00.701489 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 13 23:37:00.706640 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 13 23:37:00.718976 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 23:37:00.723489 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 13 23:37:00.728859 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 23:37:00.733649 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 23:37:00.737847 systemd[1]: Reached target basic.target - Basic System. Jan 13 23:37:00.741707 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 23:37:00.741736 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 23:37:00.759408 systemd[1]: Starting chronyd.service - NTP client/server... Jan 13 23:37:00.774799 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 23:37:00.782487 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 23:37:00.789490 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 23:37:00.796490 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 23:37:00.803454 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 23:37:00.807428 chronyd[2003]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 13 23:37:00.817000 jq[2011]: false Jan 13 23:37:00.817351 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 23:37:00.821655 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 23:37:00.823030 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 13 23:37:00.828155 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
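Several of the units above (docker.socket, sshd.socket, systemd-hostnamed.socket, ...) are socket-activated: systemd holds the listening sockets and starts the service only on the first connection. The handoff is plain file descriptors plus the LISTEN_FDS/LISTEN_PID environment variables; a minimal sketch of a service adopting one passed socket, assuming exactly one stream socket was passed and Python 3.7+ (which auto-detects family/type from a fileno):

import os
import socket

# systemd's socket-activation protocol: passed sockets start at fd 3
# (SD_LISTEN_FDS_START), their count arrives in LISTEN_FDS, and the values
# are only valid when LISTEN_PID names this process.
SD_LISTEN_FDS_START = 3

if os.environ.get("LISTEN_PID") == str(os.getpid()) and \
        int(os.environ.get("LISTEN_FDS", "0")) >= 1:
    sock = socket.socket(fileno=SD_LISTEN_FDS_START)   # adopt the passed fd
    conn, _peer = sock.accept()
    conn.sendall(b"hello from a socket-activated sketch\n")
    conn.close()
else:
    print("not started by a systemd .socket unit")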
Jan 13 23:37:00.828545 chronyd[2003]: Timezone right/UTC failed leap second check, ignoring Jan 13 23:37:00.828690 chronyd[2003]: Loaded seccomp filter (level 2) Jan 13 23:37:00.831523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:37:00.836680 KVP[2013]: KVP starting; pid is:2013 Jan 13 23:37:00.839355 KVP[2013]: KVP LIC Version: 3.1 Jan 13 23:37:00.840387 kernel: hv_utils: KVP IC version 4.0 Jan 13 23:37:00.843223 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 23:37:00.851508 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 23:37:00.858638 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 23:37:00.865485 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 23:37:00.872214 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 23:37:00.881874 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 23:37:00.887619 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 23:37:00.889515 extend-filesystems[2012]: Found /dev/sda6 Jan 13 23:37:00.888075 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 23:37:00.893206 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 23:37:00.905257 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 23:37:00.912282 systemd[1]: Started chronyd.service - NTP client/server. Jan 13 23:37:00.914541 jq[2038]: true Jan 13 23:37:00.918436 extend-filesystems[2012]: Found /dev/sda9 Jan 13 23:37:00.918702 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 23:37:00.929538 extend-filesystems[2012]: Checking size of /dev/sda9 Jan 13 23:37:00.930099 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 23:37:00.935542 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 23:37:00.936436 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 23:37:00.936616 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 23:37:00.942611 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 23:37:00.942808 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 23:37:00.961510 jq[2051]: true Jan 13 23:37:00.970275 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 23:37:00.991305 update_engine[2033]: I20260113 23:37:00.991222 2033 main.cc:92] Flatcar Update Engine starting Jan 13 23:37:00.999350 extend-filesystems[2012]: Resized partition /dev/sda9 Jan 13 23:37:01.016850 extend-filesystems[2071]: resize2fs 1.47.3 (8-Jul-2025) Jan 13 23:37:01.036360 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Jan 13 23:37:01.036430 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Jan 13 23:37:01.044786 tar[2049]: linux-arm64/LICENSE Jan 13 23:37:01.067044 tar[2049]: linux-arm64/helm Jan 13 23:37:01.049019 systemd-logind[2030]: New seat seat0. Jan 13 23:37:01.067376 systemd-logind[2030]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 13 23:37:01.067640 systemd[1]: Started systemd-logind.service - User Login Management. 
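The kernel reports the root filesystem growing online from 6,359,552 to 6,376,955 blocks; the resizer output in the next entries confirms these are 4 KiB ("(4k)") blocks. Converting the two block counts shows how little space the resize actually added:

# Convert the ext4 block counts from the kernel messages above into sizes.
BLOCK = 4096                      # 4 KiB blocks, per the "(4k)" resize output
old, new = 6359552, 6376955
print(f"before: {old * BLOCK / 2**30:.2f} GiB")          # ≈ 24.26 GiB
print(f"after:  {new * BLOCK / 2**30:.2f} GiB")          # ≈ 24.33 GiB
print(f"gained: {(new - old) * BLOCK / 2**20:.0f} MiB")  # ≈ 68 MiB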
Jan 13 23:37:01.083573 extend-filesystems[2071]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 23:37:01.083573 extend-filesystems[2071]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 13 23:37:01.083573 extend-filesystems[2071]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Jan 13 23:37:01.181733 extend-filesystems[2012]: Resized filesystem in /dev/sda9 Jan 13 23:37:01.201377 update_engine[2033]: I20260113 23:37:01.153851 2033 update_check_scheduler.cc:74] Next update check in 3m18s Jan 13 23:37:01.085304 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 23:37:01.144724 dbus-daemon[2006]: [system] SELinux support is enabled Jan 13 23:37:01.201693 bash[2085]: Updated "/home/core/.ssh/authorized_keys" Jan 13 23:37:01.085840 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 23:37:01.191083 dbus-daemon[2006]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 23:37:01.133391 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 23:37:01.155801 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 23:37:01.172266 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 23:37:01.172385 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 23:37:01.172406 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 23:37:01.182757 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 23:37:01.182780 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 23:37:01.190596 systemd[1]: Started update-engine.service - Update Engine. Jan 13 23:37:01.207807 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 23:37:01.229403 coreos-metadata[2005]: Jan 13 23:37:01.228 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 13 23:37:01.233438 coreos-metadata[2005]: Jan 13 23:37:01.232 INFO Fetch successful Jan 13 23:37:01.233438 coreos-metadata[2005]: Jan 13 23:37:01.232 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 13 23:37:01.237211 coreos-metadata[2005]: Jan 13 23:37:01.237 INFO Fetch successful Jan 13 23:37:01.237211 coreos-metadata[2005]: Jan 13 23:37:01.237 INFO Fetching http://168.63.129.16/machine/fec1f53e-58ef-4cf5-89e9-108b86736b69/9a1435c0%2Dd322%2D4adc%2D9124%2Dce1ecb16829f.%5Fci%2D4578.0.0%2Dp%2Dc34b1ae5c8?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 13 23:37:01.240361 coreos-metadata[2005]: Jan 13 23:37:01.239 INFO Fetch successful Jan 13 23:37:01.240361 coreos-metadata[2005]: Jan 13 23:37:01.239 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 13 23:37:01.248044 coreos-metadata[2005]: Jan 13 23:37:01.248 INFO Fetch successful Jan 13 23:37:01.304057 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 23:37:01.319133 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
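coreos-metadata's final fetch above goes to the Azure Instance Metadata Service at 169.254.169.254. The same endpoint can be queried from inside the VM; this sketch uses only the URL shown in the log plus the "Metadata: true" header IMDS requires, and will fail anywhere other than on the instance itself:

import urllib.request

# Query the Azure IMDS endpoint that coreos-metadata fetched above.
url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")
req = urllib.request.Request(url, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=2) as resp:
    print(resp.read().decode())   # the VM size string (value is instance-specific)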
Jan 13 23:37:01.513434 locksmithd[2139]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 23:37:01.572584 tar[2049]: linux-arm64/README.md Jan 13 23:37:01.589428 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 23:37:01.593659 containerd[2052]: time="2026-01-13T23:37:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 13 23:37:01.595645 containerd[2052]: time="2026-01-13T23:37:01.595597080Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 13 23:37:01.598972 sshd_keygen[2035]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.609690616Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="600.272µs" Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.609728624Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.609767936Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.609776912Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.609890064Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.609900696Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.609962976Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.609971392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.610154080Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.610163624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.610173352Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 13 23:37:01.610355 containerd[2052]: time="2026-01-13T23:37:01.610182376Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 13 23:37:01.610580 containerd[2052]: time="2026-01-13T23:37:01.610297544Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 13 
23:37:01.610580 containerd[2052]: time="2026-01-13T23:37:01.610305520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 13 23:37:01.611516 containerd[2052]: time="2026-01-13T23:37:01.611479376Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 13 23:37:01.611701 containerd[2052]: time="2026-01-13T23:37:01.611681072Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 13 23:37:01.611721 containerd[2052]: time="2026-01-13T23:37:01.611711120Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 13 23:37:01.611721 containerd[2052]: time="2026-01-13T23:37:01.611718384Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 13 23:37:01.611769 containerd[2052]: time="2026-01-13T23:37:01.611734400Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 13 23:37:01.612440 containerd[2052]: time="2026-01-13T23:37:01.611870056Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 13 23:37:01.612440 containerd[2052]: time="2026-01-13T23:37:01.611926640Z" level=info msg="metadata content store policy set" policy=shared Jan 13 23:37:01.619114 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 23:37:01.626810 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 23:37:01.633522 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... 
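Earlier in the containerd startup, the btrfs snapshotter was skipped because /var/lib/containerd sits on ext4. The filesystem type backing any path can be checked the same way in principle, by matching the path against /proc/self/mounts; a rough sketch using longest-prefix matching only, ignoring bind-mount and escaping subtleties:

import os

# Find the filesystem type backing a path by picking the longest mount
# point in /proc/self/mounts that is a prefix of it. On this machine the
# answer for /var/lib/containerd is ext4, which is why containerd skipped
# the btrfs snapshotter above.
def fstype(path: str) -> str:
    path = os.path.realpath(path)
    best = ("", "unknown")
    with open("/proc/self/mounts") as f:
        for line in f:
            _dev, mnt, typ, *_ = line.split()
            if (path == mnt or path.startswith(mnt.rstrip("/") + "/")) \
                    and len(mnt) > len(best[0]):
                best = (mnt, typ)
    return best[1]

print(fstype("/var/lib/containerd"))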
Jan 13 23:37:01.643019 containerd[2052]: time="2026-01-13T23:37:01.642978912Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 13 23:37:01.643955 containerd[2052]: time="2026-01-13T23:37:01.643124360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 13 23:37:01.643955 containerd[2052]: time="2026-01-13T23:37:01.643863984Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 13 23:37:01.643955 containerd[2052]: time="2026-01-13T23:37:01.643890944Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 13 23:37:01.643955 containerd[2052]: time="2026-01-13T23:37:01.643904144Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 13 23:37:01.643955 containerd[2052]: time="2026-01-13T23:37:01.643913592Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 13 23:37:01.643955 containerd[2052]: time="2026-01-13T23:37:01.643922392Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 13 23:37:01.643955 containerd[2052]: time="2026-01-13T23:37:01.643928184Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 13 23:37:01.643955 containerd[2052]: time="2026-01-13T23:37:01.643935568Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 13 23:37:01.643955 containerd[2052]: time="2026-01-13T23:37:01.643943616Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 13 23:37:01.645262 containerd[2052]: time="2026-01-13T23:37:01.644133704Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 13 23:37:01.645666 containerd[2052]: time="2026-01-13T23:37:01.645385152Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 13 23:37:01.645666 containerd[2052]: time="2026-01-13T23:37:01.645409672Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 13 23:37:01.645666 containerd[2052]: time="2026-01-13T23:37:01.645427048Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 13 23:37:01.645666 containerd[2052]: time="2026-01-13T23:37:01.645590088Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 13 23:37:01.645666 containerd[2052]: time="2026-01-13T23:37:01.645608640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 13 23:37:01.645666 containerd[2052]: time="2026-01-13T23:37:01.645620112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 13 23:37:01.645666 containerd[2052]: time="2026-01-13T23:37:01.645627240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 13 23:37:01.645666 containerd[2052]: time="2026-01-13T23:37:01.645634664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 13 23:37:01.645666 containerd[2052]: 
time="2026-01-13T23:37:01.645640664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 13 23:37:01.645666 containerd[2052]: time="2026-01-13T23:37:01.645649864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 13 23:37:01.645915 containerd[2052]: time="2026-01-13T23:37:01.645656528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 13 23:37:01.645915 containerd[2052]: time="2026-01-13T23:37:01.645855648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 13 23:37:01.645915 containerd[2052]: time="2026-01-13T23:37:01.645868632Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 13 23:37:01.645915 containerd[2052]: time="2026-01-13T23:37:01.645876368Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 13 23:37:01.645915 containerd[2052]: time="2026-01-13T23:37:01.645900248Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 13 23:37:01.646045 containerd[2052]: time="2026-01-13T23:37:01.646033352Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 13 23:37:01.646714 containerd[2052]: time="2026-01-13T23:37:01.646484440Z" level=info msg="Start snapshots syncer" Jan 13 23:37:01.646714 containerd[2052]: time="2026-01-13T23:37:01.646522392Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 13 23:37:01.647588 containerd[2052]: time="2026-01-13T23:37:01.647488280Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 13 
23:37:01.647588 containerd[2052]: time="2026-01-13T23:37:01.647548240Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 13 23:37:01.647768 containerd[2052]: time="2026-01-13T23:37:01.647751232Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.648877768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.648905560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.648921824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.648928816Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.648940648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.648952376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.648959168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.648967616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.648974960Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.649016856Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.649029848Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.649035240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.649042224Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 13 23:37:01.649293 containerd[2052]: time="2026-01-13T23:37:01.649046864Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 13 23:37:01.649648 containerd[2052]: time="2026-01-13T23:37:01.649054080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 13 23:37:01.649648 containerd[2052]: time="2026-01-13T23:37:01.649069864Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 13 23:37:01.649648 containerd[2052]: time="2026-01-13T23:37:01.649084656Z" level=info msg="runtime interface created" Jan 13 23:37:01.649648 containerd[2052]: 
time="2026-01-13T23:37:01.649088664Z" level=info msg="created NRI interface" Jan 13 23:37:01.649648 containerd[2052]: time="2026-01-13T23:37:01.649093480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 13 23:37:01.649648 containerd[2052]: time="2026-01-13T23:37:01.649102504Z" level=info msg="Connect containerd service" Jan 13 23:37:01.649648 containerd[2052]: time="2026-01-13T23:37:01.649118528Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 23:37:01.650838 containerd[2052]: time="2026-01-13T23:37:01.650807288Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 23:37:01.652098 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 23:37:01.658073 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 23:37:01.664657 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 23:37:01.673161 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 13 23:37:01.690318 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 23:37:01.696598 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 23:37:01.702573 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 23:37:01.709067 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 23:37:01.975102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:37:02.017145 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.072611408Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.072675128Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.072701136Z" level=info msg="Start subscribing containerd event" Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.072734224Z" level=info msg="Start recovering state" Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.072807344Z" level=info msg="Start event monitor" Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.072818024Z" level=info msg="Start cni network conf syncer for default" Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.072822920Z" level=info msg="Start streaming server" Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.072828848Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.072833440Z" level=info msg="runtime interface starting up..." Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.072837104Z" level=info msg="starting plugins..." Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.072847904Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 13 23:37:02.078076 containerd[2052]: time="2026-01-13T23:37:02.074015592Z" level=info msg="containerd successfully booted in 0.480689s" Jan 13 23:37:02.073145 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 13 23:37:02.079155 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 23:37:02.087599 systemd[1]: Startup finished in 3.059s (kernel) + 13.761s (initrd) + 13.032s (userspace) = 29.853s. Jan 13 23:37:02.371384 kubelet[2224]: E0113 23:37:02.371247 2224 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:37:02.375005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:37:02.375127 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:37:02.375725 systemd[1]: kubelet.service: Consumed 558ms CPU time, 256.7M memory peak. Jan 13 23:37:02.708816 login[2213]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:37:02.708817 login[2212]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:37:02.717412 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 23:37:02.719527 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 23:37:02.721275 systemd-logind[2030]: New session 1 of user core. Jan 13 23:37:02.725793 systemd-logind[2030]: New session 2 of user core. Jan 13 23:37:02.748894 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 23:37:02.752445 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 23:37:02.769571 (systemd)[2239]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:37:02.771818 systemd-logind[2030]: New session 3 of user core. Jan 13 23:37:02.910871 systemd[2239]: Queued start job for default target default.target. Jan 13 23:37:02.916068 systemd[2239]: Created slice app.slice - User Application Slice. Jan 13 23:37:02.916094 systemd[2239]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 13 23:37:02.916103 systemd[2239]: Reached target paths.target - Paths. Jan 13 23:37:02.916147 systemd[2239]: Reached target timers.target - Timers. Jan 13 23:37:02.917832 systemd[2239]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 23:37:02.920481 systemd[2239]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 13 23:37:02.926365 systemd[2239]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 23:37:02.926511 systemd[2239]: Reached target sockets.target - Sockets. Jan 13 23:37:02.930363 systemd[2239]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 13 23:37:02.930446 systemd[2239]: Reached target basic.target - Basic System. Jan 13 23:37:02.930493 systemd[2239]: Reached target default.target - Main User Target. Jan 13 23:37:02.930513 systemd[2239]: Startup finished in 154ms. Jan 13 23:37:02.930609 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 23:37:02.936541 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 23:37:02.937139 systemd[1]: Started session-2.scope - Session 2 of User core. 
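The kubelet exit above (and the identical failures later in the log, restart counters 1 through 4) is the usual state of a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm during init or join, so until then the unit keeps failing and systemd keeps rescheduling it. As a sketch only, shown as a heredoc to keep the example self-contained, the file kubeadm generates is a KubeletConfiguration along these lines (all values illustrative):

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches SystemdCgroup=true in the CRI config above
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                   # hypothetical cluster DNS service IP
    EOF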
Jan 13 23:37:03.215458 waagent[2208]: 2026-01-13T23:37:03.215382Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 13 23:37:03.220353 waagent[2208]: 2026-01-13T23:37:03.220290Z INFO Daemon Daemon OS: flatcar 4578.0.0 Jan 13 23:37:03.223923 waagent[2208]: 2026-01-13T23:37:03.223879Z INFO Daemon Daemon Python: 3.12.11 Jan 13 23:37:03.229438 waagent[2208]: 2026-01-13T23:37:03.229390Z INFO Daemon Daemon Run daemon Jan 13 23:37:03.233029 waagent[2208]: 2026-01-13T23:37:03.232987Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4578.0.0' Jan 13 23:37:03.240361 waagent[2208]: 2026-01-13T23:37:03.240307Z INFO Daemon Daemon Using waagent for provisioning Jan 13 23:37:03.244272 waagent[2208]: 2026-01-13T23:37:03.244233Z INFO Daemon Daemon Activate resource disk Jan 13 23:37:03.247668 waagent[2208]: 2026-01-13T23:37:03.247632Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 13 23:37:03.256656 waagent[2208]: 2026-01-13T23:37:03.256614Z INFO Daemon Daemon Found device: None Jan 13 23:37:03.260096 waagent[2208]: 2026-01-13T23:37:03.260057Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 13 23:37:03.266484 waagent[2208]: 2026-01-13T23:37:03.266443Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 13 23:37:03.274755 waagent[2208]: 2026-01-13T23:37:03.274714Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 13 23:37:03.279007 waagent[2208]: 2026-01-13T23:37:03.278971Z INFO Daemon Daemon Running default provisioning handler Jan 13 23:37:03.288890 waagent[2208]: 2026-01-13T23:37:03.288843Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 13 23:37:03.298987 waagent[2208]: 2026-01-13T23:37:03.298939Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 13 23:37:03.305983 waagent[2208]: 2026-01-13T23:37:03.305934Z INFO Daemon Daemon cloud-init is enabled: False Jan 13 23:37:03.310348 waagent[2208]: 2026-01-13T23:37:03.309617Z INFO Daemon Daemon Copying ovf-env.xml Jan 13 23:37:03.405103 waagent[2208]: 2026-01-13T23:37:03.405024Z INFO Daemon Daemon Successfully mounted dvd Jan 13 23:37:03.433469 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 13 23:37:03.436205 waagent[2208]: 2026-01-13T23:37:03.436142Z INFO Daemon Daemon Detect protocol endpoint Jan 13 23:37:03.440225 waagent[2208]: 2026-01-13T23:37:03.440183Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 13 23:37:03.444341 waagent[2208]: 2026-01-13T23:37:03.444301Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 13 23:37:03.449100 waagent[2208]: 2026-01-13T23:37:03.449070Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 13 23:37:03.452919 waagent[2208]: 2026-01-13T23:37:03.452885Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 13 23:37:03.456742 waagent[2208]: 2026-01-13T23:37:03.456714Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 13 23:37:03.534248 waagent[2208]: 2026-01-13T23:37:03.534148Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 13 23:37:03.539027 waagent[2208]: 2026-01-13T23:37:03.539004Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 13 23:37:03.542785 waagent[2208]: 2026-01-13T23:37:03.542754Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 13 23:37:03.633431 waagent[2208]: 2026-01-13T23:37:03.633356Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 13 23:37:03.638273 waagent[2208]: 2026-01-13T23:37:03.638233Z INFO Daemon Daemon Forcing an update of the goal state. Jan 13 23:37:03.647812 waagent[2208]: 2026-01-13T23:37:03.647771Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 13 23:37:03.664923 waagent[2208]: 2026-01-13T23:37:03.664881Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 13 23:37:03.669205 waagent[2208]: 2026-01-13T23:37:03.669171Z INFO Daemon Jan 13 23:37:03.671300 waagent[2208]: 2026-01-13T23:37:03.671269Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: c6dccf96-3330-4858-803b-ff26d05af6e8 eTag: 8578293317137339864 source: Fabric] Jan 13 23:37:03.679690 waagent[2208]: 2026-01-13T23:37:03.679656Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 13 23:37:03.684553 waagent[2208]: 2026-01-13T23:37:03.684522Z INFO Daemon Jan 13 23:37:03.686625 waagent[2208]: 2026-01-13T23:37:03.686595Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 13 23:37:03.696401 waagent[2208]: 2026-01-13T23:37:03.696371Z INFO Daemon Daemon Downloading artifacts profile blob Jan 13 23:37:03.754804 waagent[2208]: 2026-01-13T23:37:03.754730Z INFO Daemon Downloaded certificate {'thumbprint': '8F1E1B2E81D02FB50E4EA7A28D855E5F2AF69A4C', 'hasPrivateKey': True} Jan 13 23:37:03.762297 waagent[2208]: 2026-01-13T23:37:03.762255Z INFO Daemon Fetch goal state completed Jan 13 23:37:03.772225 waagent[2208]: 2026-01-13T23:37:03.772192Z INFO Daemon Daemon Starting provisioning Jan 13 23:37:03.776256 waagent[2208]: 2026-01-13T23:37:03.776220Z INFO Daemon Daemon Handle ovf-env.xml. Jan 13 23:37:03.779739 waagent[2208]: 2026-01-13T23:37:03.779712Z INFO Daemon Daemon Set hostname [ci-4578.0.0-p-c34b1ae5c8] Jan 13 23:37:03.800449 waagent[2208]: 2026-01-13T23:37:03.800403Z INFO Daemon Daemon Publish hostname [ci-4578.0.0-p-c34b1ae5c8] Jan 13 23:37:03.805229 waagent[2208]: 2026-01-13T23:37:03.805188Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 13 23:37:03.809885 waagent[2208]: 2026-01-13T23:37:03.809851Z INFO Daemon Daemon Primary interface is [eth0] Jan 13 23:37:03.819702 systemd-networkd[1637]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:37:03.819714 systemd-networkd[1637]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. 
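168.63.129.16 is the fixed Azure WireServer address that waagent probes above; "Test for route" corresponds to an ordinary route lookup, and the protocol detection that follows is plain HTTP against that address. Assuming the standard endpoint (the exact query paths belong to the wire protocol and may vary between agent versions), the same checks can be approximated from a shell:

    # Confirm there is a route to the WireServer address.
    ip route get 168.63.129.16
    # Ask the WireServer which protocol versions it supports; this is the
    # query behind the "Fabric preferred wire protocol version" lines above.
    curl -s 'http://168.63.129.16/?comp=versions'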
Jan 13 23:37:03.819829 systemd-networkd[1637]: eth0: DHCP lease lost Jan 13 23:37:03.835393 waagent[2208]: 2026-01-13T23:37:03.835313Z INFO Daemon Daemon Create user account if not exists Jan 13 23:37:03.839607 waagent[2208]: 2026-01-13T23:37:03.839558Z INFO Daemon Daemon User core already exists, skip useradd Jan 13 23:37:03.843949 waagent[2208]: 2026-01-13T23:37:03.843905Z INFO Daemon Daemon Configure sudoer Jan 13 23:37:03.845374 systemd-networkd[1637]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 13 23:37:03.852320 waagent[2208]: 2026-01-13T23:37:03.852268Z INFO Daemon Daemon Configure sshd Jan 13 23:37:03.858913 waagent[2208]: 2026-01-13T23:37:03.858867Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 13 23:37:03.868430 waagent[2208]: 2026-01-13T23:37:03.868392Z INFO Daemon Daemon Deploy ssh public key. Jan 13 23:37:04.973496 waagent[2208]: 2026-01-13T23:37:04.973447Z INFO Daemon Daemon Provisioning complete Jan 13 23:37:04.987440 waagent[2208]: 2026-01-13T23:37:04.987400Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 13 23:37:04.992325 waagent[2208]: 2026-01-13T23:37:04.992290Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 13 23:37:04.999908 waagent[2208]: 2026-01-13T23:37:04.999777Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 13 23:37:05.113027 waagent[2292]: 2026-01-13T23:37:05.112523Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 13 23:37:05.113027 waagent[2292]: 2026-01-13T23:37:05.112669Z INFO ExtHandler ExtHandler OS: flatcar 4578.0.0 Jan 13 23:37:05.113027 waagent[2292]: 2026-01-13T23:37:05.112715Z INFO ExtHandler ExtHandler Python: 3.12.11 Jan 13 23:37:05.113027 waagent[2292]: 2026-01-13T23:37:05.112750Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 13 23:37:05.179536 waagent[2292]: 2026-01-13T23:37:05.179469Z INFO ExtHandler ExtHandler Distro: flatcar-4578.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.12.11; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 13 23:37:05.179882 waagent[2292]: 2026-01-13T23:37:05.179850Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 13 23:37:05.180002 waagent[2292]: 2026-01-13T23:37:05.179981Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 13 23:37:05.186669 waagent[2292]: 2026-01-13T23:37:05.186619Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 13 23:37:05.195579 waagent[2292]: 2026-01-13T23:37:05.195534Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 13 23:37:05.196168 waagent[2292]: 2026-01-13T23:37:05.196130Z INFO ExtHandler Jan 13 23:37:05.196367 waagent[2292]: 2026-01-13T23:37:05.196308Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 38ffbdcc-018d-4e06-a40d-38a9e1942690 eTag: 8578293317137339864 source: Fabric] Jan 13 23:37:05.196715 waagent[2292]: 2026-01-13T23:37:05.196680Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 13 23:37:05.197280 waagent[2292]: 2026-01-13T23:37:05.197242Z INFO ExtHandler Jan 13 23:37:05.197456 waagent[2292]: 2026-01-13T23:37:05.197426Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 13 23:37:05.201317 waagent[2292]: 2026-01-13T23:37:05.201284Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 13 23:37:05.258458 waagent[2292]: 2026-01-13T23:37:05.258316Z INFO ExtHandler Downloaded certificate {'thumbprint': '8F1E1B2E81D02FB50E4EA7A28D855E5F2AF69A4C', 'hasPrivateKey': True} Jan 13 23:37:05.259024 waagent[2292]: 2026-01-13T23:37:05.258983Z INFO ExtHandler Fetch goal state completed Jan 13 23:37:05.271133 waagent[2292]: 2026-01-13T23:37:05.271091Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 13 23:37:05.275020 waagent[2292]: 2026-01-13T23:37:05.274971Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2292 Jan 13 23:37:05.275240 waagent[2292]: 2026-01-13T23:37:05.275209Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 13 23:37:05.275652 waagent[2292]: 2026-01-13T23:37:05.275617Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 13 23:37:05.276972 waagent[2292]: 2026-01-13T23:37:05.276931Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4578.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 13 23:37:05.277433 waagent[2292]: 2026-01-13T23:37:05.277395Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4578.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 13 23:37:05.277651 waagent[2292]: 2026-01-13T23:37:05.277620Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 13 23:37:05.278197 waagent[2292]: 2026-01-13T23:37:05.278160Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 13 23:37:05.315142 waagent[2292]: 2026-01-13T23:37:05.315095Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 13 23:37:05.315344 waagent[2292]: 2026-01-13T23:37:05.315310Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 13 23:37:05.320933 waagent[2292]: 2026-01-13T23:37:05.320565Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 13 23:37:05.325731 systemd[1]: Reload requested from client PID 2307 ('systemctl') (unit waagent.service)... Jan 13 23:37:05.325958 systemd[1]: Reloading... Jan 13 23:37:05.413378 zram_generator::config[2355]: No configuration found. Jan 13 23:37:05.564696 systemd[1]: Reloading finished in 238 ms. Jan 13 23:37:05.589604 waagent[2292]: 2026-01-13T23:37:05.588578Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 13 23:37:05.589604 waagent[2292]: 2026-01-13T23:37:05.588721Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 13 23:37:06.539380 waagent[2292]: 2026-01-13T23:37:06.539246Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 13 23:37:06.539706 waagent[2292]: 2026-01-13T23:37:06.539603Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 13 23:37:06.540322 waagent[2292]: 2026-01-13T23:37:06.540276Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 13 23:37:06.540645 waagent[2292]: 2026-01-13T23:37:06.540564Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 13 23:37:06.541365 waagent[2292]: 2026-01-13T23:37:06.540824Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 13 23:37:06.541365 waagent[2292]: 2026-01-13T23:37:06.540906Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 13 23:37:06.541365 waagent[2292]: 2026-01-13T23:37:06.541088Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 13 23:37:06.541365 waagent[2292]: 2026-01-13T23:37:06.541236Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 13 23:37:06.541365 waagent[2292]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 13 23:37:06.541365 waagent[2292]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 13 23:37:06.541365 waagent[2292]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 13 23:37:06.541365 waagent[2292]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 13 23:37:06.541365 waagent[2292]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 13 23:37:06.541365 waagent[2292]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 13 23:37:06.541642 waagent[2292]: 2026-01-13T23:37:06.541609Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 13 23:37:06.541906 waagent[2292]: 2026-01-13T23:37:06.541773Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 13 23:37:06.542048 waagent[2292]: 2026-01-13T23:37:06.542009Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 13 23:37:06.542184 waagent[2292]: 2026-01-13T23:37:06.542158Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 13 23:37:06.542278 waagent[2292]: 2026-01-13T23:37:06.542249Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 13 23:37:06.542424 waagent[2292]: 2026-01-13T23:37:06.542382Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 13 23:37:06.542642 waagent[2292]: 2026-01-13T23:37:06.542611Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 13 23:37:06.543227 waagent[2292]: 2026-01-13T23:37:06.543190Z INFO EnvHandler ExtHandler Configure routes Jan 13 23:37:06.543910 waagent[2292]: 2026-01-13T23:37:06.543874Z INFO EnvHandler ExtHandler Gateway:None Jan 13 23:37:06.544223 waagent[2292]: 2026-01-13T23:37:06.544192Z INFO EnvHandler ExtHandler Routes:None Jan 13 23:37:06.549135 waagent[2292]: 2026-01-13T23:37:06.549075Z INFO ExtHandler ExtHandler Jan 13 23:37:06.549436 waagent[2292]: 2026-01-13T23:37:06.549389Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f2e96d73-ca3c-444f-b39b-f8d12345138b correlation 40bc0c70-28fd-4924-8a4b-fb0cc8fd3ce1 created: 2026-01-13T23:36:10.020119Z] Jan 13 23:37:06.550375 waagent[2292]: 2026-01-13T23:37:06.549767Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
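The routing table that MonitorHandler dumps above comes straight from /proc/net/route, where each address is little-endian hex: the gateway field 0114C80A is 10.200.20.1 and the destination 10813FA8 is 168.63.129.16, consistent with the DHCP lease and the WireServer traffic elsewhere in the log. Decoding a field by hand is just a byte reversal:

    # /proc/net/route stores IPv4 addresses as little-endian hex.
    # 0114C80A -> bytes 01 14 C8 0A -> reversed -> 0A.C8.14.01
    printf '%d.%d.%d.%d\n' 0x0A 0xC8 0x14 0x01   # gateway: 10.200.20.1
    # 10813FA8 -> reversed -> A8.3F.81.10
    printf '%d.%d.%d.%d\n' 0xA8 0x3F 0x81 0x10   # WireServer: 168.63.129.16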
Jan 13 23:37:06.550375 waagent[2292]: 2026-01-13T23:37:06.550181Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 13 23:37:06.589361 waagent[2292]: 2026-01-13T23:37:06.588713Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 13 23:37:06.589361 waagent[2292]: Try `iptables -h' or 'iptables --help' for more information.) Jan 13 23:37:06.589361 waagent[2292]: 2026-01-13T23:37:06.589123Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5D96FBBF-02E6-4E71-B896-CF7BD5938BC2;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 13 23:37:06.635882 waagent[2292]: 2026-01-13T23:37:06.635812Z INFO MonitorHandler ExtHandler Network interfaces: Jan 13 23:37:06.635882 waagent[2292]: Executing ['ip', '-a', '-o', 'link']: Jan 13 23:37:06.635882 waagent[2292]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 13 23:37:06.635882 waagent[2292]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:d7:1a brd ff:ff:ff:ff:ff:ff\ altname enx002248bbd71a Jan 13 23:37:06.635882 waagent[2292]: 3: enP44730s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:d7:1a brd ff:ff:ff:ff:ff:ff\ altname enP44730p0s2 Jan 13 23:37:06.635882 waagent[2292]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 13 23:37:06.635882 waagent[2292]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 13 23:37:06.635882 waagent[2292]: 2: eth0 inet 10.200.20.17/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 13 23:37:06.635882 waagent[2292]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 13 23:37:06.635882 waagent[2292]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 13 23:37:06.635882 waagent[2292]: 2: eth0 inet6 fe80::222:48ff:febb:d71a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 13 23:37:06.681020 waagent[2292]: 2026-01-13T23:37:06.680952Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 13 23:37:06.681020 waagent[2292]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 13 23:37:06.681020 waagent[2292]: pkts bytes target prot opt in out source destination Jan 13 23:37:06.681020 waagent[2292]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 13 23:37:06.681020 waagent[2292]: pkts bytes target prot opt in out source destination Jan 13 23:37:06.681020 waagent[2292]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 13 23:37:06.681020 waagent[2292]: pkts bytes target prot opt in out source destination Jan 13 23:37:06.681020 waagent[2292]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 13 23:37:06.681020 waagent[2292]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 13 23:37:06.681020 waagent[2292]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 13 23:37:06.683497 waagent[2292]: 2026-01-13T23:37:06.683449Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 13 23:37:06.683497 waagent[2292]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 13 23:37:06.683497 waagent[2292]: pkts bytes target prot opt in 
out source destination Jan 13 23:37:06.683497 waagent[2292]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 13 23:37:06.683497 waagent[2292]: pkts bytes target prot opt in out source destination Jan 13 23:37:06.683497 waagent[2292]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 13 23:37:06.683497 waagent[2292]: pkts bytes target prot opt in out source destination Jan 13 23:37:06.683497 waagent[2292]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 13 23:37:06.683497 waagent[2292]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 13 23:37:06.683497 waagent[2292]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 13 23:37:06.683714 waagent[2292]: 2026-01-13T23:37:06.683684Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 13 23:37:12.423655 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 23:37:12.425476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:37:12.691665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:37:12.695420 (kubelet)[2444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:37:12.721550 kubelet[2444]: E0113 23:37:12.721493 2444 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:37:12.724494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:37:12.724609 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:37:12.725153 systemd[1]: kubelet.service: Consumed 113ms CPU time, 104.6M memory peak. Jan 13 23:37:22.924097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 23:37:22.925536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:37:23.292746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:37:23.296595 (kubelet)[2458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:37:23.328149 kubelet[2458]: E0113 23:37:23.328092 2458 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:37:23.330271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:37:23.330407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:37:23.330767 systemd[1]: kubelet.service: Consumed 109ms CPU time, 105.6M memory peak. Jan 13 23:37:24.630772 chronyd[2003]: Selected source PHC0 Jan 13 23:37:26.452765 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 23:37:26.453792 systemd[1]: Started sshd@0-10.200.20.17:22-10.200.16.10:49664.service - OpenSSH per-connection server daemon (10.200.16.10:49664). 
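The "Created firewall rules for the Azure Fabric" listing above shows the security table after waagent adds its WireServer protection rules: DNS traffic to 168.63.129.16 is accepted, root-owned (UID 0) traffic is accepted, and any other new connection to that address is dropped. A ruleset with the same effect can be written as plain iptables commands, shown here only to make the listed entries readable, not as the agent's exact invocation:

    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
    # Inspect the resulting chain with counters.
    iptables -w -t security -L OUTPUT -nxv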
Jan 13 23:37:27.078237 sshd[2466]: Accepted publickey for core from 10.200.16.10 port 49664 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:37:27.079052 sshd-session[2466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:37:27.082650 systemd-logind[2030]: New session 4 of user core. Jan 13 23:37:27.088472 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 23:37:27.398214 systemd[1]: Started sshd@1-10.200.20.17:22-10.200.16.10:49676.service - OpenSSH per-connection server daemon (10.200.16.10:49676). Jan 13 23:37:27.816750 sshd[2473]: Accepted publickey for core from 10.200.16.10 port 49676 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:37:27.817902 sshd-session[2473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:37:27.822166 systemd-logind[2030]: New session 5 of user core. Jan 13 23:37:27.828481 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 23:37:28.051632 sshd[2477]: Connection closed by 10.200.16.10 port 49676 Jan 13 23:37:28.052171 sshd-session[2473]: pam_unix(sshd:session): session closed for user core Jan 13 23:37:28.055509 systemd[1]: sshd@1-10.200.20.17:22-10.200.16.10:49676.service: Deactivated successfully. Jan 13 23:37:28.056993 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 23:37:28.057635 systemd-logind[2030]: Session 5 logged out. Waiting for processes to exit. Jan 13 23:37:28.058799 systemd-logind[2030]: Removed session 5. Jan 13 23:37:28.139188 systemd[1]: Started sshd@2-10.200.20.17:22-10.200.16.10:49682.service - OpenSSH per-connection server daemon (10.200.16.10:49682). Jan 13 23:37:28.530423 sshd[2483]: Accepted publickey for core from 10.200.16.10 port 49682 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:37:28.531551 sshd-session[2483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:37:28.536338 systemd-logind[2030]: New session 6 of user core. Jan 13 23:37:28.541519 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 23:37:28.745590 sshd[2487]: Connection closed by 10.200.16.10 port 49682 Jan 13 23:37:28.746137 sshd-session[2483]: pam_unix(sshd:session): session closed for user core Jan 13 23:37:28.750216 systemd[1]: sshd@2-10.200.20.17:22-10.200.16.10:49682.service: Deactivated successfully. Jan 13 23:37:28.751755 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 23:37:28.752316 systemd-logind[2030]: Session 6 logged out. Waiting for processes to exit. Jan 13 23:37:28.753472 systemd-logind[2030]: Removed session 6. Jan 13 23:37:28.846171 systemd[1]: Started sshd@3-10.200.20.17:22-10.200.16.10:49690.service - OpenSSH per-connection server daemon (10.200.16.10:49690). Jan 13 23:37:29.266292 sshd[2493]: Accepted publickey for core from 10.200.16.10 port 49690 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:37:29.267459 sshd-session[2493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:37:29.271258 systemd-logind[2030]: New session 7 of user core. Jan 13 23:37:29.278692 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 23:37:29.501959 sshd[2497]: Connection closed by 10.200.16.10 port 49690 Jan 13 23:37:29.502523 sshd-session[2493]: pam_unix(sshd:session): session closed for user core Jan 13 23:37:29.506826 systemd[1]: sshd@3-10.200.20.17:22-10.200.16.10:49690.service: Deactivated successfully. 
Jan 13 23:37:29.508567 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 23:37:29.509983 systemd-logind[2030]: Session 7 logged out. Waiting for processes to exit. Jan 13 23:37:29.510915 systemd-logind[2030]: Removed session 7. Jan 13 23:37:29.589584 systemd[1]: Started sshd@4-10.200.20.17:22-10.200.16.10:58928.service - OpenSSH per-connection server daemon (10.200.16.10:58928). Jan 13 23:37:29.975698 sshd[2503]: Accepted publickey for core from 10.200.16.10 port 58928 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:37:29.976840 sshd-session[2503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:37:29.981067 systemd-logind[2030]: New session 8 of user core. Jan 13 23:37:29.987483 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 23:37:30.225204 sudo[2508]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 23:37:30.225494 sudo[2508]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:37:31.393904 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 23:37:31.403582 (dockerd)[2526]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 23:37:32.413365 dockerd[2526]: time="2026-01-13T23:37:32.412741636Z" level=info msg="Starting up" Jan 13 23:37:32.414926 dockerd[2526]: time="2026-01-13T23:37:32.414894572Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 13 23:37:32.423294 dockerd[2526]: time="2026-01-13T23:37:32.423185532Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 13 23:37:32.452455 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3818948565-merged.mount: Deactivated successfully. Jan 13 23:37:32.542192 dockerd[2526]: time="2026-01-13T23:37:32.542140020Z" level=info msg="Loading containers: start." Jan 13 23:37:32.587372 kernel: Initializing XFRM netlink socket Jan 13 23:37:32.860035 systemd-networkd[1637]: docker0: Link UP Jan 13 23:37:32.875884 dockerd[2526]: time="2026-01-13T23:37:32.875836108Z" level=info msg="Loading containers: done." Jan 13 23:37:32.894750 dockerd[2526]: time="2026-01-13T23:37:32.894702052Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 23:37:32.894914 dockerd[2526]: time="2026-01-13T23:37:32.894786559Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 13 23:37:32.894914 dockerd[2526]: time="2026-01-13T23:37:32.894884622Z" level=info msg="Initializing buildkit" Jan 13 23:37:32.938718 dockerd[2526]: time="2026-01-13T23:37:32.938669967Z" level=info msg="Completed buildkit initialization" Jan 13 23:37:32.943685 dockerd[2526]: time="2026-01-13T23:37:32.943635142Z" level=info msg="Daemon has completed initialization" Jan 13 23:37:32.944469 dockerd[2526]: time="2026-01-13T23:37:32.943919896Z" level=info msg="API listen on /run/docker.sock" Jan 13 23:37:32.944391 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 23:37:33.423506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 13 23:37:33.425068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:37:33.451072 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2434086279-merged.mount: Deactivated successfully. Jan 13 23:37:33.530696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:37:33.538551 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:37:33.640312 kubelet[2739]: E0113 23:37:33.640235 2739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:37:33.642063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:37:33.642173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:37:33.642746 systemd[1]: kubelet.service: Consumed 108ms CPU time, 105.2M memory peak. Jan 13 23:37:33.704115 containerd[2052]: time="2026-01-13T23:37:33.704007107Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 13 23:37:34.910484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219648243.mount: Deactivated successfully. Jan 13 23:37:35.869300 containerd[2052]: time="2026-01-13T23:37:35.869242161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:35.873261 containerd[2052]: time="2026-01-13T23:37:35.873220695Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24845980" Jan 13 23:37:35.876664 containerd[2052]: time="2026-01-13T23:37:35.876633570Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:35.881193 containerd[2052]: time="2026-01-13T23:37:35.881157969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:35.882228 containerd[2052]: time="2026-01-13T23:37:35.882199665Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.178153269s" Jan 13 23:37:35.882251 containerd[2052]: time="2026-01-13T23:37:35.882244259Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 13 23:37:35.882991 containerd[2052]: time="2026-01-13T23:37:35.882972194Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 13 23:37:36.702103 waagent[2292]: 2026-01-13T23:37:36.701465Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 13 23:37:36.711529 waagent[2292]: 2026-01-13T23:37:36.711491Z INFO ExtHandler Jan 13 23:37:36.711761 waagent[2292]: 
2026-01-13T23:37:36.711735Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d4c56026-8fa8-4b4e-a863-cb85055f7ace eTag: 16149077043800715004 source: Fabric] Jan 13 23:37:36.712255 waagent[2292]: 2026-01-13T23:37:36.712208Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 13 23:37:36.713072 waagent[2292]: 2026-01-13T23:37:36.713036Z INFO ExtHandler Jan 13 23:37:36.713305 waagent[2292]: 2026-01-13T23:37:36.713201Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 13 23:37:36.768520 waagent[2292]: 2026-01-13T23:37:36.768468Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 13 23:37:36.821751 waagent[2292]: 2026-01-13T23:37:36.821678Z INFO ExtHandler Downloaded certificate {'thumbprint': '8F1E1B2E81D02FB50E4EA7A28D855E5F2AF69A4C', 'hasPrivateKey': True} Jan 13 23:37:36.822400 waagent[2292]: 2026-01-13T23:37:36.822328Z INFO ExtHandler Fetch goal state completed Jan 13 23:37:36.822877 waagent[2292]: 2026-01-13T23:37:36.822844Z INFO ExtHandler ExtHandler Jan 13 23:37:36.823046 waagent[2292]: 2026-01-13T23:37:36.822996Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: c66a33ea-6ceb-4b93-8845-542954c4e5dd correlation 40bc0c70-28fd-4924-8a4b-fb0cc8fd3ce1 created: 2026-01-13T23:37:30.364189Z] Jan 13 23:37:36.823471 waagent[2292]: 2026-01-13T23:37:36.823439Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 13 23:37:36.823973 waagent[2292]: 2026-01-13T23:37:36.823942Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Jan 13 23:37:37.407130 containerd[2052]: time="2026-01-13T23:37:37.407073840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:37.410894 containerd[2052]: time="2026-01-13T23:37:37.410861047Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 13 23:37:37.414157 containerd[2052]: time="2026-01-13T23:37:37.414127062Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:37.419304 containerd[2052]: time="2026-01-13T23:37:37.419271512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:37.419831 containerd[2052]: time="2026-01-13T23:37:37.419695221Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.536696747s" Jan 13 23:37:37.419831 containerd[2052]: time="2026-01-13T23:37:37.419722830Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 13 23:37:37.420726 containerd[2052]: time="2026-01-13T23:37:37.420709045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 13 
23:37:38.698369 containerd[2052]: time="2026-01-13T23:37:38.697855786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:38.701365 containerd[2052]: time="2026-01-13T23:37:38.701322247Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 13 23:37:38.704639 containerd[2052]: time="2026-01-13T23:37:38.704617711Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:38.708623 containerd[2052]: time="2026-01-13T23:37:38.708583884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:38.709303 containerd[2052]: time="2026-01-13T23:37:38.709153094Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.288385183s" Jan 13 23:37:38.709303 containerd[2052]: time="2026-01-13T23:37:38.709185823Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 13 23:37:38.709673 containerd[2052]: time="2026-01-13T23:37:38.709648070Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 13 23:37:40.290834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540165738.mount: Deactivated successfully. 
Jan 13 23:37:40.546566 containerd[2052]: time="2026-01-13T23:37:40.546442540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:40.549382 containerd[2052]: time="2026-01-13T23:37:40.549342535Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27556362" Jan 13 23:37:40.552718 containerd[2052]: time="2026-01-13T23:37:40.552679601Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:40.556418 containerd[2052]: time="2026-01-13T23:37:40.556375373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:40.556914 containerd[2052]: time="2026-01-13T23:37:40.556646086Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.846911829s" Jan 13 23:37:40.556914 containerd[2052]: time="2026-01-13T23:37:40.556677431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 13 23:37:40.557112 containerd[2052]: time="2026-01-13T23:37:40.557087619Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 13 23:37:41.252586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290966360.mount: Deactivated successfully. 
Jan 13 23:37:41.997384 containerd[2052]: time="2026-01-13T23:37:41.996933933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:41.999837 containerd[2052]: time="2026-01-13T23:37:41.999755390Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282" Jan 13 23:37:42.002973 containerd[2052]: time="2026-01-13T23:37:42.002912031Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:42.007141 containerd[2052]: time="2026-01-13T23:37:42.007096403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:42.007743 containerd[2052]: time="2026-01-13T23:37:42.007714625Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.450598348s" Jan 13 23:37:42.007743 containerd[2052]: time="2026-01-13T23:37:42.007744994Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 13 23:37:42.008366 containerd[2052]: time="2026-01-13T23:37:42.008224337Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 23:37:42.330530 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 13 23:37:42.555225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3920019549.mount: Deactivated successfully. 
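The PullImage/ImageCreate pairs above are containerd's CRI image service fetching what looks like the kubeadm image set for Kubernetes v1.32 (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd) into its k8s.io namespace ahead of the kubelet being configured. Pulling an equivalent image by hand, assuming the default socket paths, would be:

    # Directly through containerd, into the namespace the CRI plugin uses.
    ctr -n k8s.io images pull registry.k8s.io/pause:3.10
    # Or through the CRI endpoint, the same path the kubelet takes.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/coredns/coredns:v1.11.3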
Jan 13 23:37:42.576833 containerd[2052]: time="2026-01-13T23:37:42.576308951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:37:42.580278 containerd[2052]: time="2026-01-13T23:37:42.580226341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=886" Jan 13 23:37:42.583557 containerd[2052]: time="2026-01-13T23:37:42.583524966Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:37:42.588751 containerd[2052]: time="2026-01-13T23:37:42.588714986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:37:42.589230 containerd[2052]: time="2026-01-13T23:37:42.589115989Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 580.865955ms" Jan 13 23:37:42.589426 containerd[2052]: time="2026-01-13T23:37:42.589311959Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 13 23:37:42.590195 containerd[2052]: time="2026-01-13T23:37:42.590165960Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 13 23:37:43.563259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1711107200.mount: Deactivated successfully. Jan 13 23:37:43.673756 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 13 23:37:43.677535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:37:44.891811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:37:44.897554 (kubelet)[2902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:37:44.925224 kubelet[2902]: E0113 23:37:44.925145 2902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:37:44.926947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:37:44.927067 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:37:44.927462 systemd[1]: kubelet.service: Consumed 114ms CPU time, 105M memory peak. 
Jan 13 23:37:45.853096 containerd[2052]: time="2026-01-13T23:37:45.853047342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:45.921106 containerd[2052]: time="2026-01-13T23:37:45.921043345Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56456774" Jan 13 23:37:45.924807 containerd[2052]: time="2026-01-13T23:37:45.924769942Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:45.930126 containerd[2052]: time="2026-01-13T23:37:45.929883618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:37:45.930799 containerd[2052]: time="2026-01-13T23:37:45.930765240Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.340568135s" Jan 13 23:37:45.930799 containerd[2052]: time="2026-01-13T23:37:45.930795793Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 13 23:37:46.248095 update_engine[2033]: I20260113 23:37:46.248018 2033 update_attempter.cc:509] Updating boot flags... Jan 13 23:37:48.816155 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:37:48.816484 systemd[1]: kubelet.service: Consumed 114ms CPU time, 105M memory peak. Jan 13 23:37:48.819528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:37:48.839448 systemd[1]: Reload requested from client PID 3041 ('systemctl') (unit session-8.scope)... Jan 13 23:37:48.839458 systemd[1]: Reloading... Jan 13 23:37:48.929465 zram_generator::config[3093]: No configuration found. Jan 13 23:37:49.091540 systemd[1]: Reloading finished in 251 ms. Jan 13 23:37:49.126158 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 23:37:49.126236 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 23:37:49.126666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:37:49.128142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:37:49.288574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:37:49.301583 (kubelet)[3154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 23:37:49.328322 kubelet[3154]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:37:49.328322 kubelet[3154]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 13 23:37:49.328322 kubelet[3154]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:37:49.328701 kubelet[3154]: I0113 23:37:49.328406 3154 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 23:37:49.770699 kubelet[3154]: I0113 23:37:49.770658 3154 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 13 23:37:49.770699 kubelet[3154]: I0113 23:37:49.770690 3154 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 23:37:49.770926 kubelet[3154]: I0113 23:37:49.770908 3154 server.go:954] "Client rotation is on, will bootstrap in background" Jan 13 23:37:49.791723 kubelet[3154]: E0113 23:37:49.791683 3154 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:37:49.793155 kubelet[3154]: I0113 23:37:49.792975 3154 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 23:37:49.798079 kubelet[3154]: I0113 23:37:49.797925 3154 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 13 23:37:49.801480 kubelet[3154]: I0113 23:37:49.801456 3154 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 23:37:49.802185 kubelet[3154]: I0113 23:37:49.802145 3154 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 23:37:49.802371 kubelet[3154]: I0113 23:37:49.802179 3154 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578.0.0-p-c34b1ae5c8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 23:37:49.802466 kubelet[3154]: I0113 23:37:49.802379 3154 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 23:37:49.802466 kubelet[3154]: I0113 23:37:49.802387 3154 container_manager_linux.go:304] "Creating device plugin manager" Jan 13 23:37:49.803364 kubelet[3154]: I0113 23:37:49.802509 3154 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:37:49.805925 kubelet[3154]: I0113 23:37:49.805710 3154 kubelet.go:446] "Attempting to sync node with API server" Jan 13 23:37:49.805925 kubelet[3154]: I0113 23:37:49.805735 3154 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 23:37:49.805925 kubelet[3154]: I0113 23:37:49.805754 3154 kubelet.go:352] "Adding apiserver pod source" Jan 13 23:37:49.805925 kubelet[3154]: I0113 23:37:49.805765 3154 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 23:37:49.808424 kubelet[3154]: W0113 23:37:49.808393 3154 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Jan 13 23:37:49.808529 kubelet[3154]: E0113 23:37:49.808514 3154 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:37:49.808649 kubelet[3154]: W0113 
23:37:49.808628 3154 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578.0.0-p-c34b1ae5c8&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Jan 13 23:37:49.808726 kubelet[3154]: E0113 23:37:49.808713 3154 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578.0.0-p-c34b1ae5c8&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:37:49.808822 kubelet[3154]: I0113 23:37:49.808810 3154 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 13 23:37:49.809167 kubelet[3154]: I0113 23:37:49.809153 3154 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 23:37:49.809263 kubelet[3154]: W0113 23:37:49.809255 3154 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 23:37:49.810359 kubelet[3154]: I0113 23:37:49.810274 3154 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 13 23:37:49.810359 kubelet[3154]: I0113 23:37:49.810300 3154 server.go:1287] "Started kubelet" Jan 13 23:37:49.813773 kubelet[3154]: I0113 23:37:49.813617 3154 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 23:37:49.814948 kubelet[3154]: E0113 23:37:49.814863 3154 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4578.0.0-p-c34b1ae5c8.188a6ea79980bf0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4578.0.0-p-c34b1ae5c8,UID:ci-4578.0.0-p-c34b1ae5c8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4578.0.0-p-c34b1ae5c8,},FirstTimestamp:2026-01-13 23:37:49.810286348 +0000 UTC m=+0.506182545,LastTimestamp:2026-01-13 23:37:49.810286348 +0000 UTC m=+0.506182545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578.0.0-p-c34b1ae5c8,}" Jan 13 23:37:49.816187 kubelet[3154]: I0113 23:37:49.816157 3154 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 23:37:49.817355 kubelet[3154]: I0113 23:37:49.816843 3154 server.go:479] "Adding debug handlers to kubelet server" Jan 13 23:37:49.818074 kubelet[3154]: I0113 23:37:49.818029 3154 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 23:37:49.818347 kubelet[3154]: I0113 23:37:49.818316 3154 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 23:37:49.818631 kubelet[3154]: I0113 23:37:49.818612 3154 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 23:37:49.818953 kubelet[3154]: I0113 23:37:49.818925 3154 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 13 23:37:49.819464 kubelet[3154]: E0113 23:37:49.819439 3154 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4578.0.0-p-c34b1ae5c8\" not found" Jan 13 23:37:49.820080 kubelet[3154]: E0113 23:37:49.820049 3154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-c34b1ae5c8?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="200ms" Jan 13 23:37:49.820496 kubelet[3154]: I0113 23:37:49.820479 3154 factory.go:221] Registration of the systemd container factory successfully Jan 13 23:37:49.820676 kubelet[3154]: I0113 23:37:49.820660 3154 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 23:37:49.821603 kubelet[3154]: I0113 23:37:49.821589 3154 factory.go:221] Registration of the containerd container factory successfully Jan 13 23:37:49.821810 kubelet[3154]: I0113 23:37:49.821784 3154 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 13 23:37:49.821856 kubelet[3154]: I0113 23:37:49.821834 3154 reconciler.go:26] "Reconciler: start to sync state" Jan 13 23:37:49.825048 kubelet[3154]: E0113 23:37:49.825027 3154 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 23:37:49.829647 kubelet[3154]: I0113 23:37:49.829603 3154 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 23:37:49.830423 kubelet[3154]: I0113 23:37:49.830397 3154 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 23:37:49.830423 kubelet[3154]: I0113 23:37:49.830415 3154 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 13 23:37:49.830423 kubelet[3154]: I0113 23:37:49.830428 3154 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 13 23:37:49.830524 kubelet[3154]: I0113 23:37:49.830433 3154 kubelet.go:2382] "Starting kubelet main sync loop" Jan 13 23:37:49.830524 kubelet[3154]: E0113 23:37:49.830464 3154 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 23:37:49.835796 kubelet[3154]: W0113 23:37:49.835745 3154 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Jan 13 23:37:49.835796 kubelet[3154]: E0113 23:37:49.835783 3154 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:37:49.835922 kubelet[3154]: W0113 23:37:49.835835 3154 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Jan 13 23:37:49.835922 kubelet[3154]: E0113 23:37:49.835852 3154 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:37:49.851386 kubelet[3154]: I0113 23:37:49.851315 3154 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 13 23:37:49.851386 kubelet[3154]: I0113 23:37:49.851348 3154 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 13 23:37:49.851386 kubelet[3154]: I0113 23:37:49.851366 3154 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:37:49.859838 kubelet[3154]: I0113 23:37:49.859814 3154 policy_none.go:49] "None policy: Start" Jan 13 23:37:49.859838 kubelet[3154]: I0113 23:37:49.859841 3154 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 13 23:37:49.859906 kubelet[3154]: I0113 23:37:49.859853 3154 state_mem.go:35] "Initializing new in-memory state store" Jan 13 23:37:49.867580 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 23:37:49.877453 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 23:37:49.880199 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 23:37:49.890019 kubelet[3154]: I0113 23:37:49.889994 3154 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 23:37:49.891117 kubelet[3154]: I0113 23:37:49.891103 3154 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 23:37:49.891360 kubelet[3154]: I0113 23:37:49.891315 3154 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 23:37:49.891625 kubelet[3154]: I0113 23:37:49.891609 3154 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 23:37:49.893164 kubelet[3154]: E0113 23:37:49.893151 3154 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 13 23:37:49.893272 kubelet[3154]: E0113 23:37:49.893260 3154 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4578.0.0-p-c34b1ae5c8\" not found" Jan 13 23:37:49.939063 systemd[1]: Created slice kubepods-burstable-podc0f8af149cb067c3a267de5fe8383112.slice - libcontainer container kubepods-burstable-podc0f8af149cb067c3a267de5fe8383112.slice. Jan 13 23:37:49.946897 kubelet[3154]: E0113 23:37:49.946862 3154 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c34b1ae5c8\" not found" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:49.949935 systemd[1]: Created slice kubepods-burstable-pod1283b8f133803532354dc186d9fad36d.slice - libcontainer container kubepods-burstable-pod1283b8f133803532354dc186d9fad36d.slice. Jan 13 23:37:49.951987 kubelet[3154]: E0113 23:37:49.951879 3154 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c34b1ae5c8\" not found" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:49.953471 systemd[1]: Created slice kubepods-burstable-pod3a8b1807fa0e27773748f2d3121565d5.slice - libcontainer container kubepods-burstable-pod3a8b1807fa0e27773748f2d3121565d5.slice. 
Jan 13 23:37:49.954895 kubelet[3154]: E0113 23:37:49.954872 3154 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c34b1ae5c8\" not found" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:49.993387 kubelet[3154]: I0113 23:37:49.993326 3154 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:49.993760 kubelet[3154]: E0113 23:37:49.993718 3154 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.021375 kubelet[3154]: E0113 23:37:50.021231 3154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-c34b1ae5c8?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="400ms" Jan 13 23:37:50.023420 kubelet[3154]: I0113 23:37:50.023397 3154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0f8af149cb067c3a267de5fe8383112-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"c0f8af149cb067c3a267de5fe8383112\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.023575 kubelet[3154]: I0113 23:37:50.023559 3154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1283b8f133803532354dc186d9fad36d-flexvolume-dir\") pod \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"1283b8f133803532354dc186d9fad36d\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.023719 kubelet[3154]: I0113 23:37:50.023665 3154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1283b8f133803532354dc186d9fad36d-k8s-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"1283b8f133803532354dc186d9fad36d\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.023719 kubelet[3154]: I0113 23:37:50.023690 3154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1283b8f133803532354dc186d9fad36d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"1283b8f133803532354dc186d9fad36d\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.023719 kubelet[3154]: I0113 23:37:50.023702 3154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0f8af149cb067c3a267de5fe8383112-ca-certs\") pod \"kube-apiserver-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"c0f8af149cb067c3a267de5fe8383112\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.023826 kubelet[3154]: I0113 23:37:50.023816 3154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0f8af149cb067c3a267de5fe8383112-k8s-certs\") pod \"kube-apiserver-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"c0f8af149cb067c3a267de5fe8383112\") " 
pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.023952 kubelet[3154]: I0113 23:37:50.023902 3154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1283b8f133803532354dc186d9fad36d-ca-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"1283b8f133803532354dc186d9fad36d\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.023952 kubelet[3154]: I0113 23:37:50.023917 3154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1283b8f133803532354dc186d9fad36d-kubeconfig\") pod \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"1283b8f133803532354dc186d9fad36d\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.023952 kubelet[3154]: I0113 23:37:50.023928 3154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a8b1807fa0e27773748f2d3121565d5-kubeconfig\") pod \"kube-scheduler-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"3a8b1807fa0e27773748f2d3121565d5\") " pod="kube-system/kube-scheduler-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.195867 kubelet[3154]: I0113 23:37:50.195829 3154 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.196376 kubelet[3154]: E0113 23:37:50.196347 3154 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.248755 containerd[2052]: time="2026-01-13T23:37:50.248716463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578.0.0-p-c34b1ae5c8,Uid:c0f8af149cb067c3a267de5fe8383112,Namespace:kube-system,Attempt:0,}" Jan 13 23:37:50.253605 containerd[2052]: time="2026-01-13T23:37:50.253577998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8,Uid:1283b8f133803532354dc186d9fad36d,Namespace:kube-system,Attempt:0,}" Jan 13 23:37:50.256232 containerd[2052]: time="2026-01-13T23:37:50.256184879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578.0.0-p-c34b1ae5c8,Uid:3a8b1807fa0e27773748f2d3121565d5,Namespace:kube-system,Attempt:0,}" Jan 13 23:37:50.335397 containerd[2052]: time="2026-01-13T23:37:50.335233247Z" level=info msg="connecting to shim df04d50340b042cc9193a55c727fbfdcaa606bc08145746a63fca9da92442ef3" address="unix:///run/containerd/s/c728314f5c33a0ae6d89892d7221fbcfaf1b22c57a91ba3eff6a0f6449b2b375" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:37:50.349050 containerd[2052]: time="2026-01-13T23:37:50.348951414Z" level=info msg="connecting to shim 76fae195d14301023e08d2c5894ba6c888d5c6f098b19fc0d4fa46416819ed80" address="unix:///run/containerd/s/b5ad01f59428fc4ec9bbbc928e387a76a45a1752e3fd69a9adf8c967797a7cfc" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:37:50.354588 containerd[2052]: time="2026-01-13T23:37:50.354554222Z" level=info msg="connecting to shim 49de0271217085ec3f10bd0359103811b084d99173fa5fd8eae4ce7e32e6f8c3" address="unix:///run/containerd/s/b83f3025e48f10711ab88b299d26e1812957869b74723ebfdc57c2badba69394" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:37:50.367514 systemd[1]: Started 
cri-containerd-df04d50340b042cc9193a55c727fbfdcaa606bc08145746a63fca9da92442ef3.scope - libcontainer container df04d50340b042cc9193a55c727fbfdcaa606bc08145746a63fca9da92442ef3. Jan 13 23:37:50.381517 systemd[1]: Started cri-containerd-76fae195d14301023e08d2c5894ba6c888d5c6f098b19fc0d4fa46416819ed80.scope - libcontainer container 76fae195d14301023e08d2c5894ba6c888d5c6f098b19fc0d4fa46416819ed80. Jan 13 23:37:50.389224 systemd[1]: Started cri-containerd-49de0271217085ec3f10bd0359103811b084d99173fa5fd8eae4ce7e32e6f8c3.scope - libcontainer container 49de0271217085ec3f10bd0359103811b084d99173fa5fd8eae4ce7e32e6f8c3. Jan 13 23:37:50.420018 containerd[2052]: time="2026-01-13T23:37:50.419975731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578.0.0-p-c34b1ae5c8,Uid:c0f8af149cb067c3a267de5fe8383112,Namespace:kube-system,Attempt:0,} returns sandbox id \"df04d50340b042cc9193a55c727fbfdcaa606bc08145746a63fca9da92442ef3\"" Jan 13 23:37:50.422057 kubelet[3154]: E0113 23:37:50.421992 3154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-c34b1ae5c8?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="800ms" Jan 13 23:37:50.424418 containerd[2052]: time="2026-01-13T23:37:50.424319336Z" level=info msg="CreateContainer within sandbox \"df04d50340b042cc9193a55c727fbfdcaa606bc08145746a63fca9da92442ef3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 23:37:50.442303 containerd[2052]: time="2026-01-13T23:37:50.442261703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8,Uid:1283b8f133803532354dc186d9fad36d,Namespace:kube-system,Attempt:0,} returns sandbox id \"76fae195d14301023e08d2c5894ba6c888d5c6f098b19fc0d4fa46416819ed80\"" Jan 13 23:37:50.447663 containerd[2052]: time="2026-01-13T23:37:50.447638896Z" level=info msg="CreateContainer within sandbox \"76fae195d14301023e08d2c5894ba6c888d5c6f098b19fc0d4fa46416819ed80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 23:37:50.451672 containerd[2052]: time="2026-01-13T23:37:50.451011900Z" level=info msg="Container 39bdfc5ff61133a5d942d088039b99a5b0ecb6a60deb1bd2bd5a1985e8afc287: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:37:50.454457 containerd[2052]: time="2026-01-13T23:37:50.454420753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578.0.0-p-c34b1ae5c8,Uid:3a8b1807fa0e27773748f2d3121565d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"49de0271217085ec3f10bd0359103811b084d99173fa5fd8eae4ce7e32e6f8c3\"" Jan 13 23:37:50.460940 containerd[2052]: time="2026-01-13T23:37:50.460825812Z" level=info msg="CreateContainer within sandbox \"49de0271217085ec3f10bd0359103811b084d99173fa5fd8eae4ce7e32e6f8c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 23:37:50.471689 containerd[2052]: time="2026-01-13T23:37:50.471656464Z" level=info msg="CreateContainer within sandbox \"df04d50340b042cc9193a55c727fbfdcaa606bc08145746a63fca9da92442ef3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"39bdfc5ff61133a5d942d088039b99a5b0ecb6a60deb1bd2bd5a1985e8afc287\"" Jan 13 23:37:50.472264 containerd[2052]: time="2026-01-13T23:37:50.472242388Z" level=info msg="StartContainer for \"39bdfc5ff61133a5d942d088039b99a5b0ecb6a60deb1bd2bd5a1985e8afc287\"" Jan 13 23:37:50.473603 containerd[2052]: 
time="2026-01-13T23:37:50.473559601Z" level=info msg="connecting to shim 39bdfc5ff61133a5d942d088039b99a5b0ecb6a60deb1bd2bd5a1985e8afc287" address="unix:///run/containerd/s/c728314f5c33a0ae6d89892d7221fbcfaf1b22c57a91ba3eff6a0f6449b2b375" protocol=ttrpc version=3 Jan 13 23:37:50.485592 containerd[2052]: time="2026-01-13T23:37:50.485523444Z" level=info msg="Container 1c355d48cc104fb453bd9a7737073db4c8d1f243d406bc204d7309fc1eac2625: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:37:50.488674 systemd[1]: Started cri-containerd-39bdfc5ff61133a5d942d088039b99a5b0ecb6a60deb1bd2bd5a1985e8afc287.scope - libcontainer container 39bdfc5ff61133a5d942d088039b99a5b0ecb6a60deb1bd2bd5a1985e8afc287. Jan 13 23:37:50.503167 containerd[2052]: time="2026-01-13T23:37:50.503095639Z" level=info msg="CreateContainer within sandbox \"76fae195d14301023e08d2c5894ba6c888d5c6f098b19fc0d4fa46416819ed80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1c355d48cc104fb453bd9a7737073db4c8d1f243d406bc204d7309fc1eac2625\"" Jan 13 23:37:50.503746 containerd[2052]: time="2026-01-13T23:37:50.503697691Z" level=info msg="StartContainer for \"1c355d48cc104fb453bd9a7737073db4c8d1f243d406bc204d7309fc1eac2625\"" Jan 13 23:37:50.504694 containerd[2052]: time="2026-01-13T23:37:50.504651964Z" level=info msg="connecting to shim 1c355d48cc104fb453bd9a7737073db4c8d1f243d406bc204d7309fc1eac2625" address="unix:///run/containerd/s/b5ad01f59428fc4ec9bbbc928e387a76a45a1752e3fd69a9adf8c967797a7cfc" protocol=ttrpc version=3 Jan 13 23:37:50.508084 containerd[2052]: time="2026-01-13T23:37:50.507652147Z" level=info msg="Container 9ad3fabe97e179df038856a42b4dd0f37aa99db79f8d024b51ae1e6c5196f76d: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:37:50.524510 systemd[1]: Started cri-containerd-1c355d48cc104fb453bd9a7737073db4c8d1f243d406bc204d7309fc1eac2625.scope - libcontainer container 1c355d48cc104fb453bd9a7737073db4c8d1f243d406bc204d7309fc1eac2625. Jan 13 23:37:50.534300 containerd[2052]: time="2026-01-13T23:37:50.534060069Z" level=info msg="StartContainer for \"39bdfc5ff61133a5d942d088039b99a5b0ecb6a60deb1bd2bd5a1985e8afc287\" returns successfully" Jan 13 23:37:50.535483 containerd[2052]: time="2026-01-13T23:37:50.534907346Z" level=info msg="CreateContainer within sandbox \"49de0271217085ec3f10bd0359103811b084d99173fa5fd8eae4ce7e32e6f8c3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9ad3fabe97e179df038856a42b4dd0f37aa99db79f8d024b51ae1e6c5196f76d\"" Jan 13 23:37:50.536116 containerd[2052]: time="2026-01-13T23:37:50.536088923Z" level=info msg="StartContainer for \"9ad3fabe97e179df038856a42b4dd0f37aa99db79f8d024b51ae1e6c5196f76d\"" Jan 13 23:37:50.538596 containerd[2052]: time="2026-01-13T23:37:50.538545055Z" level=info msg="connecting to shim 9ad3fabe97e179df038856a42b4dd0f37aa99db79f8d024b51ae1e6c5196f76d" address="unix:///run/containerd/s/b83f3025e48f10711ab88b299d26e1812957869b74723ebfdc57c2badba69394" protocol=ttrpc version=3 Jan 13 23:37:50.565622 systemd[1]: Started cri-containerd-9ad3fabe97e179df038856a42b4dd0f37aa99db79f8d024b51ae1e6c5196f76d.scope - libcontainer container 9ad3fabe97e179df038856a42b4dd0f37aa99db79f8d024b51ae1e6c5196f76d. 
Jan 13 23:37:50.593743 containerd[2052]: time="2026-01-13T23:37:50.593645817Z" level=info msg="StartContainer for \"1c355d48cc104fb453bd9a7737073db4c8d1f243d406bc204d7309fc1eac2625\" returns successfully" Jan 13 23:37:50.602256 kubelet[3154]: I0113 23:37:50.602214 3154 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.633452 containerd[2052]: time="2026-01-13T23:37:50.633420702Z" level=info msg="StartContainer for \"9ad3fabe97e179df038856a42b4dd0f37aa99db79f8d024b51ae1e6c5196f76d\" returns successfully" Jan 13 23:37:50.854813 kubelet[3154]: E0113 23:37:50.854713 3154 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c34b1ae5c8\" not found" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.861208 kubelet[3154]: E0113 23:37:50.860997 3154 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c34b1ae5c8\" not found" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:50.863992 kubelet[3154]: E0113 23:37:50.863969 3154 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c34b1ae5c8\" not found" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:51.865881 kubelet[3154]: E0113 23:37:51.865846 3154 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c34b1ae5c8\" not found" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:51.866188 kubelet[3154]: E0113 23:37:51.866118 3154 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c34b1ae5c8\" not found" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:51.985942 kubelet[3154]: E0113 23:37:51.985903 3154 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4578.0.0-p-c34b1ae5c8\" not found" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:52.094920 kubelet[3154]: I0113 23:37:52.094880 3154 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:52.121316 kubelet[3154]: I0113 23:37:52.120445 3154 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:52.141844 kubelet[3154]: E0113 23:37:52.141771 3154 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:52.142326 kubelet[3154]: I0113 23:37:52.141805 3154 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:52.144874 kubelet[3154]: E0113 23:37:52.144830 3154 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578.0.0-p-c34b1ae5c8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:52.144874 kubelet[3154]: I0113 23:37:52.144850 3154 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:52.146200 kubelet[3154]: E0113 23:37:52.146177 3154 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-c34b1ae5c8\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:52.809256 kubelet[3154]: I0113 23:37:52.809135 3154 apiserver.go:52] "Watching apiserver" Jan 13 23:37:52.822312 kubelet[3154]: I0113 23:37:52.822270 3154 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 13 23:37:54.307410 systemd[1]: Reload requested from client PID 3420 ('systemctl') (unit session-8.scope)... Jan 13 23:37:54.307424 systemd[1]: Reloading... Jan 13 23:37:54.396437 zram_generator::config[3470]: No configuration found. Jan 13 23:37:54.531366 kubelet[3154]: I0113 23:37:54.531237 3154 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:54.541429 kubelet[3154]: W0113 23:37:54.541325 3154 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 23:37:54.566613 systemd[1]: Reloading finished in 258 ms. Jan 13 23:37:54.589157 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:37:54.603163 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 23:37:54.603851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:37:54.603996 systemd[1]: kubelet.service: Consumed 777ms CPU time, 126.4M memory peak. Jan 13 23:37:54.606182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:37:54.781749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:37:54.791583 (kubelet)[3534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 23:37:54.897078 kubelet[3534]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:37:54.897078 kubelet[3534]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 13 23:37:54.897078 kubelet[3534]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:37:54.897413 kubelet[3534]: I0113 23:37:54.897071 3534 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 23:37:54.903424 kubelet[3534]: I0113 23:37:54.903392 3534 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 13 23:37:54.903424 kubelet[3534]: I0113 23:37:54.903417 3534 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 23:37:54.906811 kubelet[3534]: I0113 23:37:54.906122 3534 server.go:954] "Client rotation is on, will bootstrap in background" Jan 13 23:37:54.907021 kubelet[3534]: I0113 23:37:54.907002 3534 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 13 23:37:54.909356 kubelet[3534]: I0113 23:37:54.909230 3534 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 23:37:54.912403 kubelet[3534]: I0113 23:37:54.912386 3534 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 13 23:37:54.915021 kubelet[3534]: I0113 23:37:54.914902 3534 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 23:37:54.915092 kubelet[3534]: I0113 23:37:54.915049 3534 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 23:37:54.915183 kubelet[3534]: I0113 23:37:54.915068 3534 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578.0.0-p-c34b1ae5c8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 23:37:54.915267 kubelet[3534]: I0113 23:37:54.915184 3534 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 23:37:54.915267 kubelet[3534]: I0113 23:37:54.915191 3534 container_manager_linux.go:304] "Creating device plugin manager" Jan 13 23:37:54.915267 kubelet[3534]: I0113 23:37:54.915227 3534 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:37:54.915695 kubelet[3534]: I0113 23:37:54.915316 3534 kubelet.go:446] "Attempting to sync node with API server" Jan 13 23:37:54.915695 kubelet[3534]: I0113 23:37:54.915324 3534 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 23:37:54.915695 kubelet[3534]: I0113 23:37:54.915356 3534 kubelet.go:352] "Adding apiserver pod source" Jan 13 23:37:54.915695 kubelet[3534]: I0113 23:37:54.915365 3534 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 23:37:54.916054 kubelet[3534]: I0113 23:37:54.916040 3534 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 13 23:37:54.916410 kubelet[3534]: I0113 23:37:54.916393 3534 kubelet.go:890] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 23:37:54.916779 kubelet[3534]: I0113 23:37:54.916765 3534 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 13 23:37:54.916864 kubelet[3534]: I0113 23:37:54.916856 3534 server.go:1287] "Started kubelet" Jan 13 23:37:54.923669 kubelet[3534]: I0113 23:37:54.923316 3534 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 23:37:54.924489 kubelet[3534]: I0113 23:37:54.924476 3534 server.go:479] "Adding debug handlers to kubelet server" Jan 13 23:37:54.925407 kubelet[3534]: I0113 23:37:54.925371 3534 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 23:37:54.925746 kubelet[3534]: I0113 23:37:54.925714 3534 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 23:37:54.926873 kubelet[3534]: I0113 23:37:54.926858 3534 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 23:37:54.936594 kubelet[3534]: E0113 23:37:54.936569 3534 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 23:37:54.939856 kubelet[3534]: I0113 23:37:54.939822 3534 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 23:37:54.940010 kubelet[3534]: I0113 23:37:54.939997 3534 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 13 23:37:54.942818 kubelet[3534]: I0113 23:37:54.942799 3534 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 13 23:37:54.943003 kubelet[3534]: I0113 23:37:54.942992 3534 reconciler.go:26] "Reconciler: start to sync state" Jan 13 23:37:54.944401 kubelet[3534]: I0113 23:37:54.944328 3534 factory.go:221] Registration of the systemd container factory successfully Jan 13 23:37:54.945937 kubelet[3534]: I0113 23:37:54.944526 3534 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 23:37:54.945937 kubelet[3534]: I0113 23:37:54.945846 3534 factory.go:221] Registration of the containerd container factory successfully Jan 13 23:37:54.947078 kubelet[3534]: I0113 23:37:54.946276 3534 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 23:37:54.949213 kubelet[3534]: I0113 23:37:54.949195 3534 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 23:37:54.949310 kubelet[3534]: I0113 23:37:54.949301 3534 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 13 23:37:54.949461 kubelet[3534]: I0113 23:37:54.949447 3534 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 13 23:37:54.949519 kubelet[3534]: I0113 23:37:54.949511 3534 kubelet.go:2382] "Starting kubelet main sync loop" Jan 13 23:37:54.949602 kubelet[3534]: E0113 23:37:54.949582 3534 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 23:37:54.988895 kubelet[3534]: I0113 23:37:54.988862 3534 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 13 23:37:54.988895 kubelet[3534]: I0113 23:37:54.988880 3534 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 13 23:37:54.988895 kubelet[3534]: I0113 23:37:54.988904 3534 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:37:54.989069 kubelet[3534]: I0113 23:37:54.989061 3534 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 23:37:54.989086 kubelet[3534]: I0113 23:37:54.989071 3534 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 23:37:54.989086 kubelet[3534]: I0113 23:37:54.989086 3534 policy_none.go:49] "None policy: Start" Jan 13 23:37:54.989115 kubelet[3534]: I0113 23:37:54.989094 3534 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 13 23:37:54.989115 kubelet[3534]: I0113 23:37:54.989102 3534 state_mem.go:35] "Initializing new in-memory state store" Jan 13 23:37:54.989187 kubelet[3534]: I0113 23:37:54.989167 3534 state_mem.go:75] "Updated machine memory state" Jan 13 23:37:54.992658 kubelet[3534]: I0113 23:37:54.992630 3534 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 23:37:54.993531 kubelet[3534]: I0113 23:37:54.993479 3534 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 23:37:54.993734 kubelet[3534]: I0113 23:37:54.993691 3534 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 23:37:54.994428 kubelet[3534]: I0113 23:37:54.994139 3534 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 23:37:54.997785 kubelet[3534]: E0113 23:37:54.997700 3534 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 13 23:37:55.050829 kubelet[3534]: I0113 23:37:55.050785 3534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.051090 kubelet[3534]: I0113 23:37:55.050800 3534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.051192 kubelet[3534]: I0113 23:37:55.051178 3534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.058599 kubelet[3534]: W0113 23:37:55.058442 3534 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 23:37:55.064351 kubelet[3534]: W0113 23:37:55.064285 3534 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 23:37:55.064935 kubelet[3534]: W0113 23:37:55.064880 3534 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 23:37:55.065027 kubelet[3534]: E0113 23:37:55.064921 3534 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-c34b1ae5c8\" already exists" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.102758 kubelet[3534]: I0113 23:37:55.102461 3534 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.115590 kubelet[3534]: I0113 23:37:55.115453 3534 kubelet_node_status.go:124] "Node was previously registered" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.115590 kubelet[3534]: I0113 23:37:55.115573 3534 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.144144 kubelet[3534]: I0113 23:37:55.144079 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1283b8f133803532354dc186d9fad36d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"1283b8f133803532354dc186d9fad36d\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.144144 kubelet[3534]: I0113 23:37:55.144113 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0f8af149cb067c3a267de5fe8383112-ca-certs\") pod \"kube-apiserver-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"c0f8af149cb067c3a267de5fe8383112\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.144144 kubelet[3534]: I0113 23:37:55.144126 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0f8af149cb067c3a267de5fe8383112-k8s-certs\") pod \"kube-apiserver-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"c0f8af149cb067c3a267de5fe8383112\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.144476 kubelet[3534]: I0113 23:37:55.144392 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0f8af149cb067c3a267de5fe8383112-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"c0f8af149cb067c3a267de5fe8383112\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.144476 kubelet[3534]: I0113 23:37:55.144437 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1283b8f133803532354dc186d9fad36d-ca-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"1283b8f133803532354dc186d9fad36d\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.144476 kubelet[3534]: I0113 23:37:55.144448 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1283b8f133803532354dc186d9fad36d-k8s-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"1283b8f133803532354dc186d9fad36d\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.144667 kubelet[3534]: I0113 23:37:55.144561 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1283b8f133803532354dc186d9fad36d-kubeconfig\") pod \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"1283b8f133803532354dc186d9fad36d\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.144667 kubelet[3534]: I0113 23:37:55.144587 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a8b1807fa0e27773748f2d3121565d5-kubeconfig\") pod \"kube-scheduler-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"3a8b1807fa0e27773748f2d3121565d5\") " pod="kube-system/kube-scheduler-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.144667 kubelet[3534]: I0113 23:37:55.144599 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1283b8f133803532354dc186d9fad36d-flexvolume-dir\") pod \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" (UID: \"1283b8f133803532354dc186d9fad36d\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.924082 kubelet[3534]: I0113 23:37:55.923860 3534 apiserver.go:52] "Watching apiserver" Jan 13 23:37:55.943796 kubelet[3534]: I0113 23:37:55.943744 3534 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 13 23:37:55.974590 kubelet[3534]: I0113 23:37:55.974238 3534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.975038 kubelet[3534]: I0113 23:37:55.974926 3534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.975241 kubelet[3534]: I0113 23:37:55.975050 3534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:55.998550 kubelet[3534]: W0113 23:37:55.998348 3534 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 23:37:55.998550 kubelet[3534]: E0113 23:37:55.998407 3534 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-c34b1ae5c8\" already exists" 
pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:56.000127 kubelet[3534]: I0113 23:37:55.999984 3534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" podStartSLOduration=0.999973505 podStartE2EDuration="999.973505ms" podCreationTimestamp="2026-01-13 23:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:37:55.998435468 +0000 UTC m=+1.204574595" watchObservedRunningTime="2026-01-13 23:37:55.999973505 +0000 UTC m=+1.206112640" Jan 13 23:37:56.009593 kubelet[3534]: W0113 23:37:56.009495 3534 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 23:37:56.009593 kubelet[3534]: E0113 23:37:56.009543 3534 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8\" already exists" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:56.010038 kubelet[3534]: W0113 23:37:56.009494 3534 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 23:37:56.010038 kubelet[3534]: E0113 23:37:56.009701 3534 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578.0.0-p-c34b1ae5c8\" already exists" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c34b1ae5c8" Jan 13 23:37:56.019810 kubelet[3534]: I0113 23:37:56.019758 3534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c34b1ae5c8" podStartSLOduration=2.019745376 podStartE2EDuration="2.019745376s" podCreationTimestamp="2026-01-13 23:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:37:56.019740783 +0000 UTC m=+1.225879998" watchObservedRunningTime="2026-01-13 23:37:56.019745376 +0000 UTC m=+1.225884503" Jan 13 23:37:56.039277 kubelet[3534]: I0113 23:37:56.038849 3534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c34b1ae5c8" podStartSLOduration=1.038837103 podStartE2EDuration="1.038837103s" podCreationTimestamp="2026-01-13 23:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:37:56.038635536 +0000 UTC m=+1.244774671" watchObservedRunningTime="2026-01-13 23:37:56.038837103 +0000 UTC m=+1.244976230" Jan 13 23:37:56.106850 sudo[2508]: pam_unix(sudo:session): session closed for user root Jan 13 23:37:56.177872 sshd[2507]: Connection closed by 10.200.16.10 port 58928 Jan 13 23:37:56.177697 sshd-session[2503]: pam_unix(sshd:session): session closed for user core Jan 13 23:37:56.182422 systemd[1]: sshd@4-10.200.20.17:22-10.200.16.10:58928.service: Deactivated successfully. Jan 13 23:37:56.184840 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 23:37:56.185084 systemd[1]: session-8.scope: Consumed 2.732s CPU time, 218.4M memory peak. Jan 13 23:37:56.187270 systemd-logind[2030]: Session 8 logged out. Waiting for processes to exit. Jan 13 23:37:56.188313 systemd-logind[2030]: Removed session 8. 
Jan 13 23:38:00.265994 kubelet[3534]: I0113 23:38:00.265912 3534 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 23:38:00.266956 containerd[2052]: time="2026-01-13T23:38:00.266550641Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 23:38:00.267175 kubelet[3534]: I0113 23:38:00.266758 3534 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 23:38:00.875168 systemd[1]: Created slice kubepods-besteffort-pod3796fa5b_0a8c_4b57_bdbd_8255e6d83c68.slice - libcontainer container kubepods-besteffort-pod3796fa5b_0a8c_4b57_bdbd_8255e6d83c68.slice. Jan 13 23:38:00.880385 kubelet[3534]: I0113 23:38:00.878878 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3796fa5b-0a8c-4b57-bdbd-8255e6d83c68-xtables-lock\") pod \"kube-proxy-x9h87\" (UID: \"3796fa5b-0a8c-4b57-bdbd-8255e6d83c68\") " pod="kube-system/kube-proxy-x9h87" Jan 13 23:38:00.880385 kubelet[3534]: I0113 23:38:00.878909 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3796fa5b-0a8c-4b57-bdbd-8255e6d83c68-lib-modules\") pod \"kube-proxy-x9h87\" (UID: \"3796fa5b-0a8c-4b57-bdbd-8255e6d83c68\") " pod="kube-system/kube-proxy-x9h87" Jan 13 23:38:00.880385 kubelet[3534]: I0113 23:38:00.878924 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3796fa5b-0a8c-4b57-bdbd-8255e6d83c68-kube-proxy\") pod \"kube-proxy-x9h87\" (UID: \"3796fa5b-0a8c-4b57-bdbd-8255e6d83c68\") " pod="kube-system/kube-proxy-x9h87" Jan 13 23:38:00.880385 kubelet[3534]: I0113 23:38:00.878937 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx5v6\" (UniqueName: \"kubernetes.io/projected/3796fa5b-0a8c-4b57-bdbd-8255e6d83c68-kube-api-access-zx5v6\") pod \"kube-proxy-x9h87\" (UID: \"3796fa5b-0a8c-4b57-bdbd-8255e6d83c68\") " pod="kube-system/kube-proxy-x9h87" Jan 13 23:38:00.893031 systemd[1]: Created slice kubepods-burstable-pod3fbf1817_ee29_4815_a456_25e552a12fee.slice - libcontainer container kubepods-burstable-pod3fbf1817_ee29_4815_a456_25e552a12fee.slice. 
Jan 13 23:38:00.980461 kubelet[3534]: I0113 23:38:00.979980 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fbf1817-ee29-4815-a456-25e552a12fee-xtables-lock\") pod \"kube-flannel-ds-2xrcq\" (UID: \"3fbf1817-ee29-4815-a456-25e552a12fee\") " pod="kube-flannel/kube-flannel-ds-2xrcq" Jan 13 23:38:00.980611 kubelet[3534]: I0113 23:38:00.980491 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/3fbf1817-ee29-4815-a456-25e552a12fee-cni-plugin\") pod \"kube-flannel-ds-2xrcq\" (UID: \"3fbf1817-ee29-4815-a456-25e552a12fee\") " pod="kube-flannel/kube-flannel-ds-2xrcq" Jan 13 23:38:00.982390 kubelet[3534]: I0113 23:38:00.982363 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/3fbf1817-ee29-4815-a456-25e552a12fee-cni\") pod \"kube-flannel-ds-2xrcq\" (UID: \"3fbf1817-ee29-4815-a456-25e552a12fee\") " pod="kube-flannel/kube-flannel-ds-2xrcq" Jan 13 23:38:00.982659 kubelet[3534]: I0113 23:38:00.982407 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3fbf1817-ee29-4815-a456-25e552a12fee-run\") pod \"kube-flannel-ds-2xrcq\" (UID: \"3fbf1817-ee29-4815-a456-25e552a12fee\") " pod="kube-flannel/kube-flannel-ds-2xrcq" Jan 13 23:38:00.982659 kubelet[3534]: I0113 23:38:00.982563 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j48br\" (UniqueName: \"kubernetes.io/projected/3fbf1817-ee29-4815-a456-25e552a12fee-kube-api-access-j48br\") pod \"kube-flannel-ds-2xrcq\" (UID: \"3fbf1817-ee29-4815-a456-25e552a12fee\") " pod="kube-flannel/kube-flannel-ds-2xrcq" Jan 13 23:38:00.982659 kubelet[3534]: I0113 23:38:00.982631 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/3fbf1817-ee29-4815-a456-25e552a12fee-flannel-cfg\") pod \"kube-flannel-ds-2xrcq\" (UID: \"3fbf1817-ee29-4815-a456-25e552a12fee\") " pod="kube-flannel/kube-flannel-ds-2xrcq" Jan 13 23:38:00.984753 kubelet[3534]: E0113 23:38:00.984726 3534 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 23:38:00.984753 kubelet[3534]: E0113 23:38:00.984751 3534 projected.go:194] Error preparing data for projected volume kube-api-access-zx5v6 for pod kube-system/kube-proxy-x9h87: configmap "kube-root-ca.crt" not found Jan 13 23:38:00.984842 kubelet[3534]: E0113 23:38:00.984797 3534 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3796fa5b-0a8c-4b57-bdbd-8255e6d83c68-kube-api-access-zx5v6 podName:3796fa5b-0a8c-4b57-bdbd-8255e6d83c68 nodeName:}" failed. No retries permitted until 2026-01-13 23:38:01.484780812 +0000 UTC m=+6.690919939 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zx5v6" (UniqueName: "kubernetes.io/projected/3796fa5b-0a8c-4b57-bdbd-8255e6d83c68-kube-api-access-zx5v6") pod "kube-proxy-x9h87" (UID: "3796fa5b-0a8c-4b57-bdbd-8255e6d83c68") : configmap "kube-root-ca.crt" not found Jan 13 23:38:01.198302 containerd[2052]: time="2026-01-13T23:38:01.198249031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2xrcq,Uid:3fbf1817-ee29-4815-a456-25e552a12fee,Namespace:kube-flannel,Attempt:0,}" Jan 13 23:38:01.240611 containerd[2052]: time="2026-01-13T23:38:01.240447687Z" level=info msg="connecting to shim f8344375e1f05308e8d0829a715c769218d270e9ddbcb24a603ebcce55cdb975" address="unix:///run/containerd/s/52552f970a4becfb1aed8a2eaffa6c6004af8a8092d293de51d94f19a1d4635b" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:38:01.262529 systemd[1]: Started cri-containerd-f8344375e1f05308e8d0829a715c769218d270e9ddbcb24a603ebcce55cdb975.scope - libcontainer container f8344375e1f05308e8d0829a715c769218d270e9ddbcb24a603ebcce55cdb975. Jan 13 23:38:01.296732 containerd[2052]: time="2026-01-13T23:38:01.296634881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2xrcq,Uid:3fbf1817-ee29-4815-a456-25e552a12fee,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f8344375e1f05308e8d0829a715c769218d270e9ddbcb24a603ebcce55cdb975\"" Jan 13 23:38:01.300300 containerd[2052]: time="2026-01-13T23:38:01.300262308Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 13 23:38:01.783694 containerd[2052]: time="2026-01-13T23:38:01.783655959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x9h87,Uid:3796fa5b-0a8c-4b57-bdbd-8255e6d83c68,Namespace:kube-system,Attempt:0,}" Jan 13 23:38:01.831752 containerd[2052]: time="2026-01-13T23:38:01.831687164Z" level=info msg="connecting to shim 6dc283c71081fc5ad6f5244b130896a6d7744386616c7065e4a10f6e059e9799" address="unix:///run/containerd/s/98ccb7bae8c6c9df1853547f06b04fd14d094746500c1fa98d5a8c44b7367e41" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:38:01.851534 systemd[1]: Started cri-containerd-6dc283c71081fc5ad6f5244b130896a6d7744386616c7065e4a10f6e059e9799.scope - libcontainer container 6dc283c71081fc5ad6f5244b130896a6d7744386616c7065e4a10f6e059e9799. 
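The kube-api-access-zx5v6 mount a few entries back fails because the kube-root-ca.crt ConfigMap does not exist yet, and the kubelet defers the next attempt ("No retries permitted until ... durationBeforeRetry 500ms"). A minimal sketch of that kind of doubling backoff; the 500 ms start matches the logged value, but the cap is an assumption for illustration, not the kubelet's actual internal constant:

```go
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous retry delay up to a cap, the pattern the
// kubelet's pending volume operations follow after a failed mount.
func nextDelay(prev, maxDelay time.Duration) time.Duration {
	if prev == 0 {
		return 500 * time.Millisecond // assumed initial delay, matching the logged 500ms
	}
	d := prev * 2
	if d > maxDelay {
		return maxDelay
	}
	return d
}

func main() {
	var d time.Duration
	for i := 0; i < 6; i++ {
		d = nextDelay(d, 2*time.Minute) // cap chosen for illustration only
		fmt.Printf("retry %d after %v\n", i+1, d)
	}
}
```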
Jan 13 23:38:01.873769 containerd[2052]: time="2026-01-13T23:38:01.873702054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x9h87,Uid:3796fa5b-0a8c-4b57-bdbd-8255e6d83c68,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dc283c71081fc5ad6f5244b130896a6d7744386616c7065e4a10f6e059e9799\"" Jan 13 23:38:01.876870 containerd[2052]: time="2026-01-13T23:38:01.876833400Z" level=info msg="CreateContainer within sandbox \"6dc283c71081fc5ad6f5244b130896a6d7744386616c7065e4a10f6e059e9799\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 23:38:01.899054 containerd[2052]: time="2026-01-13T23:38:01.898355146Z" level=info msg="Container 7b38816218c7024f0bfa0c5ff9cde3a274e350dbb2b740bccf5bfe315534b66c: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:38:01.927621 containerd[2052]: time="2026-01-13T23:38:01.927573634Z" level=info msg="CreateContainer within sandbox \"6dc283c71081fc5ad6f5244b130896a6d7744386616c7065e4a10f6e059e9799\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7b38816218c7024f0bfa0c5ff9cde3a274e350dbb2b740bccf5bfe315534b66c\"" Jan 13 23:38:01.928587 containerd[2052]: time="2026-01-13T23:38:01.928544139Z" level=info msg="StartContainer for \"7b38816218c7024f0bfa0c5ff9cde3a274e350dbb2b740bccf5bfe315534b66c\"" Jan 13 23:38:01.929960 containerd[2052]: time="2026-01-13T23:38:01.929931650Z" level=info msg="connecting to shim 7b38816218c7024f0bfa0c5ff9cde3a274e350dbb2b740bccf5bfe315534b66c" address="unix:///run/containerd/s/98ccb7bae8c6c9df1853547f06b04fd14d094746500c1fa98d5a8c44b7367e41" protocol=ttrpc version=3 Jan 13 23:38:01.954550 systemd[1]: Started cri-containerd-7b38816218c7024f0bfa0c5ff9cde3a274e350dbb2b740bccf5bfe315534b66c.scope - libcontainer container 7b38816218c7024f0bfa0c5ff9cde3a274e350dbb2b740bccf5bfe315534b66c. Jan 13 23:38:02.009894 containerd[2052]: time="2026-01-13T23:38:02.009854074Z" level=info msg="StartContainer for \"7b38816218c7024f0bfa0c5ff9cde3a274e350dbb2b740bccf5bfe315534b66c\" returns successfully" Jan 13 23:38:03.194534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2283424547.mount: Deactivated successfully. 
Jan 13 23:38:03.264354 containerd[2052]: time="2026-01-13T23:38:03.264094599Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:38:03.266875 containerd[2052]: time="2026-01-13T23:38:03.266838108Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=0" Jan 13 23:38:03.270700 containerd[2052]: time="2026-01-13T23:38:03.270649653Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:38:03.275070 containerd[2052]: time="2026-01-13T23:38:03.274648405Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:38:03.275070 containerd[2052]: time="2026-01-13T23:38:03.274990952Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.974690891s" Jan 13 23:38:03.275070 containerd[2052]: time="2026-01-13T23:38:03.275014089Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jan 13 23:38:03.277485 containerd[2052]: time="2026-01-13T23:38:03.277360265Z" level=info msg="CreateContainer within sandbox \"f8344375e1f05308e8d0829a715c769218d270e9ddbcb24a603ebcce55cdb975\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 13 23:38:03.299837 containerd[2052]: time="2026-01-13T23:38:03.299789426Z" level=info msg="Container 01868a9c260fbe8b30b25fd7ab54d80f912dc0d5307fb15c11b09d6384c0526d: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:38:03.313061 containerd[2052]: time="2026-01-13T23:38:03.313017867Z" level=info msg="CreateContainer within sandbox \"f8344375e1f05308e8d0829a715c769218d270e9ddbcb24a603ebcce55cdb975\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"01868a9c260fbe8b30b25fd7ab54d80f912dc0d5307fb15c11b09d6384c0526d\"" Jan 13 23:38:03.313873 containerd[2052]: time="2026-01-13T23:38:03.313847095Z" level=info msg="StartContainer for \"01868a9c260fbe8b30b25fd7ab54d80f912dc0d5307fb15c11b09d6384c0526d\"" Jan 13 23:38:03.314756 containerd[2052]: time="2026-01-13T23:38:03.314510077Z" level=info msg="connecting to shim 01868a9c260fbe8b30b25fd7ab54d80f912dc0d5307fb15c11b09d6384c0526d" address="unix:///run/containerd/s/52552f970a4becfb1aed8a2eaffa6c6004af8a8092d293de51d94f19a1d4635b" protocol=ttrpc version=3 Jan 13 23:38:03.333506 systemd[1]: Started cri-containerd-01868a9c260fbe8b30b25fd7ab54d80f912dc0d5307fb15c11b09d6384c0526d.scope - libcontainer container 01868a9c260fbe8b30b25fd7ab54d80f912dc0d5307fb15c11b09d6384c0526d. Jan 13 23:38:03.357229 systemd[1]: cri-containerd-01868a9c260fbe8b30b25fd7ab54d80f912dc0d5307fb15c11b09d6384c0526d.scope: Deactivated successfully. 
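For a rough sense of scale, the flannel-cni-plugin pull above reports 3662650 bytes fetched in 1.974690891s. A one-liner turning those logged figures into throughput, taking the logged size at face value:

```go
package main

import "fmt"

func main() {
	const bytes = 3662650       // size reported in the PullImage log entry above
	const seconds = 1.974690891 // duration reported in the same entry
	fmt.Printf("%.2f MiB/s\n", bytes/seconds/(1<<20))
}
```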
Jan 13 23:38:03.360285 containerd[2052]: time="2026-01-13T23:38:03.360235429Z" level=info msg="received container exit event container_id:\"01868a9c260fbe8b30b25fd7ab54d80f912dc0d5307fb15c11b09d6384c0526d\" id:\"01868a9c260fbe8b30b25fd7ab54d80f912dc0d5307fb15c11b09d6384c0526d\" pid:3869 exited_at:{seconds:1768347483 nanos:357114123}" Jan 13 23:38:03.367440 containerd[2052]: time="2026-01-13T23:38:03.367406952Z" level=info msg="StartContainer for \"01868a9c260fbe8b30b25fd7ab54d80f912dc0d5307fb15c11b09d6384c0526d\" returns successfully" Jan 13 23:38:04.002268 containerd[2052]: time="2026-01-13T23:38:04.002216524Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 13 23:38:04.018356 kubelet[3534]: I0113 23:38:04.018128 3534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x9h87" podStartSLOduration=4.018111303 podStartE2EDuration="4.018111303s" podCreationTimestamp="2026-01-13 23:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:38:03.013078641 +0000 UTC m=+8.219217816" watchObservedRunningTime="2026-01-13 23:38:04.018111303 +0000 UTC m=+9.224250430" Jan 13 23:38:06.256411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount829151960.mount: Deactivated successfully. Jan 13 23:38:06.902105 containerd[2052]: time="2026-01-13T23:38:06.902062016Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:38:06.905241 containerd[2052]: time="2026-01-13T23:38:06.905195696Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=13999812" Jan 13 23:38:06.909047 containerd[2052]: time="2026-01-13T23:38:06.909003454Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:38:06.913447 containerd[2052]: time="2026-01-13T23:38:06.913401560Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:38:06.914262 containerd[2052]: time="2026-01-13T23:38:06.914026245Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.911505334s" Jan 13 23:38:06.914262 containerd[2052]: time="2026-01-13T23:38:06.914056550Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jan 13 23:38:06.917249 containerd[2052]: time="2026-01-13T23:38:06.916660012Z" level=info msg="CreateContainer within sandbox \"f8344375e1f05308e8d0829a715c769218d270e9ddbcb24a603ebcce55cdb975\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 23:38:06.935187 containerd[2052]: time="2026-01-13T23:38:06.935157626Z" level=info msg="Container 24f8ee69a475869d71ec42d6e44d5b96ff67fdd3a8569910c1aca735e44b0366: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:38:06.936092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2836963418.mount: 
Deactivated successfully. Jan 13 23:38:06.953493 containerd[2052]: time="2026-01-13T23:38:06.953285683Z" level=info msg="CreateContainer within sandbox \"f8344375e1f05308e8d0829a715c769218d270e9ddbcb24a603ebcce55cdb975\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"24f8ee69a475869d71ec42d6e44d5b96ff67fdd3a8569910c1aca735e44b0366\"" Jan 13 23:38:06.953967 containerd[2052]: time="2026-01-13T23:38:06.953951001Z" level=info msg="StartContainer for \"24f8ee69a475869d71ec42d6e44d5b96ff67fdd3a8569910c1aca735e44b0366\"" Jan 13 23:38:06.954783 containerd[2052]: time="2026-01-13T23:38:06.954762980Z" level=info msg="connecting to shim 24f8ee69a475869d71ec42d6e44d5b96ff67fdd3a8569910c1aca735e44b0366" address="unix:///run/containerd/s/52552f970a4becfb1aed8a2eaffa6c6004af8a8092d293de51d94f19a1d4635b" protocol=ttrpc version=3 Jan 13 23:38:06.975531 systemd[1]: Started cri-containerd-24f8ee69a475869d71ec42d6e44d5b96ff67fdd3a8569910c1aca735e44b0366.scope - libcontainer container 24f8ee69a475869d71ec42d6e44d5b96ff67fdd3a8569910c1aca735e44b0366. Jan 13 23:38:07.005464 systemd[1]: cri-containerd-24f8ee69a475869d71ec42d6e44d5b96ff67fdd3a8569910c1aca735e44b0366.scope: Deactivated successfully. Jan 13 23:38:07.011283 containerd[2052]: time="2026-01-13T23:38:07.011246687Z" level=info msg="received container exit event container_id:\"24f8ee69a475869d71ec42d6e44d5b96ff67fdd3a8569910c1aca735e44b0366\" id:\"24f8ee69a475869d71ec42d6e44d5b96ff67fdd3a8569910c1aca735e44b0366\" pid:3948 exited_at:{seconds:1768347487 nanos:6747145}" Jan 13 23:38:07.012753 containerd[2052]: time="2026-01-13T23:38:07.012722487Z" level=info msg="StartContainer for \"24f8ee69a475869d71ec42d6e44d5b96ff67fdd3a8569910c1aca735e44b0366\" returns successfully" Jan 13 23:38:07.066401 kubelet[3534]: I0113 23:38:07.066007 3534 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 13 23:38:07.107385 systemd[1]: Created slice kubepods-burstable-poda1b87d20_949f_4f7a_b970_8ac419f33500.slice - libcontainer container kubepods-burstable-poda1b87d20_949f_4f7a_b970_8ac419f33500.slice. 
Jan 13 23:38:07.115144 kubelet[3534]: I0113 23:38:07.113824 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1b87d20-949f-4f7a-b970-8ac419f33500-config-volume\") pod \"coredns-668d6bf9bc-krb8j\" (UID: \"a1b87d20-949f-4f7a-b970-8ac419f33500\") " pod="kube-system/coredns-668d6bf9bc-krb8j" Jan 13 23:38:07.115144 kubelet[3534]: I0113 23:38:07.113868 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w26d9\" (UniqueName: \"kubernetes.io/projected/a1b87d20-949f-4f7a-b970-8ac419f33500-kube-api-access-w26d9\") pod \"coredns-668d6bf9bc-krb8j\" (UID: \"a1b87d20-949f-4f7a-b970-8ac419f33500\") " pod="kube-system/coredns-668d6bf9bc-krb8j" Jan 13 23:38:07.115144 kubelet[3534]: I0113 23:38:07.113886 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a53f8946-5f7a-4b8c-8f67-f995b4c74286-config-volume\") pod \"coredns-668d6bf9bc-lpg8p\" (UID: \"a53f8946-5f7a-4b8c-8f67-f995b4c74286\") " pod="kube-system/coredns-668d6bf9bc-lpg8p" Jan 13 23:38:07.115144 kubelet[3534]: I0113 23:38:07.113905 3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkxhj\" (UniqueName: \"kubernetes.io/projected/a53f8946-5f7a-4b8c-8f67-f995b4c74286-kube-api-access-mkxhj\") pod \"coredns-668d6bf9bc-lpg8p\" (UID: \"a53f8946-5f7a-4b8c-8f67-f995b4c74286\") " pod="kube-system/coredns-668d6bf9bc-lpg8p" Jan 13 23:38:07.116009 systemd[1]: Created slice kubepods-burstable-poda53f8946_5f7a_4b8c_8f67_f995b4c74286.slice - libcontainer container kubepods-burstable-poda53f8946_5f7a_4b8c_8f67_f995b4c74286.slice. Jan 13 23:38:07.190268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24f8ee69a475869d71ec42d6e44d5b96ff67fdd3a8569910c1aca735e44b0366-rootfs.mount: Deactivated successfully. 
Jan 13 23:38:07.414127 containerd[2052]: time="2026-01-13T23:38:07.414078493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-krb8j,Uid:a1b87d20-949f-4f7a-b970-8ac419f33500,Namespace:kube-system,Attempt:0,}" Jan 13 23:38:07.419200 containerd[2052]: time="2026-01-13T23:38:07.419163742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lpg8p,Uid:a53f8946-5f7a-4b8c-8f67-f995b4c74286,Namespace:kube-system,Attempt:0,}" Jan 13 23:38:07.567880 containerd[2052]: time="2026-01-13T23:38:07.567575666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-krb8j,Uid:a1b87d20-949f-4f7a-b970-8ac419f33500,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6896ab842b5158fcddf90a4baef9468cfa4e5aa386b75964c25390d751d525b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 23:38:07.568126 kubelet[3534]: E0113 23:38:07.567821 3534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6896ab842b5158fcddf90a4baef9468cfa4e5aa386b75964c25390d751d525b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 23:38:07.568126 kubelet[3534]: E0113 23:38:07.567919 3534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6896ab842b5158fcddf90a4baef9468cfa4e5aa386b75964c25390d751d525b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-krb8j" Jan 13 23:38:07.568126 kubelet[3534]: E0113 23:38:07.567935 3534 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6896ab842b5158fcddf90a4baef9468cfa4e5aa386b75964c25390d751d525b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-krb8j" Jan 13 23:38:07.568126 kubelet[3534]: E0113 23:38:07.567973 3534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-krb8j_kube-system(a1b87d20-949f-4f7a-b970-8ac419f33500)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-krb8j_kube-system(a1b87d20-949f-4f7a-b970-8ac419f33500)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6896ab842b5158fcddf90a4baef9468cfa4e5aa386b75964c25390d751d525b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-krb8j" podUID="a1b87d20-949f-4f7a-b970-8ac419f33500" Jan 13 23:38:07.573402 containerd[2052]: time="2026-01-13T23:38:07.573359162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lpg8p,Uid:a53f8946-5f7a-4b8c-8f67-f995b4c74286,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa73ee12dce7f81dc751680325c93d80c40fdddb0e0dd73214d756c15f78eba3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 
23:38:07.573613 kubelet[3534]: E0113 23:38:07.573562 3534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa73ee12dce7f81dc751680325c93d80c40fdddb0e0dd73214d756c15f78eba3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 23:38:07.573645 kubelet[3534]: E0113 23:38:07.573626 3534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa73ee12dce7f81dc751680325c93d80c40fdddb0e0dd73214d756c15f78eba3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-lpg8p" Jan 13 23:38:07.573645 kubelet[3534]: E0113 23:38:07.573641 3534 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa73ee12dce7f81dc751680325c93d80c40fdddb0e0dd73214d756c15f78eba3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-lpg8p" Jan 13 23:38:07.573715 kubelet[3534]: E0113 23:38:07.573676 3534 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lpg8p_kube-system(a53f8946-5f7a-4b8c-8f67-f995b4c74286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lpg8p_kube-system(a53f8946-5f7a-4b8c-8f67-f995b4c74286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa73ee12dce7f81dc751680325c93d80c40fdddb0e0dd73214d756c15f78eba3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-lpg8p" podUID="a53f8946-5f7a-4b8c-8f67-f995b4c74286" Jan 13 23:38:08.021739 containerd[2052]: time="2026-01-13T23:38:08.021629788Z" level=info msg="CreateContainer within sandbox \"f8344375e1f05308e8d0829a715c769218d270e9ddbcb24a603ebcce55cdb975\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 13 23:38:08.041776 containerd[2052]: time="2026-01-13T23:38:08.041735760Z" level=info msg="Container ca9e8217a28a21748f7129c495f8de60d036bdd0a76ad3371e8539ddff328d06: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:38:08.054495 containerd[2052]: time="2026-01-13T23:38:08.054457254Z" level=info msg="CreateContainer within sandbox \"f8344375e1f05308e8d0829a715c769218d270e9ddbcb24a603ebcce55cdb975\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"ca9e8217a28a21748f7129c495f8de60d036bdd0a76ad3371e8539ddff328d06\"" Jan 13 23:38:08.055152 containerd[2052]: time="2026-01-13T23:38:08.055127372Z" level=info msg="StartContainer for \"ca9e8217a28a21748f7129c495f8de60d036bdd0a76ad3371e8539ddff328d06\"" Jan 13 23:38:08.056245 containerd[2052]: time="2026-01-13T23:38:08.056206200Z" level=info msg="connecting to shim ca9e8217a28a21748f7129c495f8de60d036bdd0a76ad3371e8539ddff328d06" address="unix:///run/containerd/s/52552f970a4becfb1aed8a2eaffa6c6004af8a8092d293de51d94f19a1d4635b" protocol=ttrpc version=3 Jan 13 23:38:08.072514 systemd[1]: Started cri-containerd-ca9e8217a28a21748f7129c495f8de60d036bdd0a76ad3371e8539ddff328d06.scope - libcontainer container ca9e8217a28a21748f7129c495f8de60d036bdd0a76ad3371e8539ddff328d06. 
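Both CoreDNS sandboxes fail here because the flannel CNI plugin reads /run/flannel/subnet.env, which the flannel daemon only writes once it is running (the flannel.1 link comes up a few entries later, after which the sandboxes succeed). A small Go sketch that parses that key=value file; the embedded contents are typical values inferred from the 192.168.0.0/24 pod CIDR in this log, not the node's actual file:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Typical shape of /run/flannel/subnet.env once flanneld has started
// (illustrative values, derived from the PodCIDR seen in the log).
const subnetEnv = `FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
`

// parseSubnetEnv reads KEY=VALUE lines into a map, skipping blanks.
func parseSubnetEnv(s string) map[string]string {
	env := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	return env
}

func main() {
	env := parseSubnetEnv(subnetEnv)
	fmt.Println("subnet:", env["FLANNEL_SUBNET"], "mtu:", env["FLANNEL_MTU"])
}
```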
Jan 13 23:38:08.100076 containerd[2052]: time="2026-01-13T23:38:08.100038942Z" level=info msg="StartContainer for \"ca9e8217a28a21748f7129c495f8de60d036bdd0a76ad3371e8539ddff328d06\" returns successfully" Jan 13 23:38:08.191140 systemd[1]: run-netns-cni\x2dbba6542c\x2d10f1\x2d1ede\x2da540\x2d1fcb1d4bb5a4.mount: Deactivated successfully. Jan 13 23:38:08.191209 systemd[1]: run-netns-cni\x2dc3ad6435\x2da56f\x2dbb28\x2db917\x2d5fc16108c64c.mount: Deactivated successfully. Jan 13 23:38:09.203855 systemd-networkd[1637]: flannel.1: Link UP Jan 13 23:38:09.203866 systemd-networkd[1637]: flannel.1: Gained carrier Jan 13 23:38:11.209523 systemd-networkd[1637]: flannel.1: Gained IPv6LL Jan 13 23:38:19.951366 containerd[2052]: time="2026-01-13T23:38:19.951245351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lpg8p,Uid:a53f8946-5f7a-4b8c-8f67-f995b4c74286,Namespace:kube-system,Attempt:0,}" Jan 13 23:38:19.979701 systemd-networkd[1637]: cni0: Link UP Jan 13 23:38:19.979706 systemd-networkd[1637]: cni0: Gained carrier Jan 13 23:38:19.984475 systemd-networkd[1637]: cni0: Lost carrier Jan 13 23:38:19.999832 systemd-networkd[1637]: veth4cb6acec: Link UP Jan 13 23:38:20.007497 kernel: cni0: port 1(veth4cb6acec) entered blocking state Jan 13 23:38:20.010421 kernel: cni0: port 1(veth4cb6acec) entered disabled state Jan 13 23:38:20.010452 kernel: veth4cb6acec: entered allmulticast mode Jan 13 23:38:20.015320 kernel: veth4cb6acec: entered promiscuous mode Jan 13 23:38:20.025375 kernel: cni0: port 1(veth4cb6acec) entered blocking state Jan 13 23:38:20.025479 kernel: cni0: port 1(veth4cb6acec) entered forwarding state Jan 13 23:38:20.025861 systemd-networkd[1637]: veth4cb6acec: Gained carrier Jan 13 23:38:20.026715 systemd-networkd[1637]: cni0: Gained carrier Jan 13 23:38:20.029161 containerd[2052]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Jan 13 23:38:20.029161 containerd[2052]: delegateAdd: netconf sent to delegate plugin: Jan 13 23:38:20.070806 containerd[2052]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-13T23:38:20.070763954Z" level=info msg="connecting to shim b879e9225be149b427642a5451d0402fcc866f89aa093cf93efa066456bf1ff0" address="unix:///run/containerd/s/6e6c2ab9dc982c15e85f7c4a831f529cb551b320cf0078b6d0fb42be53041321" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:38:20.092555 systemd[1]: Started cri-containerd-b879e9225be149b427642a5451d0402fcc866f89aa093cf93efa066456bf1ff0.scope - libcontainer container b879e9225be149b427642a5451d0402fcc866f89aa093cf93efa066456bf1ff0. 
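The bridge delegate configuration that flannel hands to the host-local IPAM plugin is logged verbatim in the cni0/veth4cb6acec entries above. A short Go sketch that unmarshals that JSON and confirms the node's /24 pod subnet is covered by the 192.168.0.0/17 route; the struct mirrors only the keys visible in the logged config:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

// Delegate config as logged by containerd (copied from the log entry above).
const conf = `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,
  "ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],
          "routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},
  "isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

type netConf struct {
	Name string `json:"name"`
	Type string `json:"type"`
	MTU  int    `json:"mtu"`
	IPAM struct {
		Ranges [][]struct {
			Subnet string `json:"subnet"`
		} `json:"ranges"`
		Routes []struct {
			Dst string `json:"dst"`
		} `json:"routes"`
	} `json:"ipam"`
}

func main() {
	var c netConf
	if err := json.Unmarshal([]byte(conf), &c); err != nil {
		panic(err)
	}
	subnet := c.IPAM.Ranges[0][0].Subnet
	_, routeNet, _ := net.ParseCIDR(c.IPAM.Routes[0].Dst)
	ip, _, _ := net.ParseCIDR(subnet)
	fmt.Printf("%s/%s mtu=%d subnet=%s inRoute=%v\n",
		c.Name, c.Type, c.MTU, subnet, routeNet.Contains(ip))
}
```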
Jan 13 23:38:20.123547 containerd[2052]: time="2026-01-13T23:38:20.123491240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lpg8p,Uid:a53f8946-5f7a-4b8c-8f67-f995b4c74286,Namespace:kube-system,Attempt:0,} returns sandbox id \"b879e9225be149b427642a5451d0402fcc866f89aa093cf93efa066456bf1ff0\"" Jan 13 23:38:20.126354 containerd[2052]: time="2026-01-13T23:38:20.126287790Z" level=info msg="CreateContainer within sandbox \"b879e9225be149b427642a5451d0402fcc866f89aa093cf93efa066456bf1ff0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 23:38:20.148295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165648847.mount: Deactivated successfully. Jan 13 23:38:20.150358 containerd[2052]: time="2026-01-13T23:38:20.150290242Z" level=info msg="Container 19d29eff06d40155feb946b1abb80c9e23d40f64f604b8801dae4f20d05fce80: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:38:20.163022 containerd[2052]: time="2026-01-13T23:38:20.162900312Z" level=info msg="CreateContainer within sandbox \"b879e9225be149b427642a5451d0402fcc866f89aa093cf93efa066456bf1ff0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19d29eff06d40155feb946b1abb80c9e23d40f64f604b8801dae4f20d05fce80\"" Jan 13 23:38:20.163895 containerd[2052]: time="2026-01-13T23:38:20.163684987Z" level=info msg="StartContainer for \"19d29eff06d40155feb946b1abb80c9e23d40f64f604b8801dae4f20d05fce80\"" Jan 13 23:38:20.164410 containerd[2052]: time="2026-01-13T23:38:20.164388322Z" level=info msg="connecting to shim 19d29eff06d40155feb946b1abb80c9e23d40f64f604b8801dae4f20d05fce80" address="unix:///run/containerd/s/6e6c2ab9dc982c15e85f7c4a831f529cb551b320cf0078b6d0fb42be53041321" protocol=ttrpc version=3 Jan 13 23:38:20.184552 systemd[1]: Started cri-containerd-19d29eff06d40155feb946b1abb80c9e23d40f64f604b8801dae4f20d05fce80.scope - libcontainer container 19d29eff06d40155feb946b1abb80c9e23d40f64f604b8801dae4f20d05fce80. 
Jan 13 23:38:20.214978 containerd[2052]: time="2026-01-13T23:38:20.214723680Z" level=info msg="StartContainer for \"19d29eff06d40155feb946b1abb80c9e23d40f64f604b8801dae4f20d05fce80\" returns successfully" Jan 13 23:38:21.055990 kubelet[3534]: I0113 23:38:21.055829 3534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-2xrcq" podStartSLOduration=15.440689492 podStartE2EDuration="21.055812289s" podCreationTimestamp="2026-01-13 23:38:00 +0000 UTC" firstStartedPulling="2026-01-13 23:38:01.2997739 +0000 UTC m=+6.505913027" lastFinishedPulling="2026-01-13 23:38:06.914896697 +0000 UTC m=+12.121035824" observedRunningTime="2026-01-13 23:38:09.041576599 +0000 UTC m=+14.247715726" watchObservedRunningTime="2026-01-13 23:38:21.055812289 +0000 UTC m=+26.261951424" Jan 13 23:38:21.058239 kubelet[3534]: I0113 23:38:21.057777 3534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lpg8p" podStartSLOduration=20.057764458 podStartE2EDuration="20.057764458s" podCreationTimestamp="2026-01-13 23:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:38:21.057584356 +0000 UTC m=+26.263723491" watchObservedRunningTime="2026-01-13 23:38:21.057764458 +0000 UTC m=+26.263903593" Jan 13 23:38:21.193559 systemd-networkd[1637]: cni0: Gained IPv6LL Jan 13 23:38:21.194119 systemd-networkd[1637]: veth4cb6acec: Gained IPv6LL Jan 13 23:38:22.952875 containerd[2052]: time="2026-01-13T23:38:22.952607826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-krb8j,Uid:a1b87d20-949f-4f7a-b970-8ac419f33500,Namespace:kube-system,Attempt:0,}" Jan 13 23:38:22.972412 systemd-networkd[1637]: veth4890a40d: Link UP Jan 13 23:38:22.980427 kernel: cni0: port 2(veth4890a40d) entered blocking state Jan 13 23:38:22.980518 kernel: cni0: port 2(veth4890a40d) entered disabled state Jan 13 23:38:22.984115 kernel: veth4890a40d: entered allmulticast mode Jan 13 23:38:22.988357 kernel: veth4890a40d: entered promiscuous mode Jan 13 23:38:22.999397 kernel: cni0: port 2(veth4890a40d) entered blocking state Jan 13 23:38:22.999878 kernel: cni0: port 2(veth4890a40d) entered forwarding state Jan 13 23:38:22.999519 systemd-networkd[1637]: veth4890a40d: Gained carrier Jan 13 23:38:23.001706 containerd[2052]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016928), "name":"cbr0", "type":"bridge"} Jan 13 23:38:23.001706 containerd[2052]: delegateAdd: netconf sent to delegate plugin: Jan 13 23:38:23.043753 containerd[2052]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-13T23:38:23.043707197Z" level=info msg="connecting to shim d37c3118534bbd47f53ae57ba7755311282464ced07e169653977515c70d5d0b" address="unix:///run/containerd/s/35c8210606cc93256c7a5897752063679916d8aef5a2379005d7b14db5d0060b" namespace=k8s.io protocol=ttrpc 
version=3 Jan 13 23:38:23.067526 systemd[1]: Started cri-containerd-d37c3118534bbd47f53ae57ba7755311282464ced07e169653977515c70d5d0b.scope - libcontainer container d37c3118534bbd47f53ae57ba7755311282464ced07e169653977515c70d5d0b. Jan 13 23:38:23.098531 containerd[2052]: time="2026-01-13T23:38:23.098485348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-krb8j,Uid:a1b87d20-949f-4f7a-b970-8ac419f33500,Namespace:kube-system,Attempt:0,} returns sandbox id \"d37c3118534bbd47f53ae57ba7755311282464ced07e169653977515c70d5d0b\"" Jan 13 23:38:23.101549 containerd[2052]: time="2026-01-13T23:38:23.101456216Z" level=info msg="CreateContainer within sandbox \"d37c3118534bbd47f53ae57ba7755311282464ced07e169653977515c70d5d0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 23:38:23.128915 containerd[2052]: time="2026-01-13T23:38:23.128877905Z" level=info msg="Container bcc3a4b79ce5c72ac0b1730d8a2892a75e6cd01ed642253bfcaac7bfa33053b9: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:38:23.147087 containerd[2052]: time="2026-01-13T23:38:23.146996441Z" level=info msg="CreateContainer within sandbox \"d37c3118534bbd47f53ae57ba7755311282464ced07e169653977515c70d5d0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bcc3a4b79ce5c72ac0b1730d8a2892a75e6cd01ed642253bfcaac7bfa33053b9\"" Jan 13 23:38:23.148564 containerd[2052]: time="2026-01-13T23:38:23.148518580Z" level=info msg="StartContainer for \"bcc3a4b79ce5c72ac0b1730d8a2892a75e6cd01ed642253bfcaac7bfa33053b9\"" Jan 13 23:38:23.149739 containerd[2052]: time="2026-01-13T23:38:23.149697076Z" level=info msg="connecting to shim bcc3a4b79ce5c72ac0b1730d8a2892a75e6cd01ed642253bfcaac7bfa33053b9" address="unix:///run/containerd/s/35c8210606cc93256c7a5897752063679916d8aef5a2379005d7b14db5d0060b" protocol=ttrpc version=3 Jan 13 23:38:23.167556 systemd[1]: Started cri-containerd-bcc3a4b79ce5c72ac0b1730d8a2892a75e6cd01ed642253bfcaac7bfa33053b9.scope - libcontainer container bcc3a4b79ce5c72ac0b1730d8a2892a75e6cd01ed642253bfcaac7bfa33053b9. Jan 13 23:38:23.196460 containerd[2052]: time="2026-01-13T23:38:23.196394756Z" level=info msg="StartContainer for \"bcc3a4b79ce5c72ac0b1730d8a2892a75e6cd01ed642253bfcaac7bfa33053b9\" returns successfully" Jan 13 23:38:24.079962 kubelet[3534]: I0113 23:38:24.078949 3534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-krb8j" podStartSLOduration=23.07893479 podStartE2EDuration="23.07893479s" podCreationTimestamp="2026-01-13 23:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:38:24.064200615 +0000 UTC m=+29.270339750" watchObservedRunningTime="2026-01-13 23:38:24.07893479 +0000 UTC m=+29.285073917" Jan 13 23:38:24.969529 systemd-networkd[1637]: veth4890a40d: Gained IPv6LL Jan 13 23:39:53.122051 systemd[1]: Started sshd@5-10.200.20.17:22-10.200.16.10:35184.service - OpenSSH per-connection server daemon (10.200.16.10:35184). Jan 13 23:39:53.540086 sshd[4811]: Accepted publickey for core from 10.200.16.10 port 35184 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:39:53.541424 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:39:53.545318 systemd-logind[2030]: New session 9 of user core. Jan 13 23:39:53.554531 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 13 23:39:53.822086 sshd[4815]: Connection closed by 10.200.16.10 port 35184 Jan 13 23:39:53.821761 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Jan 13 23:39:53.826703 systemd[1]: sshd@5-10.200.20.17:22-10.200.16.10:35184.service: Deactivated successfully. Jan 13 23:39:53.828938 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 23:39:53.830289 systemd-logind[2030]: Session 9 logged out. Waiting for processes to exit. Jan 13 23:39:53.831379 systemd-logind[2030]: Removed session 9. Jan 13 23:39:58.908164 systemd[1]: Started sshd@6-10.200.20.17:22-10.200.16.10:35188.service - OpenSSH per-connection server daemon (10.200.16.10:35188). Jan 13 23:39:59.298115 sshd[4851]: Accepted publickey for core from 10.200.16.10 port 35188 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:39:59.299747 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:39:59.303611 systemd-logind[2030]: New session 10 of user core. Jan 13 23:39:59.311683 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 23:39:59.556278 sshd[4855]: Connection closed by 10.200.16.10 port 35188 Jan 13 23:39:59.557216 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Jan 13 23:39:59.560579 systemd[1]: sshd@6-10.200.20.17:22-10.200.16.10:35188.service: Deactivated successfully. Jan 13 23:39:59.562038 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 23:39:59.563924 systemd-logind[2030]: Session 10 logged out. Waiting for processes to exit. Jan 13 23:39:59.566300 systemd-logind[2030]: Removed session 10. Jan 13 23:40:04.646704 systemd[1]: Started sshd@7-10.200.20.17:22-10.200.16.10:39772.service - OpenSSH per-connection server daemon (10.200.16.10:39772). Jan 13 23:40:05.071439 sshd[4911]: Accepted publickey for core from 10.200.16.10 port 39772 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:40:05.072739 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:40:05.077407 systemd-logind[2030]: New session 11 of user core. Jan 13 23:40:05.082539 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 23:40:05.346168 sshd[4915]: Connection closed by 10.200.16.10 port 39772 Jan 13 23:40:05.346835 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Jan 13 23:40:05.351109 systemd[1]: sshd@7-10.200.20.17:22-10.200.16.10:39772.service: Deactivated successfully. Jan 13 23:40:05.353064 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 23:40:05.354708 systemd-logind[2030]: Session 11 logged out. Waiting for processes to exit. Jan 13 23:40:05.355447 systemd-logind[2030]: Removed session 11. Jan 13 23:40:05.428670 systemd[1]: Started sshd@8-10.200.20.17:22-10.200.16.10:39784.service - OpenSSH per-connection server daemon (10.200.16.10:39784). Jan 13 23:40:05.832177 sshd[4928]: Accepted publickey for core from 10.200.16.10 port 39784 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:40:05.834097 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:40:05.838516 systemd-logind[2030]: New session 12 of user core. Jan 13 23:40:05.847624 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 13 23:40:06.123748 sshd[4932]: Connection closed by 10.200.16.10 port 39784 Jan 13 23:40:06.124544 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Jan 13 23:40:06.128306 systemd-logind[2030]: Session 12 logged out. Waiting for processes to exit. Jan 13 23:40:06.128651 systemd[1]: sshd@8-10.200.20.17:22-10.200.16.10:39784.service: Deactivated successfully. Jan 13 23:40:06.131707 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 23:40:06.133418 systemd-logind[2030]: Removed session 12. Jan 13 23:40:06.211560 systemd[1]: Started sshd@9-10.200.20.17:22-10.200.16.10:39794.service - OpenSSH per-connection server daemon (10.200.16.10:39794). Jan 13 23:40:06.608467 sshd[4942]: Accepted publickey for core from 10.200.16.10 port 39794 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:40:06.609693 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:40:06.613526 systemd-logind[2030]: New session 13 of user core. Jan 13 23:40:06.623492 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 23:40:06.863603 sshd[4946]: Connection closed by 10.200.16.10 port 39794 Jan 13 23:40:06.864276 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Jan 13 23:40:06.868704 systemd[1]: sshd@9-10.200.20.17:22-10.200.16.10:39794.service: Deactivated successfully. Jan 13 23:40:06.870713 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 23:40:06.871835 systemd-logind[2030]: Session 13 logged out. Waiting for processes to exit. Jan 13 23:40:06.873351 systemd-logind[2030]: Removed session 13. Jan 13 23:40:11.948199 systemd[1]: Started sshd@10-10.200.20.17:22-10.200.16.10:51076.service - OpenSSH per-connection server daemon (10.200.16.10:51076). Jan 13 23:40:12.339727 sshd[4979]: Accepted publickey for core from 10.200.16.10 port 51076 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:40:12.341062 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:40:12.345131 systemd-logind[2030]: New session 14 of user core. Jan 13 23:40:12.355518 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 23:40:12.597685 sshd[4983]: Connection closed by 10.200.16.10 port 51076 Jan 13 23:40:12.597372 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Jan 13 23:40:12.601706 systemd[1]: sshd@10-10.200.20.17:22-10.200.16.10:51076.service: Deactivated successfully. Jan 13 23:40:12.603476 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 23:40:12.604285 systemd-logind[2030]: Session 14 logged out. Waiting for processes to exit. Jan 13 23:40:12.605923 systemd-logind[2030]: Removed session 14. Jan 13 23:40:12.689455 systemd[1]: Started sshd@11-10.200.20.17:22-10.200.16.10:51082.service - OpenSSH per-connection server daemon (10.200.16.10:51082). Jan 13 23:40:13.112223 sshd[4995]: Accepted publickey for core from 10.200.16.10 port 51082 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:40:13.113174 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:40:13.117488 systemd-logind[2030]: New session 15 of user core. Jan 13 23:40:13.121492 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 13 23:40:13.439603 sshd[4999]: Connection closed by 10.200.16.10 port 51082 Jan 13 23:40:13.439019 sshd-session[4995]: pam_unix(sshd:session): session closed for user core Jan 13 23:40:13.442557 systemd[1]: sshd@11-10.200.20.17:22-10.200.16.10:51082.service: Deactivated successfully. Jan 13 23:40:13.444406 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 23:40:13.445925 systemd-logind[2030]: Session 15 logged out. Waiting for processes to exit. Jan 13 23:40:13.447510 systemd-logind[2030]: Removed session 15. Jan 13 23:40:13.520453 systemd[1]: Started sshd@12-10.200.20.17:22-10.200.16.10:51096.service - OpenSSH per-connection server daemon (10.200.16.10:51096). Jan 13 23:40:13.908061 sshd[5008]: Accepted publickey for core from 10.200.16.10 port 51096 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:40:13.909427 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:40:13.913711 systemd-logind[2030]: New session 16 of user core. Jan 13 23:40:13.920494 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 23:40:14.438107 sshd[5012]: Connection closed by 10.200.16.10 port 51096 Jan 13 23:40:14.438725 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Jan 13 23:40:14.443475 systemd-logind[2030]: Session 16 logged out. Waiting for processes to exit. Jan 13 23:40:14.443711 systemd[1]: sshd@12-10.200.20.17:22-10.200.16.10:51096.service: Deactivated successfully. Jan 13 23:40:14.445424 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 23:40:14.447890 systemd-logind[2030]: Removed session 16. Jan 13 23:40:14.530629 systemd[1]: Started sshd@13-10.200.20.17:22-10.200.16.10:51110.service - OpenSSH per-connection server daemon (10.200.16.10:51110). Jan 13 23:40:14.951459 sshd[5035]: Accepted publickey for core from 10.200.16.10 port 51110 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:40:14.953725 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:40:14.957547 systemd-logind[2030]: New session 17 of user core. Jan 13 23:40:14.967473 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 23:40:15.313101 sshd[5054]: Connection closed by 10.200.16.10 port 51110 Jan 13 23:40:15.313788 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Jan 13 23:40:15.318609 systemd-logind[2030]: Session 17 logged out. Waiting for processes to exit. Jan 13 23:40:15.318972 systemd[1]: sshd@13-10.200.20.17:22-10.200.16.10:51110.service: Deactivated successfully. Jan 13 23:40:15.320730 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 23:40:15.323458 systemd-logind[2030]: Removed session 17. Jan 13 23:40:15.406555 systemd[1]: Started sshd@14-10.200.20.17:22-10.200.16.10:51116.service - OpenSSH per-connection server daemon (10.200.16.10:51116). Jan 13 23:40:15.828542 sshd[5063]: Accepted publickey for core from 10.200.16.10 port 51116 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0 Jan 13 23:40:15.829818 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:40:15.833967 systemd-logind[2030]: New session 18 of user core. Jan 13 23:40:15.842516 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 13 23:40:16.103774 sshd[5067]: Connection closed by 10.200.16.10 port 51116 Jan 13 23:40:16.104290 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Jan 13 23:40:16.108694 systemd[1]: sshd@14-10.200.20.17:22-10.200.16.10:51116.service: Deactivated successfully. Jan 13 23:40:16.110721 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 23:40:16.112156 systemd-logind[2030]: Session 18 logged out. Waiting for processes to exit. Jan 13 23:40:16.113215 systemd-logind[2030]: Removed session 18. Jan 13 23:40:19.237601 update_engine[2033]: I20260113 23:40:19.237542 2033 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 13 23:40:19.237601 update_engine[2033]: I20260113 23:40:19.237593 2033 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 13 23:40:19.237997 update_engine[2033]: I20260113 23:40:19.237748 2033 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 13 23:40:19.238071 update_engine[2033]: I20260113 23:40:19.238042 2033 omaha_request_params.cc:62] Current group set to developer Jan 13 23:40:19.238157 update_engine[2033]: I20260113 23:40:19.238142 2033 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 13 23:40:19.238157 update_engine[2033]: I20260113 23:40:19.238152 2033 update_attempter.cc:643] Scheduling an action processor start. Jan 13 23:40:19.238191 update_engine[2033]: I20260113 23:40:19.238166 2033 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 23:40:19.238219 update_engine[2033]: I20260113 23:40:19.238206 2033 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 13 23:40:19.238278 update_engine[2033]: I20260113 23:40:19.238265 2033 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 23:40:19.238278 update_engine[2033]: I20260113 23:40:19.238273 2033 omaha_request_action.cc:272] Request: Jan 13 23:40:19.238278 update_engine[2033]: Jan 13 23:40:19.238278 update_engine[2033]: Jan 13 23:40:19.238278 update_engine[2033]: Jan 13 23:40:19.238278 update_engine[2033]: Jan 13 23:40:19.238278 update_engine[2033]: Jan 13 23:40:19.238278 update_engine[2033]: Jan 13 23:40:19.238278 update_engine[2033]: Jan 13 23:40:19.238278 update_engine[2033]: Jan 13 23:40:19.238432 update_engine[2033]: I20260113 23:40:19.238278 2033 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 23:40:19.238729 locksmithd[2139]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 13 23:40:19.239778 update_engine[2033]: I20260113 23:40:19.239568 2033 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 23:40:19.240052 update_engine[2033]: I20260113 23:40:19.240017 2033 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 23:40:19.316002 update_engine[2033]: E20260113 23:40:19.315934 2033 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 13 23:40:19.316137 update_engine[2033]: I20260113 23:40:19.316042 2033 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 13 23:40:21.184999 systemd[1]: Started sshd@15-10.200.20.17:22-10.200.16.10:46372.service - OpenSSH per-connection server daemon (10.200.16.10:46372). 
Jan 13 23:40:21.574706 sshd[5102]: Accepted publickey for core from 10.200.16.10 port 46372 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0
Jan 13 23:40:21.575982 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 23:40:21.580382 systemd-logind[2030]: New session 19 of user core.
Jan 13 23:40:21.588531 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 23:40:21.829280 sshd[5106]: Connection closed by 10.200.16.10 port 46372
Jan 13 23:40:21.829922 sshd-session[5102]: pam_unix(sshd:session): session closed for user core
Jan 13 23:40:21.834887 systemd[1]: sshd@15-10.200.20.17:22-10.200.16.10:46372.service: Deactivated successfully.
Jan 13 23:40:21.837082 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 23:40:21.838107 systemd-logind[2030]: Session 19 logged out. Waiting for processes to exit.
Jan 13 23:40:21.840124 systemd-logind[2030]: Removed session 19.
Jan 13 23:40:26.924718 systemd[1]: Started sshd@16-10.200.20.17:22-10.200.16.10:46388.service - OpenSSH per-connection server daemon (10.200.16.10:46388).
Jan 13 23:40:27.342297 sshd[5138]: Accepted publickey for core from 10.200.16.10 port 46388 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0
Jan 13 23:40:27.343317 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 23:40:27.347126 systemd-logind[2030]: New session 20 of user core.
Jan 13 23:40:27.354494 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 23:40:27.611022 sshd[5142]: Connection closed by 10.200.16.10 port 46388
Jan 13 23:40:27.612528 sshd-session[5138]: pam_unix(sshd:session): session closed for user core
Jan 13 23:40:27.615495 systemd[1]: sshd@16-10.200.20.17:22-10.200.16.10:46388.service: Deactivated successfully.
Jan 13 23:40:27.617650 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 23:40:27.619012 systemd-logind[2030]: Session 20 logged out. Waiting for processes to exit.
Jan 13 23:40:27.620639 systemd-logind[2030]: Removed session 20.
Jan 13 23:40:29.235942 update_engine[2033]: I20260113 23:40:29.235400 2033 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 13 23:40:29.235942 update_engine[2033]: I20260113 23:40:29.235502 2033 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 13 23:40:29.235942 update_engine[2033]: I20260113 23:40:29.235826 2033 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 13 23:40:29.342170 update_engine[2033]: E20260113 23:40:29.342015 2033 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Jan 13 23:40:29.342170 update_engine[2033]: I20260113 23:40:29.342137 2033 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 13 23:40:32.717390 systemd[1]: Started sshd@17-10.200.20.17:22-10.200.16.10:58378.service - OpenSSH per-connection server daemon (10.200.16.10:58378).
Jan 13 23:40:33.135446 sshd[5177]: Accepted publickey for core from 10.200.16.10 port 58378 ssh2: RSA SHA256:vpLozeVYXEfLph4uLTbpR5MXktCD3IYoH3cJhTVWYv0
Jan 13 23:40:33.136433 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 23:40:33.140578 systemd-logind[2030]: New session 21 of user core.
Jan 13 23:40:33.146617 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 23:40:33.405507 sshd[5181]: Connection closed by 10.200.16.10 port 58378
Jan 13 23:40:33.406192 sshd-session[5177]: pam_unix(sshd:session): session closed for user core
Jan 13 23:40:33.410728 systemd[1]: sshd@17-10.200.20.17:22-10.200.16.10:58378.service: Deactivated successfully.
Jan 13 23:40:33.412766 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 23:40:33.413684 systemd-logind[2030]: Session 21 logged out. Waiting for processes to exit.
Jan 13 23:40:33.414879 systemd-logind[2030]: Removed session 21.