Jan 29 16:14:02.895906 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 16:14:02.895928 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Jan 29 14:53:00 -00 2025
Jan 29 16:14:02.895938 kernel: KASLR enabled
Jan 29 16:14:02.895944 kernel: efi: EFI v2.7 by EDK II
Jan 29 16:14:02.895949 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 
Jan 29 16:14:02.895955 kernel: random: crng init done
Jan 29 16:14:02.895962 kernel: secureboot: Secure boot disabled
Jan 29 16:14:02.895968 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:14:02.895974 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 16:14:02.895982 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS  BXPC     00000001      01000013)
Jan 29 16:14:02.895988 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:14:02.895994 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:14:02.896000 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:14:02.896006 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:14:02.896013 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:14:02.896021 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:14:02.896027 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:14:02.896033 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:14:02.896039 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 16:14:02.896046 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 16:14:02.896052 kernel: NUMA: Failed to initialise from firmware
Jan 29 16:14:02.896059 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 16:14:02.896065 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 16:14:02.896071 kernel: Zone ranges:
Jan 29 16:14:02.896077 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 16:14:02.896085 kernel:   DMA32    empty
Jan 29 16:14:02.896091 kernel:   Normal   empty
Jan 29 16:14:02.896097 kernel: Movable zone start for each node
Jan 29 16:14:02.896103 kernel: Early memory node ranges
Jan 29 16:14:02.896110 kernel:   node   0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 29 16:14:02.896116 kernel:   node   0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 29 16:14:02.896122 kernel:   node   0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 29 16:14:02.896129 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 16:14:02.896135 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 16:14:02.896141 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 16:14:02.896147 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 16:14:02.896154 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 16:14:02.896161 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 16:14:02.896167 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 16:14:02.896174 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 16:14:02.896183 kernel: psci: probing for conduit method from ACPI.
Jan 29 16:14:02.896189 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 16:14:02.896196 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 16:14:02.896204 kernel: psci: Trusted OS migration not required
Jan 29 16:14:02.896211 kernel: psci: SMC Calling Convention v1.1
Jan 29 16:14:02.896217 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 16:14:02.896224 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 16:14:02.896231 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 16:14:02.896237 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Jan 29 16:14:02.896244 kernel: Detected PIPT I-cache on CPU0
Jan 29 16:14:02.896251 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 16:14:02.896257 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 16:14:02.896264 kernel: CPU features: detected: Spectre-v4
Jan 29 16:14:02.896272 kernel: CPU features: detected: Spectre-BHB
Jan 29 16:14:02.896278 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 16:14:02.896285 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 16:14:02.896292 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 16:14:02.896298 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 16:14:02.896305 kernel: alternatives: applying boot alternatives
Jan 29 16:14:02.896312 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:14:02.896320 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:14:02.896326 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:14:02.896333 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:14:02.896340 kernel: Fallback order for Node 0: 0 
Jan 29 16:14:02.896348 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Jan 29 16:14:02.896354 kernel: Policy zone: DMA
Jan 29 16:14:02.896361 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:14:02.896367 kernel: software IO TLB: area num 4.
Jan 29 16:14:02.896374 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 16:14:02.896381 kernel: Memory: 2387540K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184748K reserved, 0K cma-reserved)
Jan 29 16:14:02.896388 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 16:14:02.896395 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:14:02.896402 kernel: rcu:         RCU event tracing is enabled.
Jan 29 16:14:02.896409 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 16:14:02.896416 kernel:         Trampoline variant of Tasks RCU enabled.
Jan 29 16:14:02.896422 kernel:         Tracing variant of Tasks RCU enabled.
Jan 29 16:14:02.896431 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:14:02.896437 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 16:14:02.896444 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 16:14:02.896451 kernel: GICv3: 256 SPIs implemented
Jan 29 16:14:02.896458 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 16:14:02.896464 kernel: Root IRQ handler: gic_handle_irq
Jan 29 16:14:02.896471 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 16:14:02.896477 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 16:14:02.896484 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 16:14:02.896491 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 16:14:02.896498 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 16:14:02.896507 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 16:14:02.896514 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 16:14:02.896521 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:14:02.896545 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:14:02.896552 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 16:14:02.896559 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 16:14:02.896566 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 16:14:02.896572 kernel: arm-pv: using stolen time PV
Jan 29 16:14:02.896579 kernel: Console: colour dummy device 80x25
Jan 29 16:14:02.896586 kernel: ACPI: Core revision 20230628
Jan 29 16:14:02.896593 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 16:14:02.896602 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:14:02.896609 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:14:02.896616 kernel: landlock: Up and running.
Jan 29 16:14:02.896622 kernel: SELinux:  Initializing.
Jan 29 16:14:02.896629 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:14:02.896639 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:14:02.896647 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:14:02.896654 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:14:02.896665 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:14:02.896674 kernel: rcu:         Max phase no-delay instances is 400.
Jan 29 16:14:02.896681 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 16:14:02.896688 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 16:14:02.896695 kernel: Remapping and enabling EFI services.
Jan 29 16:14:02.896702 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:14:02.896709 kernel: Detected PIPT I-cache on CPU1
Jan 29 16:14:02.896716 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 16:14:02.896723 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 16:14:02.896730 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:14:02.896738 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 16:14:02.896746 kernel: Detected PIPT I-cache on CPU2
Jan 29 16:14:02.896757 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 16:14:02.896770 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 16:14:02.896780 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:14:02.896788 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 16:14:02.896795 kernel: Detected PIPT I-cache on CPU3
Jan 29 16:14:02.896802 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 16:14:02.896810 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 16:14:02.896819 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:14:02.896826 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 16:14:02.896833 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 16:14:02.896840 kernel: SMP: Total of 4 processors activated.
Jan 29 16:14:02.896848 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 16:14:02.896855 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 16:14:02.896862 kernel: CPU features: detected: Common not Private translations
Jan 29 16:14:02.896869 kernel: CPU features: detected: CRC32 instructions
Jan 29 16:14:02.896878 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 16:14:02.896885 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 16:14:02.896892 kernel: CPU features: detected: LSE atomic instructions
Jan 29 16:14:02.896899 kernel: CPU features: detected: Privileged Access Never
Jan 29 16:14:02.896906 kernel: CPU features: detected: RAS Extension Support
Jan 29 16:14:02.896913 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 16:14:02.896920 kernel: CPU: All CPU(s) started at EL1
Jan 29 16:14:02.896928 kernel: alternatives: applying system-wide alternatives
Jan 29 16:14:02.896935 kernel: devtmpfs: initialized
Jan 29 16:14:02.896942 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:14:02.896951 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 16:14:02.896959 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:14:02.896966 kernel: SMBIOS 3.0.0 present.
Jan 29 16:14:02.896973 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 16:14:02.896980 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:14:02.896987 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 16:14:02.896994 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 16:14:02.897002 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 16:14:02.897010 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:14:02.897017 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 29 16:14:02.897024 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:14:02.897032 kernel: cpuidle: using governor menu
Jan 29 16:14:02.897039 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 16:14:02.897046 kernel: ASID allocator initialised with 32768 entries
Jan 29 16:14:02.897053 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:14:02.897060 kernel: Serial: AMBA PL011 UART driver
Jan 29 16:14:02.897067 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 16:14:02.897076 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 16:14:02.897083 kernel: Modules: 509280 pages in range for PLT usage
Jan 29 16:14:02.897090 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:14:02.897097 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:14:02.897104 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 16:14:02.897111 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 16:14:02.897117 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:14:02.897124 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:14:02.897131 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 16:14:02.897138 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 16:14:02.897147 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:14:02.897154 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:14:02.897160 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:14:02.897167 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:14:02.897174 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:14:02.897181 kernel: ACPI: Interpreter enabled
Jan 29 16:14:02.897188 kernel: ACPI: Using GIC for interrupt routing
Jan 29 16:14:02.897195 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 16:14:02.897202 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 16:14:02.897210 kernel: printk: console [ttyAMA0] enabled
Jan 29 16:14:02.897217 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:14:02.897345 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:14:02.897418 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 16:14:02.897483 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 16:14:02.897910 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 16:14:02.898003 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 16:14:02.898017 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Jan 29 16:14:02.898025 kernel: PCI host bridge to bus 0000:00
Jan 29 16:14:02.898095 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 16:14:02.898157 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Jan 29 16:14:02.898215 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 16:14:02.898311 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:14:02.898431 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 16:14:02.898536 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 16:14:02.898615 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Jan 29 16:14:02.898694 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 16:14:02.898763 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 16:14:02.898828 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 16:14:02.898893 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 16:14:02.898959 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Jan 29 16:14:02.899023 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 16:14:02.899082 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Jan 29 16:14:02.899140 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 16:14:02.899149 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 16:14:02.899157 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 16:14:02.899164 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 16:14:02.899171 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 16:14:02.899180 kernel: iommu: Default domain type: Translated
Jan 29 16:14:02.899187 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 16:14:02.899194 kernel: efivars: Registered efivars operations
Jan 29 16:14:02.899201 kernel: vgaarb: loaded
Jan 29 16:14:02.899208 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 16:14:02.899215 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:14:02.899222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:14:02.899229 kernel: pnp: PnP ACPI init
Jan 29 16:14:02.899300 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 16:14:02.899312 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 16:14:02.899319 kernel: NET: Registered PF_INET protocol family
Jan 29 16:14:02.899326 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:14:02.899333 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:14:02.899340 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:14:02.899348 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:14:02.899355 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:14:02.899362 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:14:02.899369 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:14:02.899377 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:14:02.899384 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:14:02.899391 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:14:02.899398 kernel: kvm [1]: HYP mode not available
Jan 29 16:14:02.899405 kernel: Initialise system trusted keyrings
Jan 29 16:14:02.899412 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:14:02.899419 kernel: Key type asymmetric registered
Jan 29 16:14:02.899426 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:14:02.899433 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 16:14:02.899441 kernel: io scheduler mq-deadline registered
Jan 29 16:14:02.899448 kernel: io scheduler kyber registered
Jan 29 16:14:02.899455 kernel: io scheduler bfq registered
Jan 29 16:14:02.899463 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 16:14:02.899469 kernel: ACPI: button: Power Button [PWRB]
Jan 29 16:14:02.899477 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 16:14:02.899565 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 16:14:02.899577 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:14:02.899589 kernel: thunder_xcv, ver 1.0
Jan 29 16:14:02.899599 kernel: thunder_bgx, ver 1.0
Jan 29 16:14:02.899606 kernel: nicpf, ver 1.0
Jan 29 16:14:02.899613 kernel: nicvf, ver 1.0
Jan 29 16:14:02.899701 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 16:14:02.899770 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T16:14:02 UTC (1738167242)
Jan 29 16:14:02.899779 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 16:14:02.899787 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 16:14:02.899794 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 16:14:02.899803 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 16:14:02.899810 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:14:02.899817 kernel: Segment Routing with IPv6
Jan 29 16:14:02.899824 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:14:02.899831 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:14:02.899838 kernel: Key type dns_resolver registered
Jan 29 16:14:02.899845 kernel: registered taskstats version 1
Jan 29 16:14:02.899853 kernel: Loading compiled-in X.509 certificates
Jan 29 16:14:02.899860 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6aa2640fb67e4af9702410ddab8a5c8b9fc0d77b'
Jan 29 16:14:02.899868 kernel: Key type .fscrypt registered
Jan 29 16:14:02.899875 kernel: Key type fscrypt-provisioning registered
Jan 29 16:14:02.899882 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:14:02.899889 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:14:02.899896 kernel: ima: No architecture policies found
Jan 29 16:14:02.899903 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 16:14:02.899910 kernel: clk: Disabling unused clocks
Jan 29 16:14:02.899917 kernel: Freeing unused kernel memory: 38336K
Jan 29 16:14:02.899924 kernel: Run /init as init process
Jan 29 16:14:02.899932 kernel:   with arguments:
Jan 29 16:14:02.899939 kernel:     /init
Jan 29 16:14:02.899946 kernel:   with environment:
Jan 29 16:14:02.899953 kernel:     HOME=/
Jan 29 16:14:02.899960 kernel:     TERM=linux
Jan 29 16:14:02.899967 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:14:02.899974 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:14:02.899984 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:14:02.899993 systemd[1]: Detected virtualization kvm.
Jan 29 16:14:02.900001 systemd[1]: Detected architecture arm64.
Jan 29 16:14:02.900008 systemd[1]: Running in initrd.
Jan 29 16:14:02.900015 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:14:02.900023 systemd[1]: Hostname set to <localhost>.
Jan 29 16:14:02.900030 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:14:02.900037 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:14:02.900045 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:14:02.900054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:14:02.900062 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:14:02.900069 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:14:02.900077 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:14:02.900085 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:14:02.900094 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:14:02.900103 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:14:02.900110 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:14:02.900118 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:14:02.900125 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:14:02.900137 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:14:02.900145 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:14:02.900155 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:14:02.900166 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:14:02.900175 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:14:02.900185 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:14:02.900193 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:14:02.900200 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:14:02.900208 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:14:02.900216 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:14:02.900223 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:14:02.900230 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:14:02.900238 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:14:02.900247 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:14:02.900254 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:14:02.900262 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:14:02.900269 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:14:02.900277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:14:02.900284 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:14:02.900292 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:14:02.900301 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:14:02.900324 systemd-journald[238]: Collecting audit messages is disabled.
Jan 29 16:14:02.900344 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:14:02.900352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:14:02.900360 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:14:02.900368 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:14:02.900375 kernel: Bridge firewalling registered
Jan 29 16:14:02.900383 systemd-journald[238]: Journal started
Jan 29 16:14:02.900401 systemd-journald[238]: Runtime Journal (/run/log/journal/9bb2548972e34b268634dad43efd74f1) is 5.9M, max 47.3M, 41.4M free.
Jan 29 16:14:02.882567 systemd-modules-load[240]: Inserted module 'overlay'
Jan 29 16:14:02.902236 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:14:02.900631 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 29 16:14:02.904242 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:14:02.905198 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:14:02.908326 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:14:02.909650 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:14:02.911575 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:14:02.919261 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:14:02.920871 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:14:02.923278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:14:02.924875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:14:02.935766 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:14:02.937561 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:14:02.944904 dracut-cmdline[279]: dracut-dracut-053
Jan 29 16:14:02.947220 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:14:02.971754 systemd-resolved[282]: Positive Trust Anchors:
Jan 29 16:14:02.971770 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:14:02.971800 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:14:02.976401 systemd-resolved[282]: Defaulting to hostname 'linux'.
Jan 29 16:14:02.977560 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:14:02.978616 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:14:03.011575 kernel: SCSI subsystem initialized
Jan 29 16:14:03.016543 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:14:03.023570 kernel: iscsi: registered transport (tcp)
Jan 29 16:14:03.035799 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:14:03.035820 kernel: QLogic iSCSI HBA Driver
Jan 29 16:14:03.075874 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:14:03.085710 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:14:03.101543 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:14:03.101576 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:14:03.101586 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:14:03.148557 kernel: raid6: neonx8   gen() 15736 MB/s
Jan 29 16:14:03.165555 kernel: raid6: neonx4   gen() 15739 MB/s
Jan 29 16:14:03.182557 kernel: raid6: neonx2   gen() 13154 MB/s
Jan 29 16:14:03.199545 kernel: raid6: neonx1   gen() 10492 MB/s
Jan 29 16:14:03.216540 kernel: raid6: int64x8  gen()  6791 MB/s
Jan 29 16:14:03.233548 kernel: raid6: int64x4  gen()  7343 MB/s
Jan 29 16:14:03.250541 kernel: raid6: int64x2  gen()  6108 MB/s
Jan 29 16:14:03.267548 kernel: raid6: int64x1  gen()  5052 MB/s
Jan 29 16:14:03.267573 kernel: raid6: using algorithm neonx4 gen() 15739 MB/s
Jan 29 16:14:03.284543 kernel: raid6: .... xor() 12345 MB/s, rmw enabled
Jan 29 16:14:03.284556 kernel: raid6: using neon recovery algorithm
Jan 29 16:14:03.289727 kernel: xor: measuring software checksum speed
Jan 29 16:14:03.289755 kernel:    8regs           : 21647 MB/sec
Jan 29 16:14:03.289772 kernel:    32regs          : 21670 MB/sec
Jan 29 16:14:03.290642 kernel:    arm64_neon      : 27860 MB/sec
Jan 29 16:14:03.290662 kernel: xor: using function: arm64_neon (27860 MB/sec)
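The kernel benchmarks every candidate implementation and keeps the fastest, which is why neonx4 wins the raid6 gen() contest and arm64_neon wins xor above. A toy sketch of that selection rule, using only the throughput figures printed in this log (the dict and function names are mine):

```python
# Pick the fastest implementation from benchmark results, mirroring how
# the kernel chose neonx4 for raid6 gen() and arm64_neon for xor above.
raid6_gen = {"neonx8": 15736, "neonx4": 15739, "neonx2": 13154,
             "neonx1": 10492, "int64x8": 6791, "int64x4": 7343,
             "int64x2": 6108, "int64x1": 5052}   # MB/s from the log
xor_funcs = {"8regs": 21647, "32regs": 21670,
             "arm64_neon": 27860}                # MB/sec from the log

def fastest(results: dict) -> str:
    """Return the name with the highest measured throughput."""
    return max(results, key=results.get)

print(fastest(raid6_gen))  # neonx4
print(fastest(xor_funcs))  # arm64_neon
```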
Jan 29 16:14:03.341551 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:14:03.351597 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:14:03.363758 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:14:03.378552 systemd-udevd[465]: Using default interface naming scheme 'v255'.
Jan 29 16:14:03.382214 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:14:03.390670 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:14:03.401315 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Jan 29 16:14:03.426296 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:14:03.448762 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:14:03.486508 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:14:03.495667 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:14:03.510591 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:14:03.511976 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:14:03.513488 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:14:03.514420 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:14:03.521656 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:14:03.532494 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:14:03.535853 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 16:14:03.540979 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 16:14:03.541879 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:14:03.541897 kernel: GPT:9289727 != 19775487
Jan 29 16:14:03.541907 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:14:03.541916 kernel: GPT:9289727 != 19775487
Jan 29 16:14:03.541935 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:14:03.541946 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
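The GPT warnings above are expected on a first boot from a resized image: the backup GPT header belongs at the last LBA of the device, but the image was built for a smaller disk, so it sits at LBA 9289727 instead of 19775487. A small sketch of the arithmetic behind the kernel's complaint, using the values from the log:

```python
# GPT keeps its backup header at the last LBA of the disk. The image was
# built for a smaller disk, so on this 10 GB device the backup header is
# no longer where the primary header expects it.
total_sectors = 19_775_488            # from the virtio_blk line above
expected_alt_lba = total_sectors - 1  # backup header belongs at the last LBA
found_alt_lba = 9_289_727             # where the image actually put it

print(expected_alt_lba)                   # 19775487 ("9289727 != 19775487")
print(expected_alt_lba - found_alt_lba)   # sectors of unclaimed space
```

The later disk-uuid messages ("Primary Header is updated. ... The operation has completed successfully.") show the headers being rewritten, and the follow-up `vda:` partition rescans log no further GPT warnings.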
Jan 29 16:14:03.544669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:14:03.544774 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:14:03.551568 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:14:03.553556 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:14:03.553800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:14:03.556461 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:14:03.566788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:14:03.572574 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (522)
Jan 29 16:14:03.578584 kernel: BTRFS: device fsid d7b4a0ef-7a03-4a6c-8f31-7cafae04447a devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (518)
Jan 29 16:14:03.582566 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:14:03.595002 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 16:14:03.602089 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 16:14:03.609232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:14:03.615474 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 16:14:03.616772 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 16:14:03.627729 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:14:03.629319 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:14:03.634372 disk-uuid[555]: Primary Header is updated.
Jan 29 16:14:03.634372 disk-uuid[555]: Secondary Entries is updated.
Jan 29 16:14:03.634372 disk-uuid[555]: Secondary Header is updated.
Jan 29 16:14:03.642803 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:14:03.652510 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:14:04.651549 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:14:04.651663 disk-uuid[556]: The operation has completed successfully.
Jan 29 16:14:04.680028 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:14:04.680122 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:14:04.713680 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:14:04.716234 sh[575]: Success
Jan 29 16:14:04.731555 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
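verity-setup checks /dev/mapper/usr against the root hash passed on the kernel command line (`verity.usrhash=...` in the dracut-cmdline line at 16:14:02.947220), here using the sha256-ce implementation. As a sanity check only (this is not how dm-verity itself validates the parameter; the helper name is mine), the usrhash is a well-formed SHA-256 hex digest:

```python
# The verity.usrhash kernel parameter from the dracut-cmdline line above.
usrhash = "efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac"

def is_sha256_hex(s: str) -> bool:
    """A SHA-256 digest is exactly 64 lowercase hex characters."""
    return len(s) == 64 and all(c in "0123456789abcdef" for c in s)

print(is_sha256_hex(usrhash))  # True
```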
Jan 29 16:14:04.756909 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:14:04.764855 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:14:04.766183 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:14:04.775005 kernel: BTRFS info (device dm-0): first mount of filesystem d7b4a0ef-7a03-4a6c-8f31-7cafae04447a
Jan 29 16:14:04.775037 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:14:04.775047 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:14:04.776535 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:14:04.776549 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:14:04.780739 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:14:04.781506 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:14:04.793724 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:14:04.795001 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:14:04.803817 kernel: BTRFS info (device vda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:14:04.803854 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:14:04.803865 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:14:04.805602 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:14:04.813548 kernel: BTRFS info (device vda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:14:04.818186 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:14:04.826716 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:14:04.871814 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:14:04.883774 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:14:04.895769 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:14:04.920973 ignition[669]: Ignition 2.20.0
Jan 29 16:14:04.920985 ignition[669]: Stage: fetch-offline
Jan 29 16:14:04.921017 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:14:04.921026 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:14:04.921196 ignition[669]: parsed url from cmdline: ""
Jan 29 16:14:04.921200 ignition[669]: no config URL provided
Jan 29 16:14:04.921205 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:14:04.921212 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:14:04.921233 ignition[669]: op(1): [started]  loading QEMU firmware config module
Jan 29 16:14:04.921237 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 16:14:04.923009 systemd-networkd[767]: lo: Link UP
Jan 29 16:14:04.923012 systemd-networkd[767]: lo: Gained carrier
Jan 29 16:14:04.923797 systemd-networkd[767]: Enumeration completed
Jan 29 16:14:04.924190 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:14:04.924194 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:14:04.924309 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:14:04.924783 systemd-networkd[767]: eth0: Link UP
Jan 29 16:14:04.924786 systemd-networkd[767]: eth0: Gained carrier
Jan 29 16:14:04.924791 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:14:04.925784 systemd[1]: Reached target network.target - Network.
Jan 29 16:14:04.931103 ignition[669]: op(1): [finished] loading QEMU firmware config module
Jan 29 16:14:04.935563 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
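eth0 obtained 10.0.0.92/16 with gateway 10.0.0.1 via DHCP. A quick consistency check of that lease with Python's `ipaddress` module, purely illustrative (the values are the ones from the log line above):

```python
import ipaddress

# Values from the DHCPv4 lease logged above.
iface = ipaddress.ip_interface("10.0.0.92/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True: the gateway is directly reachable
```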
Jan 29 16:14:04.956665 ignition[669]: parsing config with SHA512: 308029906be6e02cc8d533c61a38cd77d78b93982bd7d8181b1d5ba34d674e4345d198273241eb326a4e56d9840cf179d56384cb836078bbcd59992ac5825622
Jan 29 16:14:04.962043 unknown[669]: fetched base config from "system"
Jan 29 16:14:04.962058 unknown[669]: fetched user config from "qemu"
Jan 29 16:14:04.962834 ignition[669]: fetch-offline: fetch-offline passed
Jan 29 16:14:04.962940 ignition[669]: Ignition finished successfully
Jan 29 16:14:04.964985 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:14:04.966556 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 16:14:04.974746 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:14:04.986646 ignition[778]: Ignition 2.20.0
Jan 29 16:14:04.986663 ignition[778]: Stage: kargs
Jan 29 16:14:04.986813 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:14:04.986824 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:14:04.987694 ignition[778]: kargs: kargs passed
Jan 29 16:14:04.987738 ignition[778]: Ignition finished successfully
Jan 29 16:14:04.989593 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:14:04.995729 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:14:05.004925 ignition[787]: Ignition 2.20.0
Jan 29 16:14:05.004934 ignition[787]: Stage: disks
Jan 29 16:14:05.005074 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:14:05.005083 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:14:05.005894 ignition[787]: disks: disks passed
Jan 29 16:14:05.005937 ignition[787]: Ignition finished successfully
Jan 29 16:14:05.007721 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:14:05.008815 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:14:05.009641 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:14:05.011179 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:14:05.012517 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:14:05.013922 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:14:05.015706 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:14:05.027884 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 16:14:05.032014 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:14:05.039660 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:14:05.080453 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:14:05.081459 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:14:05.081610 kernel: EXT4-fs (vda9): mounted filesystem 41c89329-6889-4dd8-82a1-efe68f55bab8 r/w with ordered data mode. Quota mode: none.
Jan 29 16:14:05.090629 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:14:05.092505 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:14:05.093638 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:14:05.093686 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:14:05.093709 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:14:05.098796 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:14:05.099564 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (806)
Jan 29 16:14:05.101636 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:14:05.102704 kernel: BTRFS info (device vda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:14:05.102723 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:14:05.102738 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:14:05.105546 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:14:05.106445 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:14:05.142318 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:14:05.146668 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:14:05.150365 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:14:05.154119 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:14:05.222252 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:14:05.230626 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:14:05.231928 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:14:05.236544 kernel: BTRFS info (device vda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:14:05.250760 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:14:05.251672 ignition[918]: INFO     : Ignition 2.20.0
Jan 29 16:14:05.251672 ignition[918]: INFO     : Stage: mount
Jan 29 16:14:05.253355 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:14:05.253561 ignition[918]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:14:05.253561 ignition[918]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:14:05.253561 ignition[918]: INFO     : mount: mount passed
Jan 29 16:14:05.253561 ignition[918]: INFO     : Ignition finished successfully
Jan 29 16:14:05.262642 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:14:05.873454 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:14:05.886761 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:14:05.893069 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (932)
Jan 29 16:14:05.893100 kernel: BTRFS info (device vda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:14:05.893111 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:14:05.894546 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:14:05.896542 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:14:05.897319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:14:05.912960 ignition[949]: INFO     : Ignition 2.20.0
Jan 29 16:14:05.912960 ignition[949]: INFO     : Stage: files
Jan 29 16:14:05.914178 ignition[949]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:14:05.914178 ignition[949]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:14:05.914178 ignition[949]: DEBUG    : files: compiled without relabeling support, skipping
Jan 29 16:14:05.916657 ignition[949]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 29 16:14:05.916657 ignition[949]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:14:05.919319 ignition[949]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:14:05.920498 ignition[949]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 29 16:14:05.921782 unknown[949]: wrote ssh authorized keys file for user: core
Jan 29 16:14:05.922721 ignition[949]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:14:05.924333 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 16:14:05.925825 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 16:14:06.034804 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 16:14:06.397769 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 16:14:06.399828 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 29 16:14:06.529707 systemd-networkd[767]: eth0: Gained IPv6LL
Jan 29 16:14:06.701502 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 16:14:06.963591 ignition[949]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 16:14:06.963591 ignition[949]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Jan 29 16:14:06.966451 ignition[949]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:14:06.966451 ignition[949]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:14:06.966451 ignition[949]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 16:14:06.966451 ignition[949]: INFO     : files: op(d): [started]  processing unit "coreos-metadata.service"
Jan 29 16:14:06.966451 ignition[949]: INFO     : files: op(d): op(e): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 16:14:06.966451 ignition[949]: INFO     : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 16:14:06.966451 ignition[949]: INFO     : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 29 16:14:06.966451 ignition[949]: INFO     : files: op(f): [started]  setting preset to disabled for "coreos-metadata.service"
Jan 29 16:14:06.982390 ignition[949]: INFO     : files: op(f): op(10): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 16:14:06.985093 ignition[949]: INFO     : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 16:14:06.986238 ignition[949]: INFO     : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 16:14:06.986238 ignition[949]: INFO     : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Jan 29 16:14:06.986238 ignition[949]: INFO     : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 16:14:06.986238 ignition[949]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:14:06.986238 ignition[949]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:14:06.986238 ignition[949]: INFO     : files: files passed
Jan 29 16:14:06.986238 ignition[949]: INFO     : Ignition finished successfully
Jan 29 16:14:06.989560 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:14:07.005783 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:14:07.008717 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:14:07.011271 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:14:07.011365 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:14:07.014554 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 16:14:07.017392 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:14:07.017392 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:14:07.019623 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:14:07.019892 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:14:07.021901 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:14:07.035689 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:14:07.052085 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:14:07.052180 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:14:07.054838 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:14:07.056366 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:14:07.057692 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:14:07.065660 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:14:07.076173 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:14:07.078168 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:14:07.087731 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:14:07.088628 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:14:07.090257 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:14:07.091576 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:14:07.091693 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:14:07.093711 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:14:07.095167 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:14:07.096565 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:14:07.097942 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:14:07.099414 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:14:07.100911 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:14:07.102264 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:14:07.103787 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:14:07.105220 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:14:07.106490 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:14:07.107628 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:14:07.107747 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:14:07.109619 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:14:07.111019 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:14:07.112532 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:14:07.115635 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:14:07.117462 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:14:07.117580 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:14:07.119651 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:14:07.119765 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:14:07.121318 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 16:14:07.122502 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 16:14:07.127578 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:14:07.128509 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 16:14:07.130081 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:14:07.131245 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:14:07.131321 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:14:07.132616 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:14:07.132698 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:14:07.133941 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:14:07.134050 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:14:07.135353 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:14:07.135447 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:14:07.148677 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:14:07.149336 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:14:07.149453 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:14:07.154810 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:14:07.155680 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:14:07.155812 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:14:07.157004 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:14:07.157106 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:14:07.161419 ignition[1005]: INFO     : Ignition 2.20.0
Jan 29 16:14:07.161419 ignition[1005]: INFO     : Stage: umount
Jan 29 16:14:07.161419 ignition[1005]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:14:07.161419 ignition[1005]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:14:07.161419 ignition[1005]: INFO     : umount: umount passed
Jan 29 16:14:07.161419 ignition[1005]: INFO     : Ignition finished successfully
Jan 29 16:14:07.162008 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:14:07.162088 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:14:07.163924 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:14:07.164039 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:14:07.166293 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:14:07.166742 systemd[1]: Stopped target network.target - Network.
Jan 29 16:14:07.167632 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:14:07.167700 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:14:07.169710 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:14:07.169794 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:14:07.170544 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:14:07.170584 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:14:07.172259 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:14:07.172306 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:14:07.173735 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:14:07.175007 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:14:07.178356 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:14:07.178465 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:14:07.181603 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:14:07.181848 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:14:07.181887 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:14:07.185675 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:14:07.185919 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:14:07.186038 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:14:07.190278 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:14:07.190736 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:14:07.190791 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:14:07.202655 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:14:07.203510 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:14:07.203604 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:14:07.205061 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:14:07.205106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:14:07.208035 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:14:07.208083 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:14:07.209662 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:14:07.213059 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:14:07.218480 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:14:07.218757 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:14:07.220413 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:14:07.220515 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:14:07.222425 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:14:07.222473 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:14:07.223973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:14:07.224004 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:14:07.225299 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:14:07.225343 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:14:07.227272 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:14:07.227340 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:14:07.229729 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:14:07.229780 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:14:07.246708 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:14:07.247516 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:14:07.247588 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:14:07.249950 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:14:07.249990 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:14:07.252630 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:14:07.252743 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:14:07.253948 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:14:07.254047 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:14:07.255876 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:14:07.257404 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:14:07.257467 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:14:07.259690 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:14:07.268022 systemd[1]: Switching root.
Jan 29 16:14:07.288253 systemd-journald[238]: Journal stopped
Jan 29 16:14:07.985402 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:14:07.985456 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 16:14:07.985468 kernel: SELinux:  policy capability open_perms=1
Jan 29 16:14:07.985477 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 16:14:07.985486 kernel: SELinux:  policy capability always_check_network=0
Jan 29 16:14:07.985497 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 16:14:07.985507 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 16:14:07.985516 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 29 16:14:07.985548 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 29 16:14:07.985558 kernel: audit: type=1403 audit(1738167247.421:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:14:07.985569 systemd[1]: Successfully loaded SELinux policy in 29.577ms.
Jan 29 16:14:07.985587 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.577ms.
Jan 29 16:14:07.985598 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:14:07.985609 systemd[1]: Detected virtualization kvm.
Jan 29 16:14:07.985622 systemd[1]: Detected architecture arm64.
Jan 29 16:14:07.985633 systemd[1]: Detected first boot.
Jan 29 16:14:07.985650 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:14:07.985662 zram_generator::config[1053]: No configuration found.
Jan 29 16:14:07.985673 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:14:07.985682 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:14:07.985693 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:14:07.985703 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:14:07.985715 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:14:07.985726 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:14:07.985736 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:14:07.985746 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:14:07.985757 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:14:07.985767 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:14:07.985777 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:14:07.985789 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:14:07.985801 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:14:07.985811 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:14:07.985821 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:14:07.985832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:14:07.985842 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:14:07.985852 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:14:07.985862 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:14:07.985872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:14:07.985882 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 16:14:07.985894 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:14:07.985904 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:14:07.985914 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:14:07.985924 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:14:07.985934 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:14:07.985944 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:14:07.985954 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:14:07.985964 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:14:07.985976 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:14:07.985987 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:14:07.985997 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:14:07.986008 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:14:07.986019 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:14:07.986029 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:14:07.986039 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:14:07.986049 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:14:07.986063 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:14:07.986075 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:14:07.986085 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:14:07.986095 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:14:07.986105 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:14:07.986115 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:14:07.986126 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:14:07.986136 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:14:07.986147 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:14:07.986156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:14:07.986168 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:14:07.986178 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:14:07.986188 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:14:07.986198 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:14:07.986208 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:14:07.986218 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:14:07.986229 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:14:07.986239 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:14:07.986251 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:14:07.986262 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:14:07.986271 kernel: fuse: init (API version 7.39)
Jan 29 16:14:07.986280 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:14:07.986290 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:14:07.986301 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:14:07.986311 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:14:07.986321 kernel: ACPI: bus type drm_connector registered
Jan 29 16:14:07.986331 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:14:07.986342 kernel: loop: module loaded
Jan 29 16:14:07.986352 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:14:07.986363 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:14:07.986373 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:14:07.986383 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:14:07.986395 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:14:07.986405 systemd[1]: Stopped verity-setup.service.
Jan 29 16:14:07.986416 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:14:07.986426 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:14:07.986435 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:14:07.986462 systemd-journald[1125]: Collecting audit messages is disabled.
Jan 29 16:14:07.986486 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:14:07.986499 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:14:07.986510 systemd-journald[1125]: Journal started
Jan 29 16:14:07.986540 systemd-journald[1125]: Runtime Journal (/run/log/journal/9bb2548972e34b268634dad43efd74f1) is 5.9M, max 47.3M, 41.4M free.
Jan 29 16:14:07.797034 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:14:07.809481 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 16:14:07.809889 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:14:07.989009 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:14:07.989609 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:14:07.990687 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:14:07.991843 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:14:07.993049 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:14:07.993212 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:14:07.994343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:14:07.994507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:14:07.995631 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:14:07.995813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:14:07.996949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:14:07.997113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:14:07.998280 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:14:07.998442 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:14:07.999774 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:14:07.999958 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:14:08.001062 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:14:08.002299 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:14:08.003507 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:14:08.004757 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:14:08.017032 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:14:08.025667 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:14:08.027468 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:14:08.028339 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:14:08.028377 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:14:08.030109 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:14:08.031993 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:14:08.033776 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:14:08.034671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:14:08.035778 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:14:08.037372 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:14:08.038423 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:14:08.041709 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:14:08.042592 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:14:08.045208 systemd-journald[1125]: Time spent on flushing to /var/log/journal/9bb2548972e34b268634dad43efd74f1 is 20.524ms for 864 entries.
Jan 29 16:14:08.045208 systemd-journald[1125]: System Journal (/var/log/journal/9bb2548972e34b268634dad43efd74f1) is 8M, max 195.6M, 187.6M free.
Jan 29 16:14:08.079149 systemd-journald[1125]: Received client request to flush runtime journal.
Jan 29 16:14:08.079199 kernel: loop0: detected capacity change from 0 to 113512
Jan 29 16:14:08.079216 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:14:08.044512 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:14:08.051626 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:14:08.057763 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:14:08.061652 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:14:08.062825 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:14:08.067407 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:14:08.068924 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:14:08.070889 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:14:08.075478 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:14:08.080743 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:14:08.084774 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:14:08.086273 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:14:08.088449 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:14:08.091676 kernel: loop1: detected capacity change from 0 to 194096
Jan 29 16:14:08.097907 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 16:14:08.105819 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:14:08.112609 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:14:08.119685 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:14:08.136150 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 29 16:14:08.136171 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 29 16:14:08.140468 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:14:08.142774 kernel: loop2: detected capacity change from 0 to 123192
Jan 29 16:14:08.176552 kernel: loop3: detected capacity change from 0 to 113512
Jan 29 16:14:08.181546 kernel: loop4: detected capacity change from 0 to 194096
Jan 29 16:14:08.187541 kernel: loop5: detected capacity change from 0 to 123192
Jan 29 16:14:08.191421 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 16:14:08.191843 (sd-merge)[1194]: Merged extensions into '/usr'.
Jan 29 16:14:08.194833 systemd[1]: Reload requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:14:08.194846 systemd[1]: Reloading...
Jan 29 16:14:08.245557 zram_generator::config[1219]: No configuration found.
Jan 29 16:14:08.333933 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:14:08.341513 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:14:08.390579 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:14:08.390707 systemd[1]: Reloading finished in 195 ms.
Jan 29 16:14:08.411482 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:14:08.413958 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:14:08.430771 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:14:08.432338 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:14:08.446186 systemd[1]: Reload requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:14:08.446203 systemd[1]: Reloading...
Jan 29 16:14:08.450295 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:14:08.450506 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:14:08.451155 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:14:08.451364 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jan 29 16:14:08.451418 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jan 29 16:14:08.453921 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:14:08.453933 systemd-tmpfiles[1257]: Skipping /boot
Jan 29 16:14:08.462311 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:14:08.462331 systemd-tmpfiles[1257]: Skipping /boot
Jan 29 16:14:08.494612 zram_generator::config[1288]: No configuration found.
Jan 29 16:14:08.580281 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:14:08.630604 systemd[1]: Reloading finished in 184 ms.
Jan 29 16:14:08.644071 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:14:08.660621 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:14:08.667895 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:14:08.669907 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:14:08.672208 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:14:08.678834 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:14:08.687507 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:14:08.693212 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:14:08.698273 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:14:08.701504 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:14:08.705874 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:14:08.708933 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:14:08.710252 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:14:08.710565 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:14:08.713835 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:14:08.715383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:14:08.715560 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:14:08.717046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:14:08.717186 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:14:08.718827 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:14:08.718967 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:14:08.725755 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Jan 29 16:14:08.726491 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:14:08.729230 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:14:08.737151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:14:08.739279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:14:08.741393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:14:08.742352 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:14:08.742462 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:14:08.743923 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:14:08.745950 augenrules[1359]: No rules
Jan 29 16:14:08.746554 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:14:08.749790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:14:08.753113 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:14:08.753292 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:14:08.754501 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:14:08.755874 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:14:08.756030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:14:08.757851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:14:08.757999 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:14:08.761117 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:14:08.761264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:14:08.762734 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:14:08.787881 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:14:08.793655 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 16:14:08.804572 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1383)
Jan 29 16:14:08.807687 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:14:08.808677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:14:08.816503 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:14:08.820194 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:14:08.823275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:14:08.827698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:14:08.828722 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:14:08.828764 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:14:08.832832 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:14:08.841264 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 16:14:08.842137 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:14:08.843924 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:14:08.845574 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:14:08.846692 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:14:08.846841 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:14:08.848162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:14:08.848323 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:14:08.850845 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:14:08.850981 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:14:08.852019 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:14:08.868322 augenrules[1397]: /sbin/augenrules: No change
Jan 29 16:14:08.877260 augenrules[1432]: No rules
Jan 29 16:14:08.878829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:14:08.880082 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:14:08.880310 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:14:08.890966 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:14:08.891986 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:14:08.892057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:14:08.918588 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:14:08.929760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:14:08.938830 systemd-networkd[1408]: lo: Link UP
Jan 29 16:14:08.938838 systemd-networkd[1408]: lo: Gained carrier
Jan 29 16:14:08.939915 systemd-networkd[1408]: Enumeration completed
Jan 29 16:14:08.940057 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:14:08.940476 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:14:08.940670 systemd-networkd[1408]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:14:08.947068 systemd-networkd[1408]: eth0: Link UP
Jan 29 16:14:08.947081 systemd-networkd[1408]: eth0: Gained carrier
Jan 29 16:14:08.947095 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:14:08.954683 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:14:08.956924 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:14:08.957996 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 16:14:08.960424 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:14:08.962295 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:14:08.964359 systemd-resolved[1326]: Positive Trust Anchors:
Jan 29 16:14:08.966200 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:14:08.966246 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:14:08.967854 systemd-networkd[1408]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:14:08.968511 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:14:08.970850 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
Jan 29 16:14:08.973007 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:14:08.535210 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 16:14:08.543631 systemd-journald[1125]: Time jumped backwards, rotating.
Jan 29 16:14:08.535249 systemd-resolved[1326]: Defaulting to hostname 'linux'.
Jan 29 16:14:08.537743 systemd-timesyncd[1410]: Initial clock synchronization to Wed 2025-01-29 16:14:08.535035 UTC.
Jan 29 16:14:08.539048 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:14:08.540158 systemd[1]: Reached target network.target - Network.
Jan 29 16:14:08.540950 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:14:08.544546 lvm[1450]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:14:08.563356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:14:08.577889 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:14:08.579031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:14:08.579912 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:14:08.580772 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:14:08.581678 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:14:08.582756 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:14:08.583653 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:14:08.584564 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:14:08.585536 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:14:08.585570 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:14:08.586211 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:14:08.587940 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:14:08.590039 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:14:08.593561 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:14:08.594715 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:14:08.595694 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:14:08.601218 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:14:08.602646 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:14:08.604611 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:14:08.605926 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:14:08.606876 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:14:08.607607 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:14:08.608273 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:14:08.608304 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:14:08.609211 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 16:14:08.610973 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 16:14:08.611784 lvm[1460]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:14:08.613558 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 16:14:08.615664 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 16:14:08.616537 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 16:14:08.617647 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 16:14:08.620553 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 16:14:08.624230 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 16:14:08.627250 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 16:14:08.631134 jq[1463]: false
Jan 29 16:14:08.631583 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 16:14:08.633299 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 16:14:08.636825 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 16:14:08.639925 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 16:14:08.641675 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 16:14:08.646012 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 16:14:08.648053 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 16:14:08.649354 dbus-daemon[1462]: [system] SELinux support is enabled
Jan 29 16:14:08.649911 jq[1479]: true
Jan 29 16:14:08.648223 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 16:14:08.648488 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 16:14:08.648659 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 16:14:08.649720 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 16:14:08.653049 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 16:14:08.653232 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found loop3
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found loop4
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found loop5
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found vda
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found vda1
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found vda2
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found vda3
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found usr
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found vda4
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found vda6
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found vda7
Jan 29 16:14:08.662362 extend-filesystems[1464]: Found vda9
Jan 29 16:14:08.662362 extend-filesystems[1464]: Checking size of /dev/vda9
Jan 29 16:14:08.666750 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 16:14:08.677736 tar[1482]: linux-arm64/helm
Jan 29 16:14:08.666806 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 16:14:08.667832 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 16:14:08.667852 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 16:14:08.674078 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 16:14:08.682435 jq[1483]: true
Jan 29 16:14:08.689946 extend-filesystems[1464]: Resized partition /dev/vda9
Jan 29 16:14:08.697776 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024)
Jan 29 16:14:08.708413 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 16:14:08.727364 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 16:14:08.731656 systemd-logind[1472]: New seat seat0.
Jan 29 16:14:08.735480 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 16:14:08.744414 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1368)
Jan 29 16:14:08.750077 update_engine[1478]: I20250129 16:14:08.749909  1478 main.cc:92] Flatcar Update Engine starting
Jan 29 16:14:08.758439 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 16:14:08.760597 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 16:14:08.761751 update_engine[1478]: I20250129 16:14:08.761023  1478 update_check_scheduler.cc:74] Next update check in 4m58s
Jan 29 16:14:08.769741 extend-filesystems[1504]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 16:14:08.769741 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 16:14:08.769741 extend-filesystems[1504]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 16:14:08.775092 extend-filesystems[1464]: Resized filesystem in /dev/vda9
Jan 29 16:14:08.780219 bash[1515]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:14:08.781577 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 16:14:08.783945 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 16:14:08.784153 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 16:14:08.785586 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 16:14:08.788497 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 16:14:08.831704 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 16:14:08.956032 containerd[1485]: time="2025-01-29T16:14:08.955934974Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 16:14:08.989221 containerd[1485]: time="2025-01-29T16:14:08.989090054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991074 containerd[1485]: time="2025-01-29T16:14:08.990465934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991074 containerd[1485]: time="2025-01-29T16:14:08.990498174Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 16:14:08.991074 containerd[1485]: time="2025-01-29T16:14:08.990531414Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 16:14:08.991074 containerd[1485]: time="2025-01-29T16:14:08.990680374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 16:14:08.991074 containerd[1485]: time="2025-01-29T16:14:08.990701254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991074 containerd[1485]: time="2025-01-29T16:14:08.990755454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991074 containerd[1485]: time="2025-01-29T16:14:08.990767134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991074 containerd[1485]: time="2025-01-29T16:14:08.990962854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991074 containerd[1485]: time="2025-01-29T16:14:08.990977534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991074 containerd[1485]: time="2025-01-29T16:14:08.990989494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991074 containerd[1485]: time="2025-01-29T16:14:08.990998174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991295 containerd[1485]: time="2025-01-29T16:14:08.991075054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991295 containerd[1485]: time="2025-01-29T16:14:08.991261974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991987 containerd[1485]: time="2025-01-29T16:14:08.991385454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:14:08.991987 containerd[1485]: time="2025-01-29T16:14:08.991602214Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 16:14:08.991987 containerd[1485]: time="2025-01-29T16:14:08.991691054Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 16:14:08.991987 containerd[1485]: time="2025-01-29T16:14:08.991738454Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 16:14:08.995783 containerd[1485]: time="2025-01-29T16:14:08.995713174Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 16:14:08.995783 containerd[1485]: time="2025-01-29T16:14:08.995770454Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 16:14:08.995783 containerd[1485]: time="2025-01-29T16:14:08.995786214Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 16:14:08.995891 containerd[1485]: time="2025-01-29T16:14:08.995806494Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 16:14:08.995891 containerd[1485]: time="2025-01-29T16:14:08.995821054Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 16:14:08.996089 containerd[1485]: time="2025-01-29T16:14:08.996054174Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 16:14:08.996521 containerd[1485]: time="2025-01-29T16:14:08.996486214Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 16:14:08.996702 containerd[1485]: time="2025-01-29T16:14:08.996679854Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 16:14:08.996736 containerd[1485]: time="2025-01-29T16:14:08.996707774Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 16:14:08.996736 containerd[1485]: time="2025-01-29T16:14:08.996722694Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 16:14:08.996769 containerd[1485]: time="2025-01-29T16:14:08.996735254Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 16:14:08.996769 containerd[1485]: time="2025-01-29T16:14:08.996747974Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 16:14:08.996769 containerd[1485]: time="2025-01-29T16:14:08.996760014Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 16:14:08.996816 containerd[1485]: time="2025-01-29T16:14:08.996772534Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 16:14:08.996816 containerd[1485]: time="2025-01-29T16:14:08.996786534Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 16:14:08.996816 containerd[1485]: time="2025-01-29T16:14:08.996798094Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 16:14:08.996816 containerd[1485]: time="2025-01-29T16:14:08.996809454Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 16:14:08.996877 containerd[1485]: time="2025-01-29T16:14:08.996819774Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 16:14:08.996877 containerd[1485]: time="2025-01-29T16:14:08.996838374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.996877 containerd[1485]: time="2025-01-29T16:14:08.996850294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.996877 containerd[1485]: time="2025-01-29T16:14:08.996863934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.996877 containerd[1485]: time="2025-01-29T16:14:08.996874814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.996985 containerd[1485]: time="2025-01-29T16:14:08.996887054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.996985 containerd[1485]: time="2025-01-29T16:14:08.996899174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.996985 containerd[1485]: time="2025-01-29T16:14:08.996910494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.996985 containerd[1485]: time="2025-01-29T16:14:08.996976334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.997048 containerd[1485]: time="2025-01-29T16:14:08.996995054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.997048 containerd[1485]: time="2025-01-29T16:14:08.997009854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.997048 containerd[1485]: time="2025-01-29T16:14:08.997021574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.997048 containerd[1485]: time="2025-01-29T16:14:08.997034174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.997048 containerd[1485]: time="2025-01-29T16:14:08.997045334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.997124 containerd[1485]: time="2025-01-29T16:14:08.997059094Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 16:14:08.997124 containerd[1485]: time="2025-01-29T16:14:08.997078934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.997124 containerd[1485]: time="2025-01-29T16:14:08.997092014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.997124 containerd[1485]: time="2025-01-29T16:14:08.997107134Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 16:14:08.997346 containerd[1485]: time="2025-01-29T16:14:08.997324894Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 16:14:08.997367 containerd[1485]: time="2025-01-29T16:14:08.997353654Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 16:14:08.997367 containerd[1485]: time="2025-01-29T16:14:08.997365054Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 16:14:08.997470 containerd[1485]: time="2025-01-29T16:14:08.997443774Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 16:14:08.997470 containerd[1485]: time="2025-01-29T16:14:08.997456534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.997470 containerd[1485]: time="2025-01-29T16:14:08.997468374Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 16:14:08.997528 containerd[1485]: time="2025-01-29T16:14:08.997477494Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 16:14:08.997528 containerd[1485]: time="2025-01-29T16:14:08.997487294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 16:14:08.997950 containerd[1485]: time="2025-01-29T16:14:08.997886894Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 16:14:08.997950 containerd[1485]: time="2025-01-29T16:14:08.997946854Z" level=info msg="Connect containerd service"
Jan 29 16:14:08.998070 containerd[1485]: time="2025-01-29T16:14:08.997977414Z" level=info msg="using legacy CRI server"
Jan 29 16:14:08.998070 containerd[1485]: time="2025-01-29T16:14:08.997984054Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 16:14:08.998215 containerd[1485]: time="2025-01-29T16:14:08.998199214Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 16:14:08.999088 containerd[1485]: time="2025-01-29T16:14:08.999062654Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 16:14:08.999683 containerd[1485]: time="2025-01-29T16:14:08.999328934Z" level=info msg="Start subscribing containerd event"
Jan 29 16:14:08.999683 containerd[1485]: time="2025-01-29T16:14:08.999382934Z" level=info msg="Start recovering state"
Jan 29 16:14:08.999683 containerd[1485]: time="2025-01-29T16:14:08.999453494Z" level=info msg="Start event monitor"
Jan 29 16:14:08.999683 containerd[1485]: time="2025-01-29T16:14:08.999463734Z" level=info msg="Start snapshots syncer"
Jan 29 16:14:08.999683 containerd[1485]: time="2025-01-29T16:14:08.999473094Z" level=info msg="Start cni network conf syncer for default"
Jan 29 16:14:08.999683 containerd[1485]: time="2025-01-29T16:14:08.999479334Z" level=info msg="Start streaming server"
Jan 29 16:14:08.999887 containerd[1485]: time="2025-01-29T16:14:08.999858654Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 16:14:08.999931 containerd[1485]: time="2025-01-29T16:14:08.999915134Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 16:14:09.000044 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 16:14:09.000360 containerd[1485]: time="2025-01-29T16:14:08.999966134Z" level=info msg="containerd successfully booted in 0.045088s"
Jan 29 16:14:09.054369 tar[1482]: linux-arm64/LICENSE
Jan 29 16:14:09.054369 tar[1482]: linux-arm64/README.md
Jan 29 16:14:09.066965 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 16:14:09.553139 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 16:14:09.570768 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 16:14:09.589672 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 16:14:09.594865 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 16:14:09.596426 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 16:14:09.598909 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 16:14:09.609329 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 16:14:09.612117 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 16:14:09.614000 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 16:14:09.615152 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 16:14:10.506545 systemd-networkd[1408]: eth0: Gained IPv6LL
Jan 29 16:14:10.508573 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 16:14:10.510823 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 16:14:10.525655 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 16:14:10.527928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:14:10.529791 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 16:14:10.543933 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 16:14:10.544164 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 16:14:10.546132 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 16:14:10.548734 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 16:14:11.024481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:14:11.025825 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 16:14:11.030474 systemd[1]: Startup finished in 514ms (kernel) + 4.729s (initrd) + 4.080s (userspace) = 9.324s.
Jan 29 16:14:11.030691 (kubelet)[1574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:14:11.511842 kubelet[1574]: E0129 16:14:11.511730    1574 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:14:11.514066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:14:11.514214 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:14:11.514530 systemd[1]: kubelet.service: Consumed 834ms CPU time, 242.9M memory peak.
Jan 29 16:14:15.023769 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 16:14:15.024893 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:44062.service - OpenSSH per-connection server daemon (10.0.0.1:44062).
Jan 29 16:14:15.091206 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 44062 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:14:15.093127 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:14:15.102917 systemd-logind[1472]: New session 1 of user core.
Jan 29 16:14:15.103909 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 16:14:15.119679 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 16:14:15.128563 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 16:14:15.132738 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 16:14:15.137334 (systemd)[1593]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 16:14:15.139829 systemd-logind[1472]: New session c1 of user core.
Jan 29 16:14:15.247422 systemd[1593]: Queued start job for default target default.target.
Jan 29 16:14:15.257336 systemd[1593]: Created slice app.slice - User Application Slice.
Jan 29 16:14:15.257491 systemd[1593]: Reached target paths.target - Paths.
Jan 29 16:14:15.257592 systemd[1593]: Reached target timers.target - Timers.
Jan 29 16:14:15.258928 systemd[1593]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 16:14:15.268144 systemd[1593]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 16:14:15.268208 systemd[1593]: Reached target sockets.target - Sockets.
Jan 29 16:14:15.268247 systemd[1593]: Reached target basic.target - Basic System.
Jan 29 16:14:15.268276 systemd[1593]: Reached target default.target - Main User Target.
Jan 29 16:14:15.268302 systemd[1593]: Startup finished in 122ms.
Jan 29 16:14:15.268474 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 16:14:15.269865 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 16:14:15.329111 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:44068.service - OpenSSH per-connection server daemon (10.0.0.1:44068).
Jan 29 16:14:15.373397 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 44068 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:14:15.374582 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:14:15.378893 systemd-logind[1472]: New session 2 of user core.
Jan 29 16:14:15.390599 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 16:14:15.441071 sshd[1606]: Connection closed by 10.0.0.1 port 44068
Jan 29 16:14:15.441366 sshd-session[1604]: pam_unix(sshd:session): session closed for user core
Jan 29 16:14:15.452444 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:44068.service: Deactivated successfully.
Jan 29 16:14:15.453856 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 16:14:15.454522 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit.
Jan 29 16:14:15.463718 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:44072.service - OpenSSH per-connection server daemon (10.0.0.1:44072).
Jan 29 16:14:15.465022 systemd-logind[1472]: Removed session 2.
Jan 29 16:14:15.504670 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 44072 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:14:15.505933 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:14:15.510447 systemd-logind[1472]: New session 3 of user core.
Jan 29 16:14:15.520547 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 16:14:15.568185 sshd[1614]: Connection closed by 10.0.0.1 port 44072
Jan 29 16:14:15.568652 sshd-session[1611]: pam_unix(sshd:session): session closed for user core
Jan 29 16:14:15.585671 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:44072.service: Deactivated successfully.
Jan 29 16:14:15.587201 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 16:14:15.587865 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit.
Jan 29 16:14:15.599683 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:44078.service - OpenSSH per-connection server daemon (10.0.0.1:44078).
Jan 29 16:14:15.600882 systemd-logind[1472]: Removed session 3.
Jan 29 16:14:15.640532 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 44078 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:14:15.642008 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:14:15.645958 systemd-logind[1472]: New session 4 of user core.
Jan 29 16:14:15.652547 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 16:14:15.703491 sshd[1622]: Connection closed by 10.0.0.1 port 44078
Jan 29 16:14:15.703924 sshd-session[1619]: pam_unix(sshd:session): session closed for user core
Jan 29 16:14:15.717536 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:44078.service: Deactivated successfully.
Jan 29 16:14:15.719109 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 16:14:15.720494 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit.
Jan 29 16:14:15.727672 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:44080.service - OpenSSH per-connection server daemon (10.0.0.1:44080).
Jan 29 16:14:15.728857 systemd-logind[1472]: Removed session 4.
Jan 29 16:14:15.769453 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 44080 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:14:15.770592 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:14:15.774368 systemd-logind[1472]: New session 5 of user core.
Jan 29 16:14:15.780528 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 16:14:15.837851 sudo[1631]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 16:14:15.838108 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:14:16.173641 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 16:14:16.173733 (dockerd)[1651]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 16:14:16.411695 dockerd[1651]: time="2025-01-29T16:14:16.411643694Z" level=info msg="Starting up"
Jan 29 16:14:16.553834 dockerd[1651]: time="2025-01-29T16:14:16.553690254Z" level=info msg="Loading containers: start."
Jan 29 16:14:16.689434 kernel: Initializing XFRM netlink socket
Jan 29 16:14:16.750667 systemd-networkd[1408]: docker0: Link UP
Jan 29 16:14:16.781668 dockerd[1651]: time="2025-01-29T16:14:16.781613014Z" level=info msg="Loading containers: done."
Jan 29 16:14:16.798139 dockerd[1651]: time="2025-01-29T16:14:16.798080494Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 16:14:16.798265 dockerd[1651]: time="2025-01-29T16:14:16.798167934Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 29 16:14:16.798375 dockerd[1651]: time="2025-01-29T16:14:16.798334614Z" level=info msg="Daemon has completed initialization"
Jan 29 16:14:16.824426 dockerd[1651]: time="2025-01-29T16:14:16.824293134Z" level=info msg="API listen on /run/docker.sock"
Jan 29 16:14:16.824455 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 16:14:17.985368 containerd[1485]: time="2025-01-29T16:14:17.985325574Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 29 16:14:18.620783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581349018.mount: Deactivated successfully.
Jan 29 16:14:19.650181 containerd[1485]: time="2025-01-29T16:14:19.650001614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:19.650644 containerd[1485]: time="2025-01-29T16:14:19.650603854Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937"
Jan 29 16:14:19.652206 containerd[1485]: time="2025-01-29T16:14:19.652174494Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:19.654567 containerd[1485]: time="2025-01-29T16:14:19.654524454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:19.655722 containerd[1485]: time="2025-01-29T16:14:19.655617214Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.67025156s"
Jan 29 16:14:19.655722 containerd[1485]: time="2025-01-29T16:14:19.655647814Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\""
Jan 29 16:14:19.673451 containerd[1485]: time="2025-01-29T16:14:19.673372494Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 29 16:14:20.921377 containerd[1485]: time="2025-01-29T16:14:20.921197934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:20.922188 containerd[1485]: time="2025-01-29T16:14:20.921985934Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563"
Jan 29 16:14:20.922971 containerd[1485]: time="2025-01-29T16:14:20.922928214Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:20.925761 containerd[1485]: time="2025-01-29T16:14:20.925720014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:20.927719 containerd[1485]: time="2025-01-29T16:14:20.927674814Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.25425868s"
Jan 29 16:14:20.927719 containerd[1485]: time="2025-01-29T16:14:20.927704414Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\""
Jan 29 16:14:20.945903 containerd[1485]: time="2025-01-29T16:14:20.945796334Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 29 16:14:21.764607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:14:21.771602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:14:21.858797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:14:21.861910 (kubelet)[1939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:14:21.899200 kubelet[1939]: E0129 16:14:21.899161    1939 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:14:21.903324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:14:21.903483 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:14:21.903814 systemd[1]: kubelet.service: Consumed 123ms CPU time, 96.9M memory peak.
Jan 29 16:14:21.984145 containerd[1485]: time="2025-01-29T16:14:21.984098254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:21.985078 containerd[1485]: time="2025-01-29T16:14:21.984866854Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340"
Jan 29 16:14:21.985716 containerd[1485]: time="2025-01-29T16:14:21.985656894Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:21.988449 containerd[1485]: time="2025-01-29T16:14:21.988402134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:21.989585 containerd[1485]: time="2025-01-29T16:14:21.989511414Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.04368608s"
Jan 29 16:14:21.989585 containerd[1485]: time="2025-01-29T16:14:21.989544814Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\""
Jan 29 16:14:22.006588 containerd[1485]: time="2025-01-29T16:14:22.006553894Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 29 16:14:23.056029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3245268148.mount: Deactivated successfully.
Jan 29 16:14:23.365535 containerd[1485]: time="2025-01-29T16:14:23.365407934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:23.366339 containerd[1485]: time="2025-01-29T16:14:23.365970174Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714"
Jan 29 16:14:23.366938 containerd[1485]: time="2025-01-29T16:14:23.366908414Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:23.368811 containerd[1485]: time="2025-01-29T16:14:23.368761574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:23.369624 containerd[1485]: time="2025-01-29T16:14:23.369589614Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.36300156s"
Jan 29 16:14:23.369624 containerd[1485]: time="2025-01-29T16:14:23.369624694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\""
Jan 29 16:14:23.389507 containerd[1485]: time="2025-01-29T16:14:23.388593574Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 16:14:24.029606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752443623.mount: Deactivated successfully.
Jan 29 16:14:24.567247 containerd[1485]: time="2025-01-29T16:14:24.567186454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:24.567760 containerd[1485]: time="2025-01-29T16:14:24.567714334Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jan 29 16:14:24.568564 containerd[1485]: time="2025-01-29T16:14:24.568529054Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:24.571508 containerd[1485]: time="2025-01-29T16:14:24.571473454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:24.572778 containerd[1485]: time="2025-01-29T16:14:24.572745134Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.18411824s"
Jan 29 16:14:24.572817 containerd[1485]: time="2025-01-29T16:14:24.572777574Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 29 16:14:24.590784 containerd[1485]: time="2025-01-29T16:14:24.590748854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 29 16:14:25.043972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3059338766.mount: Deactivated successfully.
Jan 29 16:14:25.048933 containerd[1485]: time="2025-01-29T16:14:25.048580694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:25.049713 containerd[1485]: time="2025-01-29T16:14:25.049498374Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jan 29 16:14:25.050565 containerd[1485]: time="2025-01-29T16:14:25.050530894Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:25.052617 containerd[1485]: time="2025-01-29T16:14:25.052588374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:25.053678 containerd[1485]: time="2025-01-29T16:14:25.053451774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 462.65724ms"
Jan 29 16:14:25.053678 containerd[1485]: time="2025-01-29T16:14:25.053483454Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 29 16:14:25.071561 containerd[1485]: time="2025-01-29T16:14:25.071525454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 16:14:25.546586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862313748.mount: Deactivated successfully.
Jan 29 16:14:27.269887 containerd[1485]: time="2025-01-29T16:14:27.269840054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:27.270808 containerd[1485]: time="2025-01-29T16:14:27.270769494Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Jan 29 16:14:27.271413 containerd[1485]: time="2025-01-29T16:14:27.271371614Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:27.274635 containerd[1485]: time="2025-01-29T16:14:27.274580374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:27.276833 containerd[1485]: time="2025-01-29T16:14:27.276800414Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.205237s"
Jan 29 16:14:27.277075 containerd[1485]: time="2025-01-29T16:14:27.276972214Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jan 29 16:14:32.092923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 16:14:32.101623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:14:32.189444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:14:32.192299 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:14:32.228953 kubelet[2156]: E0129 16:14:32.228868    2156 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:14:32.231621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:14:32.231767 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:14:32.233465 systemd[1]: kubelet.service: Consumed 116ms CPU time, 96.4M memory peak.
Jan 29 16:14:32.443379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:14:32.443542 systemd[1]: kubelet.service: Consumed 116ms CPU time, 96.4M memory peak.
Jan 29 16:14:32.450606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:14:32.465997 systemd[1]: Reload requested from client PID 2172 ('systemctl') (unit session-5.scope)...
Jan 29 16:14:32.466011 systemd[1]: Reloading...
Jan 29 16:14:32.540429 zram_generator::config[2214]: No configuration found.
Jan 29 16:14:32.627775 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:14:32.697993 systemd[1]: Reloading finished in 231 ms.
Jan 29 16:14:32.734287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:14:32.737355 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:14:32.738016 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:14:32.738201 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:14:32.738237 systemd[1]: kubelet.service: Consumed 75ms CPU time, 82.3M memory peak.
Jan 29 16:14:32.749709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:14:32.833982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:14:32.838516 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:14:32.874146 kubelet[2263]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:14:32.874146 kubelet[2263]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:14:32.874146 kubelet[2263]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:14:32.874447 kubelet[2263]: I0129 16:14:32.874240    2263 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:14:34.098782 kubelet[2263]: I0129 16:14:34.098736    2263 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 16:14:34.098782 kubelet[2263]: I0129 16:14:34.098769    2263 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:14:34.099124 kubelet[2263]: I0129 16:14:34.098964    2263 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 16:14:34.151272 kubelet[2263]: E0129 16:14:34.151241    2263 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:34.151343 kubelet[2263]: I0129 16:14:34.151236    2263 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:14:34.163760 kubelet[2263]: I0129 16:14:34.163732    2263 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 29 16:14:34.164052 kubelet[2263]: I0129 16:14:34.164026    2263 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:14:34.164210 kubelet[2263]: I0129 16:14:34.164054    2263 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 16:14:34.164354 kubelet[2263]: I0129 16:14:34.164343    2263 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:14:34.164354 kubelet[2263]: I0129 16:14:34.164355    2263 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 16:14:34.165021 kubelet[2263]: I0129 16:14:34.164999    2263 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:14:34.169895 kubelet[2263]: I0129 16:14:34.169716    2263 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 16:14:34.169895 kubelet[2263]: I0129 16:14:34.169739    2263 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:14:34.170631 kubelet[2263]: I0129 16:14:34.170103    2263 kubelet.go:312] "Adding apiserver pod source"
Jan 29 16:14:34.170631 kubelet[2263]: I0129 16:14:34.170291    2263 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:14:34.171142 kubelet[2263]: W0129 16:14:34.170976    2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:34.171142 kubelet[2263]: E0129 16:14:34.171045    2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:34.171142 kubelet[2263]: W0129 16:14:34.171093    2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:34.171142 kubelet[2263]: E0129 16:14:34.171118    2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:34.171802 kubelet[2263]: I0129 16:14:34.171777    2263 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:14:34.172300 kubelet[2263]: I0129 16:14:34.172289    2263 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:14:34.172475 kubelet[2263]: W0129 16:14:34.172463    2263 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 16:14:34.173552 kubelet[2263]: I0129 16:14:34.173534    2263 server.go:1264] "Started kubelet"
Jan 29 16:14:34.181188 kubelet[2263]: I0129 16:14:34.180474    2263 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:14:34.181188 kubelet[2263]: I0129 16:14:34.180628    2263 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:14:34.181188 kubelet[2263]: I0129 16:14:34.180922    2263 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:14:34.181664 kubelet[2263]: E0129 16:14:34.181441    2263 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f35ed395229fe  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:14:34.173508094 +0000 UTC m=+1.331647081,LastTimestamp:2025-01-29 16:14:34.173508094 +0000 UTC m=+1.331647081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 16:14:34.182891 kubelet[2263]: I0129 16:14:34.182776    2263 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 16:14:34.183641 kubelet[2263]: I0129 16:14:34.183624    2263 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:14:34.184252 kubelet[2263]: I0129 16:14:34.184166    2263 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 16:14:34.184298 kubelet[2263]: I0129 16:14:34.184276    2263 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 16:14:34.184525 kubelet[2263]: I0129 16:14:34.184508    2263 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:14:34.185064 kubelet[2263]: E0129 16:14:34.185030    2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms"
Jan 29 16:14:34.185126 kubelet[2263]: W0129 16:14:34.185092    2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:34.185153 kubelet[2263]: E0129 16:14:34.185129    2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:34.185418 kubelet[2263]: E0129 16:14:34.185368    2263 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:14:34.185725 kubelet[2263]: I0129 16:14:34.185705    2263 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:14:34.185808 kubelet[2263]: I0129 16:14:34.185790    2263 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:14:34.186935 kubelet[2263]: I0129 16:14:34.186914    2263 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:14:34.197787 kubelet[2263]: I0129 16:14:34.197765    2263 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 16:14:34.197787 kubelet[2263]: I0129 16:14:34.197782    2263 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 16:14:34.197874 kubelet[2263]: I0129 16:14:34.197798    2263 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:14:34.201669 kubelet[2263]: I0129 16:14:34.201612    2263 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:14:34.202564 kubelet[2263]: I0129 16:14:34.202526    2263 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:14:34.202781 kubelet[2263]: I0129 16:14:34.202683    2263 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 16:14:34.202781 kubelet[2263]: I0129 16:14:34.202703    2263 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 16:14:34.202781 kubelet[2263]: E0129 16:14:34.202740    2263 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:14:34.203226 kubelet[2263]: W0129 16:14:34.203170    2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:34.203278 kubelet[2263]: E0129 16:14:34.203237    2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:34.285794 kubelet[2263]: I0129 16:14:34.285742    2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 16:14:34.286111 kubelet[2263]: E0129 16:14:34.286085    2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Jan 29 16:14:34.288640 kubelet[2263]: I0129 16:14:34.288601    2263 policy_none.go:49] "None policy: Start"
Jan 29 16:14:34.289190 kubelet[2263]: I0129 16:14:34.289162    2263 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 16:14:34.289222 kubelet[2263]: I0129 16:14:34.289209    2263 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:14:34.294383 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 16:14:34.302810 kubelet[2263]: E0129 16:14:34.302777    2263 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 29 16:14:34.306098 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 16:14:34.309210 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 16:14:34.321321 kubelet[2263]: I0129 16:14:34.321285    2263 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:14:34.321523 kubelet[2263]: I0129 16:14:34.321481    2263 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:14:34.321795 kubelet[2263]: I0129 16:14:34.321594    2263 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:14:34.322643 kubelet[2263]: E0129 16:14:34.322597    2263 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 29 16:14:34.386671 kubelet[2263]: E0129 16:14:34.386565    2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms"
Jan 29 16:14:34.487587 kubelet[2263]: I0129 16:14:34.487556    2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 16:14:34.487872 kubelet[2263]: E0129 16:14:34.487836    2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Jan 29 16:14:34.503162 kubelet[2263]: I0129 16:14:34.503116    2263 topology_manager.go:215] "Topology Admit Handler" podUID="2ea11ed5d3e6fdaae034004777746334" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 29 16:14:34.504054 kubelet[2263]: I0129 16:14:34.504018    2263 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 29 16:14:34.504695 kubelet[2263]: I0129 16:14:34.504664    2263 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 29 16:14:34.510661 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice.
Jan 29 16:14:34.533584 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice.
Jan 29 16:14:34.536902 systemd[1]: Created slice kubepods-burstable-pod2ea11ed5d3e6fdaae034004777746334.slice - libcontainer container kubepods-burstable-pod2ea11ed5d3e6fdaae034004777746334.slice.
Jan 29 16:14:34.586431 kubelet[2263]: I0129 16:14:34.586381    2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:14:34.586431 kubelet[2263]: I0129 16:14:34.586427    2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:14:34.586532 kubelet[2263]: I0129 16:14:34.586452    2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:14:34.586532 kubelet[2263]: I0129 16:14:34.586468    2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:14:34.586532 kubelet[2263]: I0129 16:14:34.586484    2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:14:34.586532 kubelet[2263]: I0129 16:14:34.586502    2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 16:14:34.586532 kubelet[2263]: I0129 16:14:34.586517    2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:14:34.586708 kubelet[2263]: I0129 16:14:34.586530    2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:14:34.586708 kubelet[2263]: I0129 16:14:34.586545    2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:14:34.787726 kubelet[2263]: E0129 16:14:34.787631    2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms"
Jan 29 16:14:34.832497 containerd[1485]: time="2025-01-29T16:14:34.832449254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}"
Jan 29 16:14:34.836953 containerd[1485]: time="2025-01-29T16:14:34.836907494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}"
Jan 29 16:14:34.839615 containerd[1485]: time="2025-01-29T16:14:34.839580454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ea11ed5d3e6fdaae034004777746334,Namespace:kube-system,Attempt:0,}"
Jan 29 16:14:34.889560 kubelet[2263]: I0129 16:14:34.889528    2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 16:14:34.889828 kubelet[2263]: E0129 16:14:34.889807    2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Jan 29 16:14:34.996803 kubelet[2263]: W0129 16:14:34.996718    2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:34.996803 kubelet[2263]: E0129 16:14:34.996783    2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:35.302362 kubelet[2263]: W0129 16:14:35.302302    2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:35.302362 kubelet[2263]: E0129 16:14:35.302365    2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:35.305328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397694903.mount: Deactivated successfully.
Jan 29 16:14:35.310223 containerd[1485]: time="2025-01-29T16:14:35.310167494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:14:35.311367 containerd[1485]: time="2025-01-29T16:14:35.311339734Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:14:35.312648 containerd[1485]: time="2025-01-29T16:14:35.312597134Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 29 16:14:35.313164 containerd[1485]: time="2025-01-29T16:14:35.313133254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 16:14:35.314745 containerd[1485]: time="2025-01-29T16:14:35.314718574Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:14:35.317027 containerd[1485]: time="2025-01-29T16:14:35.316898774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 16:14:35.317027 containerd[1485]: time="2025-01-29T16:14:35.316985614Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:14:35.319982 containerd[1485]: time="2025-01-29T16:14:35.319958774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 480.32212ms"
Jan 29 16:14:35.320417 containerd[1485]: time="2025-01-29T16:14:35.320349854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:14:35.321206 containerd[1485]: time="2025-01-29T16:14:35.321179014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 488.64872ms"
Jan 29 16:14:35.323988 containerd[1485]: time="2025-01-29T16:14:35.323802854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 486.83728ms"
Jan 29 16:14:35.458742 containerd[1485]: time="2025-01-29T16:14:35.458484974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:14:35.458742 containerd[1485]: time="2025-01-29T16:14:35.458554614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:14:35.458742 containerd[1485]: time="2025-01-29T16:14:35.458569694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:14:35.458742 containerd[1485]: time="2025-01-29T16:14:35.458638414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:14:35.460670 containerd[1485]: time="2025-01-29T16:14:35.460501774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:14:35.460670 containerd[1485]: time="2025-01-29T16:14:35.460558054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:14:35.460670 containerd[1485]: time="2025-01-29T16:14:35.460569174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:14:35.460670 containerd[1485]: time="2025-01-29T16:14:35.460648094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:14:35.463568 containerd[1485]: time="2025-01-29T16:14:35.463295174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:14:35.463568 containerd[1485]: time="2025-01-29T16:14:35.463348974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:14:35.463568 containerd[1485]: time="2025-01-29T16:14:35.463360294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:14:35.463568 containerd[1485]: time="2025-01-29T16:14:35.463488654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:14:35.476558 systemd[1]: Started cri-containerd-1364b0d1fc0e982cf5033a106eb5a04c90fe7bb3c754f93a19ac4ab3f2e29203.scope - libcontainer container 1364b0d1fc0e982cf5033a106eb5a04c90fe7bb3c754f93a19ac4ab3f2e29203.
Jan 29 16:14:35.479894 systemd[1]: Started cri-containerd-08d1f1581602bd44c1978740382d0ceefa264edb9c50357cff40ab1c120d9486.scope - libcontainer container 08d1f1581602bd44c1978740382d0ceefa264edb9c50357cff40ab1c120d9486.
Jan 29 16:14:35.481355 systemd[1]: Started cri-containerd-ad5588921006c9f0e5d3f076f493248e86d17c17e4065a7337927ddc64482281.scope - libcontainer container ad5588921006c9f0e5d3f076f493248e86d17c17e4065a7337927ddc64482281.
Jan 29 16:14:35.504617 containerd[1485]: time="2025-01-29T16:14:35.504581094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ea11ed5d3e6fdaae034004777746334,Namespace:kube-system,Attempt:0,} returns sandbox id \"1364b0d1fc0e982cf5033a106eb5a04c90fe7bb3c754f93a19ac4ab3f2e29203\""
Jan 29 16:14:35.509561 containerd[1485]: time="2025-01-29T16:14:35.509529014Z" level=info msg="CreateContainer within sandbox \"1364b0d1fc0e982cf5033a106eb5a04c90fe7bb3c754f93a19ac4ab3f2e29203\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 16:14:35.511866 containerd[1485]: time="2025-01-29T16:14:35.511778734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad5588921006c9f0e5d3f076f493248e86d17c17e4065a7337927ddc64482281\""
Jan 29 16:14:35.515289 containerd[1485]: time="2025-01-29T16:14:35.515258854Z" level=info msg="CreateContainer within sandbox \"ad5588921006c9f0e5d3f076f493248e86d17c17e4065a7337927ddc64482281\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 16:14:35.518398 containerd[1485]: time="2025-01-29T16:14:35.518259254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"08d1f1581602bd44c1978740382d0ceefa264edb9c50357cff40ab1c120d9486\""
Jan 29 16:14:35.521078 containerd[1485]: time="2025-01-29T16:14:35.521052534Z" level=info msg="CreateContainer within sandbox \"08d1f1581602bd44c1978740382d0ceefa264edb9c50357cff40ab1c120d9486\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 16:14:35.526526 containerd[1485]: time="2025-01-29T16:14:35.526493294Z" level=info msg="CreateContainer within sandbox \"1364b0d1fc0e982cf5033a106eb5a04c90fe7bb3c754f93a19ac4ab3f2e29203\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7bb55836d340a813bc127945d763aaaf63b7c8a9cda5c6cb624e38b643425564\""
Jan 29 16:14:35.527184 containerd[1485]: time="2025-01-29T16:14:35.527124134Z" level=info msg="StartContainer for \"7bb55836d340a813bc127945d763aaaf63b7c8a9cda5c6cb624e38b643425564\""
Jan 29 16:14:35.531074 containerd[1485]: time="2025-01-29T16:14:35.531041374Z" level=info msg="CreateContainer within sandbox \"ad5588921006c9f0e5d3f076f493248e86d17c17e4065a7337927ddc64482281\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aa1aabab5a7214a513233f2bfe32a212a2875cbb39e7db4fb288b56db446defd\""
Jan 29 16:14:35.531782 containerd[1485]: time="2025-01-29T16:14:35.531592654Z" level=info msg="StartContainer for \"aa1aabab5a7214a513233f2bfe32a212a2875cbb39e7db4fb288b56db446defd\""
Jan 29 16:14:35.539190 containerd[1485]: time="2025-01-29T16:14:35.539156174Z" level=info msg="CreateContainer within sandbox \"08d1f1581602bd44c1978740382d0ceefa264edb9c50357cff40ab1c120d9486\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"223eca6dc1e09e3e751838526d0133144f3d4a3d6875b85e10df5fa6ab649388\""
Jan 29 16:14:35.539846 containerd[1485]: time="2025-01-29T16:14:35.539637534Z" level=info msg="StartContainer for \"223eca6dc1e09e3e751838526d0133144f3d4a3d6875b85e10df5fa6ab649388\""
Jan 29 16:14:35.555535 systemd[1]: Started cri-containerd-7bb55836d340a813bc127945d763aaaf63b7c8a9cda5c6cb624e38b643425564.scope - libcontainer container 7bb55836d340a813bc127945d763aaaf63b7c8a9cda5c6cb624e38b643425564.
Jan 29 16:14:35.557555 systemd[1]: Started cri-containerd-aa1aabab5a7214a513233f2bfe32a212a2875cbb39e7db4fb288b56db446defd.scope - libcontainer container aa1aabab5a7214a513233f2bfe32a212a2875cbb39e7db4fb288b56db446defd.
Jan 29 16:14:35.560929 systemd[1]: Started cri-containerd-223eca6dc1e09e3e751838526d0133144f3d4a3d6875b85e10df5fa6ab649388.scope - libcontainer container 223eca6dc1e09e3e751838526d0133144f3d4a3d6875b85e10df5fa6ab649388.
Jan 29 16:14:35.565669 kubelet[2263]: W0129 16:14:35.565637    2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:35.565731 kubelet[2263]: E0129 16:14:35.565675    2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:35.588824 kubelet[2263]: E0129 16:14:35.588770    2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s"
Jan 29 16:14:35.593359 containerd[1485]: time="2025-01-29T16:14:35.593264334Z" level=info msg="StartContainer for \"7bb55836d340a813bc127945d763aaaf63b7c8a9cda5c6cb624e38b643425564\" returns successfully"
Jan 29 16:14:35.615375 containerd[1485]: time="2025-01-29T16:14:35.615345814Z" level=info msg="StartContainer for \"aa1aabab5a7214a513233f2bfe32a212a2875cbb39e7db4fb288b56db446defd\" returns successfully"
Jan 29 16:14:35.615851 containerd[1485]: time="2025-01-29T16:14:35.615470174Z" level=info msg="StartContainer for \"223eca6dc1e09e3e751838526d0133144f3d4a3d6875b85e10df5fa6ab649388\" returns successfully"
Jan 29 16:14:35.691953 kubelet[2263]: I0129 16:14:35.691646    2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 16:14:35.691953 kubelet[2263]: E0129 16:14:35.691924    2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Jan 29 16:14:35.744737 kubelet[2263]: W0129 16:14:35.744677    2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:35.744899 kubelet[2263]: E0129 16:14:35.744824    2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 29 16:14:37.169531 kubelet[2263]: E0129 16:14:37.169428    2263 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f35ed395229fe  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:14:34.173508094 +0000 UTC m=+1.331647081,LastTimestamp:2025-01-29 16:14:34.173508094 +0000 UTC m=+1.331647081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 16:14:37.192312 kubelet[2263]: E0129 16:14:37.192278    2263 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 29 16:14:37.223593 kubelet[2263]: E0129 16:14:37.223502    2263 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f35ed3a06ea56  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:14:34.185353814 +0000 UTC m=+1.343492841,LastTimestamp:2025-01-29 16:14:34.185353814 +0000 UTC m=+1.343492841,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 16:14:37.277248 kubelet[2263]: E0129 16:14:37.277138    2263 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f35ed3ab1f176  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:14:34.196562294 +0000 UTC m=+1.354701281,LastTimestamp:2025-01-29 16:14:34.196562294 +0000 UTC m=+1.354701281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 16:14:37.294972 kubelet[2263]: I0129 16:14:37.294948    2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 16:14:37.302847 kubelet[2263]: I0129 16:14:37.302761    2263 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 29 16:14:37.312401 kubelet[2263]: E0129 16:14:37.312362    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:37.412798 kubelet[2263]: E0129 16:14:37.412761    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:37.513690 kubelet[2263]: E0129 16:14:37.513558    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:37.614131 kubelet[2263]: E0129 16:14:37.614090    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:37.714878 kubelet[2263]: E0129 16:14:37.714834    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:37.815422 kubelet[2263]: E0129 16:14:37.815304    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:37.916199 kubelet[2263]: E0129 16:14:37.916145    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:38.016709 kubelet[2263]: E0129 16:14:38.016663    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:38.117239 kubelet[2263]: E0129 16:14:38.117201    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:38.217380 kubelet[2263]: E0129 16:14:38.217347    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:38.317942 kubelet[2263]: E0129 16:14:38.317905    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:38.419013 kubelet[2263]: E0129 16:14:38.418912    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:38.519499 kubelet[2263]: E0129 16:14:38.519452    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:38.619944 kubelet[2263]: E0129 16:14:38.619895    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:38.720754 kubelet[2263]: E0129 16:14:38.720644    2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:39.173780 kubelet[2263]: I0129 16:14:39.173738    2263 apiserver.go:52] "Watching apiserver"
Jan 29 16:14:39.185338 kubelet[2263]: I0129 16:14:39.185300    2263 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:14:39.279706 systemd[1]: Reload requested from client PID 2535 ('systemctl') (unit session-5.scope)...
Jan 29 16:14:39.279737 systemd[1]: Reloading...
Jan 29 16:14:39.356442 zram_generator::config[2582]: No configuration found.
Jan 29 16:14:39.437835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:14:39.523976 systemd[1]: Reloading finished in 243 ms.
Jan 29 16:14:39.544835 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:14:39.555520 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:14:39.555804 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:14:39.555881 systemd[1]: kubelet.service: Consumed 1.694s CPU time, 114.2M memory peak.
Jan 29 16:14:39.566643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:14:39.667231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:14:39.672209 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:14:39.716570 kubelet[2621]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:14:39.716570 kubelet[2621]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:14:39.716570 kubelet[2621]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:14:39.717045 kubelet[2621]: I0129 16:14:39.716553    2621 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:14:39.720916 kubelet[2621]: I0129 16:14:39.720877    2621 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 16:14:39.720916 kubelet[2621]: I0129 16:14:39.720907    2621 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:14:39.721101 kubelet[2621]: I0129 16:14:39.721071    2621 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 16:14:39.722460 kubelet[2621]: I0129 16:14:39.722434    2621 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 16:14:39.723766 kubelet[2621]: I0129 16:14:39.723608    2621 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:14:39.731003 kubelet[2621]: I0129 16:14:39.730627    2621 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 29 16:14:39.731003 kubelet[2621]: I0129 16:14:39.730823    2621 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:14:39.731003 kubelet[2621]: I0129 16:14:39.730846    2621 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 16:14:39.731003 kubelet[2621]: I0129 16:14:39.731009    2621 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:14:39.731250 kubelet[2621]: I0129 16:14:39.731019    2621 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 16:14:39.731250 kubelet[2621]: I0129 16:14:39.731051    2621 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:14:39.731250 kubelet[2621]: I0129 16:14:39.731156    2621 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 16:14:39.731250 kubelet[2621]: I0129 16:14:39.731168    2621 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:14:39.731250 kubelet[2621]: I0129 16:14:39.731194    2621 kubelet.go:312] "Adding apiserver pod source"
Jan 29 16:14:39.731250 kubelet[2621]: I0129 16:14:39.731210    2621 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:14:39.732112 kubelet[2621]: I0129 16:14:39.731784    2621 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:14:39.732112 kubelet[2621]: I0129 16:14:39.732010    2621 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:14:39.734162 kubelet[2621]: I0129 16:14:39.732610    2621 server.go:1264] "Started kubelet"
Jan 29 16:14:39.734162 kubelet[2621]: I0129 16:14:39.732792    2621 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:14:39.734162 kubelet[2621]: I0129 16:14:39.732908    2621 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:14:39.734162 kubelet[2621]: I0129 16:14:39.733130    2621 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:14:39.734162 kubelet[2621]: I0129 16:14:39.733668    2621 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 16:14:39.735435 kubelet[2621]: I0129 16:14:39.735414    2621 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:14:39.735698 kubelet[2621]: I0129 16:14:39.735551    2621 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 16:14:39.735698 kubelet[2621]: I0129 16:14:39.735642    2621 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 16:14:39.735965 kubelet[2621]: I0129 16:14:39.735760    2621 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:14:39.736315 kubelet[2621]: E0129 16:14:39.736286    2621 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 16:14:39.736977 kubelet[2621]: I0129 16:14:39.736888    2621 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:14:39.737030 kubelet[2621]: I0129 16:14:39.736978    2621 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:14:39.737905 kubelet[2621]: I0129 16:14:39.737788    2621 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:14:39.738544 kubelet[2621]: E0129 16:14:39.738466    2621 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:14:39.763575 kubelet[2621]: I0129 16:14:39.763530    2621 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:14:39.765214 kubelet[2621]: I0129 16:14:39.765082    2621 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:14:39.765214 kubelet[2621]: I0129 16:14:39.765121    2621 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 16:14:39.765214 kubelet[2621]: I0129 16:14:39.765136    2621 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 16:14:39.765214 kubelet[2621]: E0129 16:14:39.765188    2621 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:14:39.798246 kubelet[2621]: I0129 16:14:39.798219    2621 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 16:14:39.798246 kubelet[2621]: I0129 16:14:39.798237    2621 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 16:14:39.798246 kubelet[2621]: I0129 16:14:39.798258    2621 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:14:39.798492 kubelet[2621]: I0129 16:14:39.798430    2621 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 16:14:39.798492 kubelet[2621]: I0129 16:14:39.798442    2621 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 16:14:39.798492 kubelet[2621]: I0129 16:14:39.798460    2621 policy_none.go:49] "None policy: Start"
Jan 29 16:14:39.799006 kubelet[2621]: I0129 16:14:39.798967    2621 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 16:14:39.799006 kubelet[2621]: I0129 16:14:39.799004    2621 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:14:39.799173 kubelet[2621]: I0129 16:14:39.799157    2621 state_mem.go:75] "Updated machine memory state"
Jan 29 16:14:39.803610 kubelet[2621]: I0129 16:14:39.803571    2621 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:14:39.803901 kubelet[2621]: I0129 16:14:39.803725    2621 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:14:39.803901 kubelet[2621]: I0129 16:14:39.803826    2621 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:14:39.842715 kubelet[2621]: I0129 16:14:39.842672    2621 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 16:14:39.850048 kubelet[2621]: I0129 16:14:39.849866    2621 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jan 29 16:14:39.850048 kubelet[2621]: I0129 16:14:39.849948    2621 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 29 16:14:39.865882 kubelet[2621]: I0129 16:14:39.865842    2621 topology_manager.go:215] "Topology Admit Handler" podUID="2ea11ed5d3e6fdaae034004777746334" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 29 16:14:39.866481 kubelet[2621]: I0129 16:14:39.865949    2621 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 29 16:14:39.866481 kubelet[2621]: I0129 16:14:39.865997    2621 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 29 16:14:39.871316 kubelet[2621]: E0129 16:14:39.871284    2621 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:14:40.036665 kubelet[2621]: I0129 16:14:40.036553    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:14:40.039250 kubelet[2621]: I0129 16:14:40.037371    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:14:40.039250 kubelet[2621]: I0129 16:14:40.037448    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:14:40.039250 kubelet[2621]: I0129 16:14:40.037481    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:14:40.039250 kubelet[2621]: I0129 16:14:40.037507    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 16:14:40.039250 kubelet[2621]: I0129 16:14:40.037526    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:14:40.039468 kubelet[2621]: I0129 16:14:40.037548    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 16:14:40.039468 kubelet[2621]: I0129 16:14:40.037571    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:14:40.039468 kubelet[2621]: I0129 16:14:40.037593    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 16:14:40.731897 kubelet[2621]: I0129 16:14:40.731851    2621 apiserver.go:52] "Watching apiserver"
Jan 29 16:14:40.790423 kubelet[2621]: E0129 16:14:40.789997    2621 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 29 16:14:40.808084 kubelet[2621]: I0129 16:14:40.808022    2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.807988734 podStartE2EDuration="1.807988734s" podCreationTimestamp="2025-01-29 16:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:14:40.801355894 +0000 UTC m=+1.124853561" watchObservedRunningTime="2025-01-29 16:14:40.807988734 +0000 UTC m=+1.131486361"
Jan 29 16:14:40.819112 kubelet[2621]: I0129 16:14:40.818974    2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.818958974 podStartE2EDuration="1.818958974s" podCreationTimestamp="2025-01-29 16:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:14:40.818807454 +0000 UTC m=+1.142305121" watchObservedRunningTime="2025-01-29 16:14:40.818958974 +0000 UTC m=+1.142456681"
Jan 29 16:14:40.819112 kubelet[2621]: I0129 16:14:40.819043    2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.819037974 podStartE2EDuration="1.819037974s" podCreationTimestamp="2025-01-29 16:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:14:40.808141694 +0000 UTC m=+1.131639361" watchObservedRunningTime="2025-01-29 16:14:40.819037974 +0000 UTC m=+1.142535641"
Jan 29 16:14:40.836337 kubelet[2621]: I0129 16:14:40.836267    2621 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:14:41.451512 sudo[1631]: pam_unix(sudo:session): session closed for user root
Jan 29 16:14:41.452798 sshd[1630]: Connection closed by 10.0.0.1 port 44080
Jan 29 16:14:41.453208 sshd-session[1627]: pam_unix(sshd:session): session closed for user core
Jan 29 16:14:41.455704 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:44080.service: Deactivated successfully.
Jan 29 16:14:41.457566 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 16:14:41.457764 systemd[1]: session-5.scope: Consumed 6.469s CPU time, 261.6M memory peak.
Jan 29 16:14:41.459270 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit.
Jan 29 16:14:41.460339 systemd-logind[1472]: Removed session 5.
Jan 29 16:14:53.719224 update_engine[1478]: I20250129 16:14:53.719136  1478 update_attempter.cc:509] Updating boot flags...
Jan 29 16:14:53.742450 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2696)
Jan 29 16:14:53.780518 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2697)
Jan 29 16:14:53.816464 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2697)
Jan 29 16:14:54.344760 kubelet[2621]: I0129 16:14:54.344720    2621 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 16:14:54.345429 containerd[1485]: time="2025-01-29T16:14:54.345306746Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 16:14:54.345690 kubelet[2621]: I0129 16:14:54.345531    2621 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 16:14:55.293411 kubelet[2621]: I0129 16:14:55.292737    2621 topology_manager.go:215] "Topology Admit Handler" podUID="c19e9b2f-24fe-4491-b29a-533da8d688ca" podNamespace="kube-system" podName="kube-proxy-s9kxb"
Jan 29 16:14:55.295908 kubelet[2621]: I0129 16:14:55.295757    2621 topology_manager.go:215] "Topology Admit Handler" podUID="b8de4d02-8042-42ae-8756-5f03f23609e9" podNamespace="kube-flannel" podName="kube-flannel-ds-tw7kn"
Jan 29 16:14:55.304535 systemd[1]: Created slice kubepods-besteffort-podc19e9b2f_24fe_4491_b29a_533da8d688ca.slice - libcontainer container kubepods-besteffort-podc19e9b2f_24fe_4491_b29a_533da8d688ca.slice.
Jan 29 16:14:55.312491 systemd[1]: Created slice kubepods-burstable-podb8de4d02_8042_42ae_8756_5f03f23609e9.slice - libcontainer container kubepods-burstable-podb8de4d02_8042_42ae_8756_5f03f23609e9.slice.
Jan 29 16:14:55.344970 kubelet[2621]: I0129 16:14:55.344907    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c19e9b2f-24fe-4491-b29a-533da8d688ca-lib-modules\") pod \"kube-proxy-s9kxb\" (UID: \"c19e9b2f-24fe-4491-b29a-533da8d688ca\") " pod="kube-system/kube-proxy-s9kxb"
Jan 29 16:14:55.345691 kubelet[2621]: I0129 16:14:55.344943    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc92t\" (UniqueName: \"kubernetes.io/projected/c19e9b2f-24fe-4491-b29a-533da8d688ca-kube-api-access-hc92t\") pod \"kube-proxy-s9kxb\" (UID: \"c19e9b2f-24fe-4491-b29a-533da8d688ca\") " pod="kube-system/kube-proxy-s9kxb"
Jan 29 16:14:55.345691 kubelet[2621]: I0129 16:14:55.345565    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8hrl\" (UniqueName: \"kubernetes.io/projected/b8de4d02-8042-42ae-8756-5f03f23609e9-kube-api-access-z8hrl\") pod \"kube-flannel-ds-tw7kn\" (UID: \"b8de4d02-8042-42ae-8756-5f03f23609e9\") " pod="kube-flannel/kube-flannel-ds-tw7kn"
Jan 29 16:14:55.345691 kubelet[2621]: I0129 16:14:55.345597    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c19e9b2f-24fe-4491-b29a-533da8d688ca-kube-proxy\") pod \"kube-proxy-s9kxb\" (UID: \"c19e9b2f-24fe-4491-b29a-533da8d688ca\") " pod="kube-system/kube-proxy-s9kxb"
Jan 29 16:14:55.345691 kubelet[2621]: I0129 16:14:55.345617    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b8de4d02-8042-42ae-8756-5f03f23609e9-cni\") pod \"kube-flannel-ds-tw7kn\" (UID: \"b8de4d02-8042-42ae-8756-5f03f23609e9\") " pod="kube-flannel/kube-flannel-ds-tw7kn"
Jan 29 16:14:55.345691 kubelet[2621]: I0129 16:14:55.345639    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b8de4d02-8042-42ae-8756-5f03f23609e9-run\") pod \"kube-flannel-ds-tw7kn\" (UID: \"b8de4d02-8042-42ae-8756-5f03f23609e9\") " pod="kube-flannel/kube-flannel-ds-tw7kn"
Jan 29 16:14:55.348553 kubelet[2621]: I0129 16:14:55.345657    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b8de4d02-8042-42ae-8756-5f03f23609e9-cni-plugin\") pod \"kube-flannel-ds-tw7kn\" (UID: \"b8de4d02-8042-42ae-8756-5f03f23609e9\") " pod="kube-flannel/kube-flannel-ds-tw7kn"
Jan 29 16:14:55.348553 kubelet[2621]: I0129 16:14:55.345703    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8de4d02-8042-42ae-8756-5f03f23609e9-xtables-lock\") pod \"kube-flannel-ds-tw7kn\" (UID: \"b8de4d02-8042-42ae-8756-5f03f23609e9\") " pod="kube-flannel/kube-flannel-ds-tw7kn"
Jan 29 16:14:55.348553 kubelet[2621]: I0129 16:14:55.345806    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b8de4d02-8042-42ae-8756-5f03f23609e9-flannel-cfg\") pod \"kube-flannel-ds-tw7kn\" (UID: \"b8de4d02-8042-42ae-8756-5f03f23609e9\") " pod="kube-flannel/kube-flannel-ds-tw7kn"
Jan 29 16:14:55.348553 kubelet[2621]: I0129 16:14:55.345845    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c19e9b2f-24fe-4491-b29a-533da8d688ca-xtables-lock\") pod \"kube-proxy-s9kxb\" (UID: \"c19e9b2f-24fe-4491-b29a-533da8d688ca\") " pod="kube-system/kube-proxy-s9kxb"
Jan 29 16:14:55.611662 containerd[1485]: time="2025-01-29T16:14:55.611619962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s9kxb,Uid:c19e9b2f-24fe-4491-b29a-533da8d688ca,Namespace:kube-system,Attempt:0,}"
Jan 29 16:14:55.617241 containerd[1485]: time="2025-01-29T16:14:55.617202534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tw7kn,Uid:b8de4d02-8042-42ae-8756-5f03f23609e9,Namespace:kube-flannel,Attempt:0,}"
Jan 29 16:14:55.629587 containerd[1485]: time="2025-01-29T16:14:55.629357919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:14:55.629587 containerd[1485]: time="2025-01-29T16:14:55.629433919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:14:55.629587 containerd[1485]: time="2025-01-29T16:14:55.629457200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:14:55.629587 containerd[1485]: time="2025-01-29T16:14:55.629537120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:14:55.647579 systemd[1]: Started cri-containerd-fae047f0db863aa4dc522156ad62f36e620fffa7d3e7784691fd5349de507c3b.scope - libcontainer container fae047f0db863aa4dc522156ad62f36e620fffa7d3e7784691fd5349de507c3b.
Jan 29 16:14:55.650664 containerd[1485]: time="2025-01-29T16:14:55.650442243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:14:55.650664 containerd[1485]: time="2025-01-29T16:14:55.650496843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:14:55.650664 containerd[1485]: time="2025-01-29T16:14:55.650507923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:14:55.650664 containerd[1485]: time="2025-01-29T16:14:55.650573364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:14:55.667558 systemd[1]: Started cri-containerd-29bee2cf50690c2c94a28d0268fa66bb048e6cc0faab542104e0a974591bced0.scope - libcontainer container 29bee2cf50690c2c94a28d0268fa66bb048e6cc0faab542104e0a974591bced0.
Jan 29 16:14:55.668517 containerd[1485]: time="2025-01-29T16:14:55.668438841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s9kxb,Uid:c19e9b2f-24fe-4491-b29a-533da8d688ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"fae047f0db863aa4dc522156ad62f36e620fffa7d3e7784691fd5349de507c3b\""
Jan 29 16:14:55.673131 containerd[1485]: time="2025-01-29T16:14:55.672807250Z" level=info msg="CreateContainer within sandbox \"fae047f0db863aa4dc522156ad62f36e620fffa7d3e7784691fd5349de507c3b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 16:14:55.685605 containerd[1485]: time="2025-01-29T16:14:55.685570757Z" level=info msg="CreateContainer within sandbox \"fae047f0db863aa4dc522156ad62f36e620fffa7d3e7784691fd5349de507c3b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6fe4b05726f6d1bd0779221d63edf9ab3807a4f2513a7db875f73807e30d8c3d\""
Jan 29 16:14:55.686206 containerd[1485]: time="2025-01-29T16:14:55.686169598Z" level=info msg="StartContainer for \"6fe4b05726f6d1bd0779221d63edf9ab3807a4f2513a7db875f73807e30d8c3d\""
Jan 29 16:14:55.697045 containerd[1485]: time="2025-01-29T16:14:55.697013981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tw7kn,Uid:b8de4d02-8042-42ae-8756-5f03f23609e9,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"29bee2cf50690c2c94a28d0268fa66bb048e6cc0faab542104e0a974591bced0\""
Jan 29 16:14:55.698538 containerd[1485]: time="2025-01-29T16:14:55.698514744Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 29 16:14:55.711540 systemd[1]: Started cri-containerd-6fe4b05726f6d1bd0779221d63edf9ab3807a4f2513a7db875f73807e30d8c3d.scope - libcontainer container 6fe4b05726f6d1bd0779221d63edf9ab3807a4f2513a7db875f73807e30d8c3d.
Jan 29 16:14:55.734544 containerd[1485]: time="2025-01-29T16:14:55.734488019Z" level=info msg="StartContainer for \"6fe4b05726f6d1bd0779221d63edf9ab3807a4f2513a7db875f73807e30d8c3d\" returns successfully"
Jan 29 16:14:56.723375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700101292.mount: Deactivated successfully.
Jan 29 16:14:56.752577 containerd[1485]: time="2025-01-29T16:14:56.752356847Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:56.753303 containerd[1485]: time="2025-01-29T16:14:56.753267689Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532"
Jan 29 16:14:56.754020 containerd[1485]: time="2025-01-29T16:14:56.753991570Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:56.756487 containerd[1485]: time="2025-01-29T16:14:56.756453815Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:56.757577 containerd[1485]: time="2025-01-29T16:14:56.757450537Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.058903033s"
Jan 29 16:14:56.757577 containerd[1485]: time="2025-01-29T16:14:56.757480017Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Jan 29 16:14:56.759526 containerd[1485]: time="2025-01-29T16:14:56.759489541Z" level=info msg="CreateContainer within sandbox \"29bee2cf50690c2c94a28d0268fa66bb048e6cc0faab542104e0a974591bced0\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 29 16:14:56.768326 containerd[1485]: time="2025-01-29T16:14:56.768288598Z" level=info msg="CreateContainer within sandbox \"29bee2cf50690c2c94a28d0268fa66bb048e6cc0faab542104e0a974591bced0\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"c0e010fed19c8a1bc1fdb2bc69d85228355f004ed35f5618581dbcd6aba98d4c\""
Jan 29 16:14:56.768456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974678132.mount: Deactivated successfully.
Jan 29 16:14:56.769410 containerd[1485]: time="2025-01-29T16:14:56.768673319Z" level=info msg="StartContainer for \"c0e010fed19c8a1bc1fdb2bc69d85228355f004ed35f5618581dbcd6aba98d4c\""
Jan 29 16:14:56.796547 systemd[1]: Started cri-containerd-c0e010fed19c8a1bc1fdb2bc69d85228355f004ed35f5618581dbcd6aba98d4c.scope - libcontainer container c0e010fed19c8a1bc1fdb2bc69d85228355f004ed35f5618581dbcd6aba98d4c.
Jan 29 16:14:56.819488 containerd[1485]: time="2025-01-29T16:14:56.819438778Z" level=info msg="StartContainer for \"c0e010fed19c8a1bc1fdb2bc69d85228355f004ed35f5618581dbcd6aba98d4c\" returns successfully"
Jan 29 16:14:56.822331 systemd[1]: cri-containerd-c0e010fed19c8a1bc1fdb2bc69d85228355f004ed35f5618581dbcd6aba98d4c.scope: Deactivated successfully.
Jan 29 16:14:56.871137 containerd[1485]: time="2025-01-29T16:14:56.862838943Z" level=info msg="shim disconnected" id=c0e010fed19c8a1bc1fdb2bc69d85228355f004ed35f5618581dbcd6aba98d4c namespace=k8s.io
Jan 29 16:14:56.871137 containerd[1485]: time="2025-01-29T16:14:56.871121879Z" level=warning msg="cleaning up after shim disconnected" id=c0e010fed19c8a1bc1fdb2bc69d85228355f004ed35f5618581dbcd6aba98d4c namespace=k8s.io
Jan 29 16:14:56.871137 containerd[1485]: time="2025-01-29T16:14:56.871134639Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:14:57.818772 containerd[1485]: time="2025-01-29T16:14:57.818664115Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 29 16:14:57.828165 kubelet[2621]: I0129 16:14:57.828097    2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s9kxb" podStartSLOduration=2.828077812 podStartE2EDuration="2.828077812s" podCreationTimestamp="2025-01-29 16:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:14:55.8213962 +0000 UTC m=+16.144893867" watchObservedRunningTime="2025-01-29 16:14:57.828077812 +0000 UTC m=+18.151575479"
Jan 29 16:14:58.925287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676468037.mount: Deactivated successfully.
Jan 29 16:14:59.810930 containerd[1485]: time="2025-01-29T16:14:59.810885917Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:59.811512 containerd[1485]: time="2025-01-29T16:14:59.811469158Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261"
Jan 29 16:14:59.812121 containerd[1485]: time="2025-01-29T16:14:59.812090959Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:59.814774 containerd[1485]: time="2025-01-29T16:14:59.814735043Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:14:59.816031 containerd[1485]: time="2025-01-29T16:14:59.816001885Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.99729297s"
Jan 29 16:14:59.816143 containerd[1485]: time="2025-01-29T16:14:59.816034285Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Jan 29 16:14:59.820423 containerd[1485]: time="2025-01-29T16:14:59.820279972Z" level=info msg="CreateContainer within sandbox \"29bee2cf50690c2c94a28d0268fa66bb048e6cc0faab542104e0a974591bced0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 16:14:59.841780 containerd[1485]: time="2025-01-29T16:14:59.841727687Z" level=info msg="CreateContainer within sandbox \"29bee2cf50690c2c94a28d0268fa66bb048e6cc0faab542104e0a974591bced0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"72a1ea1c18a00fdd4399142b37a4b5810f451ff281a4f5a475a9c41b6b9943b9\""
Jan 29 16:14:59.842760 containerd[1485]: time="2025-01-29T16:14:59.842726409Z" level=info msg="StartContainer for \"72a1ea1c18a00fdd4399142b37a4b5810f451ff281a4f5a475a9c41b6b9943b9\""
Jan 29 16:14:59.869568 systemd[1]: Started cri-containerd-72a1ea1c18a00fdd4399142b37a4b5810f451ff281a4f5a475a9c41b6b9943b9.scope - libcontainer container 72a1ea1c18a00fdd4399142b37a4b5810f451ff281a4f5a475a9c41b6b9943b9.
Jan 29 16:14:59.893908 containerd[1485]: time="2025-01-29T16:14:59.893870211Z" level=info msg="StartContainer for \"72a1ea1c18a00fdd4399142b37a4b5810f451ff281a4f5a475a9c41b6b9943b9\" returns successfully"
Jan 29 16:14:59.900325 systemd[1]: cri-containerd-72a1ea1c18a00fdd4399142b37a4b5810f451ff281a4f5a475a9c41b6b9943b9.scope: Deactivated successfully.
Jan 29 16:14:59.915467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72a1ea1c18a00fdd4399142b37a4b5810f451ff281a4f5a475a9c41b6b9943b9-rootfs.mount: Deactivated successfully.
Jan 29 16:14:59.970635 kubelet[2621]: I0129 16:14:59.970604    2621 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 29 16:14:59.991361 containerd[1485]: time="2025-01-29T16:14:59.991294408Z" level=info msg="shim disconnected" id=72a1ea1c18a00fdd4399142b37a4b5810f451ff281a4f5a475a9c41b6b9943b9 namespace=k8s.io
Jan 29 16:14:59.991361 containerd[1485]: time="2025-01-29T16:14:59.991349808Z" level=warning msg="cleaning up after shim disconnected" id=72a1ea1c18a00fdd4399142b37a4b5810f451ff281a4f5a475a9c41b6b9943b9 namespace=k8s.io
Jan 29 16:14:59.991361 containerd[1485]: time="2025-01-29T16:14:59.991359648Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:15:00.003176 kubelet[2621]: I0129 16:15:00.002532    2621 topology_manager.go:215] "Topology Admit Handler" podUID="e8c0f224-5f18-453b-af3c-62f6c0800f6d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-r9gxm"
Jan 29 16:15:00.003176 kubelet[2621]: I0129 16:15:00.002666    2621 topology_manager.go:215] "Topology Admit Handler" podUID="b15aab6c-7c83-472e-84a3-9a7637e0c1d2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x75kk"
Jan 29 16:15:00.007631 containerd[1485]: time="2025-01-29T16:15:00.007534634Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:15:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 16:15:00.010539 systemd[1]: Created slice kubepods-burstable-podb15aab6c_7c83_472e_84a3_9a7637e0c1d2.slice - libcontainer container kubepods-burstable-podb15aab6c_7c83_472e_84a3_9a7637e0c1d2.slice.
Jan 29 16:15:00.016361 systemd[1]: Created slice kubepods-burstable-pode8c0f224_5f18_453b_af3c_62f6c0800f6d.slice - libcontainer container kubepods-burstable-pode8c0f224_5f18_453b_af3c_62f6c0800f6d.slice.
Jan 29 16:15:00.076474 kubelet[2621]: I0129 16:15:00.076351    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8c0f224-5f18-453b-af3c-62f6c0800f6d-config-volume\") pod \"coredns-7db6d8ff4d-r9gxm\" (UID: \"e8c0f224-5f18-453b-af3c-62f6c0800f6d\") " pod="kube-system/coredns-7db6d8ff4d-r9gxm"
Jan 29 16:15:00.076474 kubelet[2621]: I0129 16:15:00.076427    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzgcw\" (UniqueName: \"kubernetes.io/projected/e8c0f224-5f18-453b-af3c-62f6c0800f6d-kube-api-access-rzgcw\") pod \"coredns-7db6d8ff4d-r9gxm\" (UID: \"e8c0f224-5f18-453b-af3c-62f6c0800f6d\") " pod="kube-system/coredns-7db6d8ff4d-r9gxm"
Jan 29 16:15:00.076474 kubelet[2621]: I0129 16:15:00.076457    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b15aab6c-7c83-472e-84a3-9a7637e0c1d2-config-volume\") pod \"coredns-7db6d8ff4d-x75kk\" (UID: \"b15aab6c-7c83-472e-84a3-9a7637e0c1d2\") " pod="kube-system/coredns-7db6d8ff4d-x75kk"
Jan 29 16:15:00.076627 kubelet[2621]: I0129 16:15:00.076502    2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl9xr\" (UniqueName: \"kubernetes.io/projected/b15aab6c-7c83-472e-84a3-9a7637e0c1d2-kube-api-access-pl9xr\") pod \"coredns-7db6d8ff4d-x75kk\" (UID: \"b15aab6c-7c83-472e-84a3-9a7637e0c1d2\") " pod="kube-system/coredns-7db6d8ff4d-x75kk"
Jan 29 16:15:00.315666 containerd[1485]: time="2025-01-29T16:15:00.315613180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x75kk,Uid:b15aab6c-7c83-472e-84a3-9a7637e0c1d2,Namespace:kube-system,Attempt:0,}"
Jan 29 16:15:00.319372 containerd[1485]: time="2025-01-29T16:15:00.319327665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r9gxm,Uid:e8c0f224-5f18-453b-af3c-62f6c0800f6d,Namespace:kube-system,Attempt:0,}"
Jan 29 16:15:00.391775 containerd[1485]: time="2025-01-29T16:15:00.391714735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r9gxm,Uid:e8c0f224-5f18-453b-af3c-62f6c0800f6d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ab00c83e27e9cd916814bd339469e433bf72f7b5f2eef9790c1c0d382e7fcc3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:15:00.392035 kubelet[2621]: E0129 16:15:00.391981    2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ab00c83e27e9cd916814bd339469e433bf72f7b5f2eef9790c1c0d382e7fcc3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:15:00.392085 kubelet[2621]: E0129 16:15:00.392055    2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ab00c83e27e9cd916814bd339469e433bf72f7b5f2eef9790c1c0d382e7fcc3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-r9gxm"
Jan 29 16:15:00.392085 kubelet[2621]: E0129 16:15:00.392075    2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ab00c83e27e9cd916814bd339469e433bf72f7b5f2eef9790c1c0d382e7fcc3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-r9gxm"
Jan 29 16:15:00.392381 kubelet[2621]: E0129 16:15:00.392124    2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-r9gxm_kube-system(e8c0f224-5f18-453b-af3c-62f6c0800f6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-r9gxm_kube-system(e8c0f224-5f18-453b-af3c-62f6c0800f6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ab00c83e27e9cd916814bd339469e433bf72f7b5f2eef9790c1c0d382e7fcc3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-r9gxm" podUID="e8c0f224-5f18-453b-af3c-62f6c0800f6d"
Jan 29 16:15:00.392540 containerd[1485]: time="2025-01-29T16:15:00.392261056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x75kk,Uid:b15aab6c-7c83-472e-84a3-9a7637e0c1d2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0245083160a1e926420d43d2690e7f164488d89eb167474510a03b27d990fbbe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:15:00.392575 kubelet[2621]: E0129 16:15:00.392419    2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0245083160a1e926420d43d2690e7f164488d89eb167474510a03b27d990fbbe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:15:00.392575 kubelet[2621]: E0129 16:15:00.392444    2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0245083160a1e926420d43d2690e7f164488d89eb167474510a03b27d990fbbe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-x75kk"
Jan 29 16:15:00.392575 kubelet[2621]: E0129 16:15:00.392458    2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0245083160a1e926420d43d2690e7f164488d89eb167474510a03b27d990fbbe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-x75kk"
Jan 29 16:15:00.392575 kubelet[2621]: E0129 16:15:00.392483    2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x75kk_kube-system(b15aab6c-7c83-472e-84a3-9a7637e0c1d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x75kk_kube-system(b15aab6c-7c83-472e-84a3-9a7637e0c1d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0245083160a1e926420d43d2690e7f164488d89eb167474510a03b27d990fbbe\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-x75kk" podUID="b15aab6c-7c83-472e-84a3-9a7637e0c1d2"
Jan 29 16:15:00.828017 containerd[1485]: time="2025-01-29T16:15:00.827887995Z" level=info msg="CreateContainer within sandbox \"29bee2cf50690c2c94a28d0268fa66bb048e6cc0faab542104e0a974591bced0\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 29 16:15:00.836622 containerd[1485]: time="2025-01-29T16:15:00.836518568Z" level=info msg="CreateContainer within sandbox \"29bee2cf50690c2c94a28d0268fa66bb048e6cc0faab542104e0a974591bced0\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"ee985589fb795b5681bd0f19da1c20ed3d7768ec87af6677a808df92f4154e59\""
Jan 29 16:15:00.841362 containerd[1485]: time="2025-01-29T16:15:00.841303415Z" level=info msg="StartContainer for \"ee985589fb795b5681bd0f19da1c20ed3d7768ec87af6677a808df92f4154e59\""
Jan 29 16:15:00.854772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0245083160a1e926420d43d2690e7f164488d89eb167474510a03b27d990fbbe-shm.mount: Deactivated successfully.
Jan 29 16:15:00.867557 systemd[1]: Started cri-containerd-ee985589fb795b5681bd0f19da1c20ed3d7768ec87af6677a808df92f4154e59.scope - libcontainer container ee985589fb795b5681bd0f19da1c20ed3d7768ec87af6677a808df92f4154e59.
Jan 29 16:15:00.888337 containerd[1485]: time="2025-01-29T16:15:00.888267006Z" level=info msg="StartContainer for \"ee985589fb795b5681bd0f19da1c20ed3d7768ec87af6677a808df92f4154e59\" returns successfully"
Jan 29 16:15:01.992246 systemd-networkd[1408]: flannel.1: Link UP
Jan 29 16:15:01.992253 systemd-networkd[1408]: flannel.1: Gained carrier
Jan 29 16:15:03.690578 systemd-networkd[1408]: flannel.1: Gained IPv6LL
Jan 29 16:15:04.387029 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:40774.service - OpenSSH per-connection server daemon (10.0.0.1:40774).
Jan 29 16:15:04.435354 sshd[3280]: Accepted publickey for core from 10.0.0.1 port 40774 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:04.436784 sshd-session[3280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:04.441258 systemd-logind[1472]: New session 6 of user core.
Jan 29 16:15:04.450550 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 16:15:04.565423 sshd[3282]: Connection closed by 10.0.0.1 port 40774
Jan 29 16:15:04.565471 sshd-session[3280]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:04.568598 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:40774.service: Deactivated successfully.
Jan 29 16:15:04.570604 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 16:15:04.571343 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit.
Jan 29 16:15:04.572336 systemd-logind[1472]: Removed session 6.
Jan 29 16:15:09.580729 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:40782.service - OpenSSH per-connection server daemon (10.0.0.1:40782).
Jan 29 16:15:09.652702 sshd[3321]: Accepted publickey for core from 10.0.0.1 port 40782 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:09.654051 sshd-session[3321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:09.658502 systemd-logind[1472]: New session 7 of user core.
Jan 29 16:15:09.664562 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 16:15:09.774343 sshd[3323]: Connection closed by 10.0.0.1 port 40782
Jan 29 16:15:09.774281 sshd-session[3321]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:09.777666 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:40782.service: Deactivated successfully.
Jan 29 16:15:09.779430 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 16:15:09.780033 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit.
Jan 29 16:15:09.780810 systemd-logind[1472]: Removed session 7.
Jan 29 16:15:13.766962 containerd[1485]: time="2025-01-29T16:15:13.766904308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r9gxm,Uid:e8c0f224-5f18-453b-af3c-62f6c0800f6d,Namespace:kube-system,Attempt:0,}"
Jan 29 16:15:13.798183 systemd-networkd[1408]: cni0: Link UP
Jan 29 16:15:13.798189 systemd-networkd[1408]: cni0: Gained carrier
Jan 29 16:15:13.800890 systemd-networkd[1408]: cni0: Lost carrier
Jan 29 16:15:13.806568 systemd-networkd[1408]: veth8cc1d8d1: Link UP
Jan 29 16:15:13.809722 kernel: cni0: port 1(veth8cc1d8d1) entered blocking state
Jan 29 16:15:13.809870 kernel: cni0: port 1(veth8cc1d8d1) entered disabled state
Jan 29 16:15:13.809897 kernel: veth8cc1d8d1: entered allmulticast mode
Jan 29 16:15:13.809922 kernel: veth8cc1d8d1: entered promiscuous mode
Jan 29 16:15:13.810932 kernel: cni0: port 1(veth8cc1d8d1) entered blocking state
Jan 29 16:15:13.810967 kernel: cni0: port 1(veth8cc1d8d1) entered forwarding state
Jan 29 16:15:13.812484 kernel: cni0: port 1(veth8cc1d8d1) entered disabled state
Jan 29 16:15:13.819665 kernel: cni0: port 1(veth8cc1d8d1) entered blocking state
Jan 29 16:15:13.819760 kernel: cni0: port 1(veth8cc1d8d1) entered forwarding state
Jan 29 16:15:13.819865 systemd-networkd[1408]: veth8cc1d8d1: Gained carrier
Jan 29 16:15:13.820545 systemd-networkd[1408]: cni0: Gained carrier
Jan 29 16:15:13.821867 containerd[1485]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"}
Jan 29 16:15:13.821867 containerd[1485]: delegateAdd: netconf sent to delegate plugin:
Jan 29 16:15:13.843632 containerd[1485]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T16:15:13.843344118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:15:13.843632 containerd[1485]: time="2025-01-29T16:15:13.843434678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:15:13.843632 containerd[1485]: time="2025-01-29T16:15:13.843445878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:15:13.843632 containerd[1485]: time="2025-01-29T16:15:13.843533358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:15:13.866622 systemd[1]: Started cri-containerd-fb0f6e0668d53baf4c8d013ad470ce3e4f94d452006da305747844fefd558b30.scope - libcontainer container fb0f6e0668d53baf4c8d013ad470ce3e4f94d452006da305747844fefd558b30.
Jan 29 16:15:13.878499 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 16:15:13.895155 containerd[1485]: time="2025-01-29T16:15:13.895114312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r9gxm,Uid:e8c0f224-5f18-453b-af3c-62f6c0800f6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb0f6e0668d53baf4c8d013ad470ce3e4f94d452006da305747844fefd558b30\""
Jan 29 16:15:13.899884 containerd[1485]: time="2025-01-29T16:15:13.899848755Z" level=info msg="CreateContainer within sandbox \"fb0f6e0668d53baf4c8d013ad470ce3e4f94d452006da305747844fefd558b30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:15:13.915111 containerd[1485]: time="2025-01-29T16:15:13.915058125Z" level=info msg="CreateContainer within sandbox \"fb0f6e0668d53baf4c8d013ad470ce3e4f94d452006da305747844fefd558b30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1fef6e15ca04200d53dc1ff24acbe709e81f75f4f6ba16f603aa5cc03a21844f\""
Jan 29 16:15:13.915651 containerd[1485]: time="2025-01-29T16:15:13.915621445Z" level=info msg="StartContainer for \"1fef6e15ca04200d53dc1ff24acbe709e81f75f4f6ba16f603aa5cc03a21844f\""
Jan 29 16:15:13.940630 systemd[1]: Started cri-containerd-1fef6e15ca04200d53dc1ff24acbe709e81f75f4f6ba16f603aa5cc03a21844f.scope - libcontainer container 1fef6e15ca04200d53dc1ff24acbe709e81f75f4f6ba16f603aa5cc03a21844f.
Jan 29 16:15:13.967536 containerd[1485]: time="2025-01-29T16:15:13.967489079Z" level=info msg="StartContainer for \"1fef6e15ca04200d53dc1ff24acbe709e81f75f4f6ba16f603aa5cc03a21844f\" returns successfully"
Jan 29 16:15:14.766913 containerd[1485]: time="2025-01-29T16:15:14.766847890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x75kk,Uid:b15aab6c-7c83-472e-84a3-9a7637e0c1d2,Namespace:kube-system,Attempt:0,}"
Jan 29 16:15:14.787665 kernel: cni0: port 2(veth2b308312) entered blocking state
Jan 29 16:15:14.787852 kernel: cni0: port 2(veth2b308312) entered disabled state
Jan 29 16:15:14.787876 kernel: veth2b308312: entered allmulticast mode
Jan 29 16:15:14.787600 systemd-networkd[1408]: veth2b308312: Link UP
Jan 29 16:15:14.789425 kernel: veth2b308312: entered promiscuous mode
Jan 29 16:15:14.789508 kernel: cni0: port 2(veth2b308312) entered blocking state
Jan 29 16:15:14.789525 kernel: cni0: port 2(veth2b308312) entered forwarding state
Jan 29 16:15:14.795562 systemd-networkd[1408]: veth2b308312: Gained carrier
Jan 29 16:15:14.798486 containerd[1485]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"}
Jan 29 16:15:14.798486 containerd[1485]: delegateAdd: netconf sent to delegate plugin:
Jan 29 16:15:14.799734 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:55626.service - OpenSSH per-connection server daemon (10.0.0.1:55626).
Jan 29 16:15:14.817177 containerd[1485]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T16:15:14.817075601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:15:14.817312 containerd[1485]: time="2025-01-29T16:15:14.817171681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:15:14.817312 containerd[1485]: time="2025-01-29T16:15:14.817184801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:15:14.817403 containerd[1485]: time="2025-01-29T16:15:14.817301521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:15:14.847568 systemd[1]: Started cri-containerd-880c93614ab62e7b4bafccf33c978068001952d21f5cfb1fd60b34fa20ed80ef.scope - libcontainer container 880c93614ab62e7b4bafccf33c978068001952d21f5cfb1fd60b34fa20ed80ef.
Jan 29 16:15:14.850025 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 55626 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:14.853091 sshd-session[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:14.863375 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 16:15:14.865817 systemd-logind[1472]: New session 8 of user core.
Jan 29 16:15:14.872763 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 16:15:14.874833 kubelet[2621]: I0129 16:15:14.871877    2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-tw7kn" podStartSLOduration=15.752959091 podStartE2EDuration="19.871828595s" podCreationTimestamp="2025-01-29 16:14:55 +0000 UTC" firstStartedPulling="2025-01-29 16:14:55.698026103 +0000 UTC m=+16.021523770" lastFinishedPulling="2025-01-29 16:14:59.816895607 +0000 UTC m=+20.140393274" observedRunningTime="2025-01-29 16:15:01.840147887 +0000 UTC m=+22.163645554" watchObservedRunningTime="2025-01-29 16:15:14.871828595 +0000 UTC m=+35.195326262"
Jan 29 16:15:14.874833 kubelet[2621]: I0129 16:15:14.871994    2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-r9gxm" podStartSLOduration=19.871989595 podStartE2EDuration="19.871989595s" podCreationTimestamp="2025-01-29 16:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:15:14.871578155 +0000 UTC m=+35.195075822" watchObservedRunningTime="2025-01-29 16:15:14.871989595 +0000 UTC m=+35.195487262"
Jan 29 16:15:14.900999 containerd[1485]: time="2025-01-29T16:15:14.900672293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x75kk,Uid:b15aab6c-7c83-472e-84a3-9a7637e0c1d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"880c93614ab62e7b4bafccf33c978068001952d21f5cfb1fd60b34fa20ed80ef\""
Jan 29 16:15:14.904949 containerd[1485]: time="2025-01-29T16:15:14.904897575Z" level=info msg="CreateContainer within sandbox \"880c93614ab62e7b4bafccf33c978068001952d21f5cfb1fd60b34fa20ed80ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:15:14.918643 containerd[1485]: time="2025-01-29T16:15:14.918593183Z" level=info msg="CreateContainer within sandbox \"880c93614ab62e7b4bafccf33c978068001952d21f5cfb1fd60b34fa20ed80ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e644063ba957f45de882c2947019833382876f3f32d6d1f276dbd3c154afcfe\""
Jan 29 16:15:14.919113 containerd[1485]: time="2025-01-29T16:15:14.919086584Z" level=info msg="StartContainer for \"5e644063ba957f45de882c2947019833382876f3f32d6d1f276dbd3c154afcfe\""
Jan 29 16:15:14.958657 systemd[1]: Started cri-containerd-5e644063ba957f45de882c2947019833382876f3f32d6d1f276dbd3c154afcfe.scope - libcontainer container 5e644063ba957f45de882c2947019833382876f3f32d6d1f276dbd3c154afcfe.
Jan 29 16:15:14.985941 containerd[1485]: time="2025-01-29T16:15:14.985798905Z" level=info msg="StartContainer for \"5e644063ba957f45de882c2947019833382876f3f32d6d1f276dbd3c154afcfe\" returns successfully"
Jan 29 16:15:15.018523 sshd[3540]: Connection closed by 10.0.0.1 port 55626
Jan 29 16:15:15.016666 sshd-session[3501]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:15.028091 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:55626.service: Deactivated successfully.
Jan 29 16:15:15.030755 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 16:15:15.031990 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit.
Jan 29 16:15:15.040752 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:55632.service - OpenSSH per-connection server daemon (10.0.0.1:55632).
Jan 29 16:15:15.042848 systemd-logind[1472]: Removed session 8.
Jan 29 16:15:15.083203 sshd[3607]: Accepted publickey for core from 10.0.0.1 port 55632 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:15.084373 sshd-session[3607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:15.089078 systemd-logind[1472]: New session 9 of user core.
Jan 29 16:15:15.095601 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 16:15:15.211543 systemd-networkd[1408]: cni0: Gained IPv6LL
Jan 29 16:15:15.242738 sshd[3610]: Connection closed by 10.0.0.1 port 55632
Jan 29 16:15:15.243594 sshd-session[3607]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:15.265362 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:55646.service - OpenSSH per-connection server daemon (10.0.0.1:55646).
Jan 29 16:15:15.265996 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:55632.service: Deactivated successfully.
Jan 29 16:15:15.272595 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 16:15:15.275698 systemd-networkd[1408]: veth8cc1d8d1: Gained IPv6LL
Jan 29 16:15:15.275961 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit.
Jan 29 16:15:15.279487 systemd-logind[1472]: Removed session 9.
Jan 29 16:15:15.316516 sshd[3618]: Accepted publickey for core from 10.0.0.1 port 55646 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:15.317737 sshd-session[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:15.322687 systemd-logind[1472]: New session 10 of user core.
Jan 29 16:15:15.333610 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 16:15:15.447439 sshd[3623]: Connection closed by 10.0.0.1 port 55646
Jan 29 16:15:15.447980 sshd-session[3618]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:15.450782 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:55646.service: Deactivated successfully.
Jan 29 16:15:15.452771 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 16:15:15.454436 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit.
Jan 29 16:15:15.455995 systemd-logind[1472]: Removed session 10.
Jan 29 16:15:15.791766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1187104176.mount: Deactivated successfully.
Jan 29 16:15:16.106634 systemd-networkd[1408]: veth2b308312: Gained IPv6LL
Jan 29 16:15:20.329278 kubelet[2621]: I0129 16:15:20.327926    2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x75kk" podStartSLOduration=25.327906665 podStartE2EDuration="25.327906665s" podCreationTimestamp="2025-01-29 16:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:15:15.872999055 +0000 UTC m=+36.196496762" watchObservedRunningTime="2025-01-29 16:15:20.327906665 +0000 UTC m=+40.651404332"
Jan 29 16:15:20.460058 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:55650.service - OpenSSH per-connection server daemon (10.0.0.1:55650).
Jan 29 16:15:20.510421 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 55650 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:20.510910 sshd-session[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:20.515687 systemd-logind[1472]: New session 11 of user core.
Jan 29 16:15:20.524605 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 16:15:20.640329 sshd[3665]: Connection closed by 10.0.0.1 port 55650
Jan 29 16:15:20.640708 sshd-session[3663]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:20.653798 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:55650.service: Deactivated successfully.
Jan 29 16:15:20.657091 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 16:15:20.658004 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit.
Jan 29 16:15:20.665001 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:55666.service - OpenSSH per-connection server daemon (10.0.0.1:55666).
Jan 29 16:15:20.668331 systemd-logind[1472]: Removed session 11.
Jan 29 16:15:20.710172 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 55666 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:20.711591 sshd-session[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:20.718456 systemd-logind[1472]: New session 12 of user core.
Jan 29 16:15:20.732488 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 16:15:20.931724 sshd[3680]: Connection closed by 10.0.0.1 port 55666
Jan 29 16:15:20.932533 sshd-session[3677]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:20.943783 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:55666.service: Deactivated successfully.
Jan 29 16:15:20.945290 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 16:15:20.946003 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit.
Jan 29 16:15:20.951798 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:55674.service - OpenSSH per-connection server daemon (10.0.0.1:55674).
Jan 29 16:15:20.952602 systemd-logind[1472]: Removed session 12.
Jan 29 16:15:20.992905 sshd[3690]: Accepted publickey for core from 10.0.0.1 port 55674 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:20.994174 sshd-session[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:20.999363 systemd-logind[1472]: New session 13 of user core.
Jan 29 16:15:21.010517 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 16:15:22.126063 sshd[3693]: Connection closed by 10.0.0.1 port 55674
Jan 29 16:15:22.127687 sshd-session[3690]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:22.140668 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:55674.service: Deactivated successfully.
Jan 29 16:15:22.142329 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 16:15:22.144169 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit.
Jan 29 16:15:22.152469 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:55676.service - OpenSSH per-connection server daemon (10.0.0.1:55676).
Jan 29 16:15:22.154201 systemd-logind[1472]: Removed session 13.
Jan 29 16:15:22.194981 sshd[3719]: Accepted publickey for core from 10.0.0.1 port 55676 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:22.196247 sshd-session[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:22.200480 systemd-logind[1472]: New session 14 of user core.
Jan 29 16:15:22.211652 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 16:15:22.442493 sshd[3735]: Connection closed by 10.0.0.1 port 55676
Jan 29 16:15:22.441188 sshd-session[3719]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:22.454643 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:55676.service: Deactivated successfully.
Jan 29 16:15:22.460033 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 16:15:22.462289 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit.
Jan 29 16:15:22.468710 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:55684.service - OpenSSH per-connection server daemon (10.0.0.1:55684).
Jan 29 16:15:22.470751 systemd-logind[1472]: Removed session 14.
Jan 29 16:15:22.522652 sshd[3746]: Accepted publickey for core from 10.0.0.1 port 55684 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:22.524034 sshd-session[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:22.528192 systemd-logind[1472]: New session 15 of user core.
Jan 29 16:15:22.534565 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 16:15:22.643684 sshd[3749]: Connection closed by 10.0.0.1 port 55684
Jan 29 16:15:22.644087 sshd-session[3746]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:22.647491 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:55684.service: Deactivated successfully.
Jan 29 16:15:22.649302 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 16:15:22.650575 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit.
Jan 29 16:15:22.651641 systemd-logind[1472]: Removed session 15.
Jan 29 16:15:27.656137 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:34962.service - OpenSSH per-connection server daemon (10.0.0.1:34962).
Jan 29 16:15:27.700410 sshd[3788]: Accepted publickey for core from 10.0.0.1 port 34962 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:27.701682 sshd-session[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:27.705504 systemd-logind[1472]: New session 16 of user core.
Jan 29 16:15:27.717603 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 16:15:27.820367 sshd[3790]: Connection closed by 10.0.0.1 port 34962
Jan 29 16:15:27.820889 sshd-session[3788]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:27.824511 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:34962.service: Deactivated successfully.
Jan 29 16:15:27.827002 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 16:15:27.827832 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit.
Jan 29 16:15:27.828565 systemd-logind[1472]: Removed session 16.
Jan 29 16:15:32.831583 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:41928.service - OpenSSH per-connection server daemon (10.0.0.1:41928).
Jan 29 16:15:32.874583 sshd[3825]: Accepted publickey for core from 10.0.0.1 port 41928 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:32.875684 sshd-session[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:32.879453 systemd-logind[1472]: New session 17 of user core.
Jan 29 16:15:32.888566 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 16:15:32.991951 sshd[3827]: Connection closed by 10.0.0.1 port 41928
Jan 29 16:15:32.992271 sshd-session[3825]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:32.995356 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:41928.service: Deactivated successfully.
Jan 29 16:15:32.997175 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 16:15:32.998938 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit.
Jan 29 16:15:33.000082 systemd-logind[1472]: Removed session 17.
Jan 29 16:15:38.003755 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:41932.service - OpenSSH per-connection server daemon (10.0.0.1:41932).
Jan 29 16:15:38.047596 sshd[3862]: Accepted publickey for core from 10.0.0.1 port 41932 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU
Jan 29 16:15:38.048718 sshd-session[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:38.053357 systemd-logind[1472]: New session 18 of user core.
Jan 29 16:15:38.065588 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 16:15:38.181462 sshd[3864]: Connection closed by 10.0.0.1 port 41932
Jan 29 16:15:38.181933 sshd-session[3862]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:38.184556 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 16:15:38.185752 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:41932.service: Deactivated successfully.
Jan 29 16:15:38.187710 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit.
Jan 29 16:15:38.188395 systemd-logind[1472]: Removed session 18.