Dec 13 01:16:47.902562 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 01:16:47.902584 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:16:47.902594 kernel: KASLR enabled
Dec 13 01:16:47.902600 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:16:47.902605 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 
Dec 13 01:16:47.902611 kernel: random: crng init done
Dec 13 01:16:47.902618 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:16:47.902624 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Dec 13 01:16:47.902630 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS  BXPC     00000001      01000013)
Dec 13 01:16:47.902637 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:16:47.902643 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:16:47.902649 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:16:47.902655 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:16:47.902661 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:16:47.902673 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:16:47.902681 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:16:47.902688 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:16:47.902694 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:16:47.902700 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 01:16:47.902706 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:16:47.902713 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:16:47.902719 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Dec 13 01:16:47.902725 kernel: Zone ranges:
Dec 13 01:16:47.902731 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:16:47.902737 kernel:   DMA32    empty
Dec 13 01:16:47.902745 kernel:   Normal   empty
Dec 13 01:16:47.902751 kernel: Movable zone start for each node
Dec 13 01:16:47.902757 kernel: Early memory node ranges
Dec 13 01:16:47.902764 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Dec 13 01:16:47.902770 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Dec 13 01:16:47.902776 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Dec 13 01:16:47.902782 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 01:16:47.902788 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 01:16:47.902794 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 01:16:47.902800 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 01:16:47.902807 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:16:47.902813 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 01:16:47.902820 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:16:47.902827 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 01:16:47.902833 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:16:47.902842 kernel: psci: Trusted OS migration not required
Dec 13 01:16:47.902848 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:16:47.902855 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 01:16:47.902863 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:16:47.902870 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:16:47.902877 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Dec 13 01:16:47.902883 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:16:47.902890 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:16:47.902897 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 01:16:47.902903 kernel: CPU features: detected: Spectre-v4
Dec 13 01:16:47.902910 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:16:47.902916 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 01:16:47.902923 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 01:16:47.902931 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 01:16:47.902938 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 01:16:47.902944 kernel: alternatives: applying boot alternatives
Dec 13 01:16:47.902952 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:16:47.902959 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:16:47.902966 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:16:47.902972 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:16:47.902979 kernel: Fallback order for Node 0: 0 
Dec 13 01:16:47.902986 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Dec 13 01:16:47.902994 kernel: Policy zone: DMA
Dec 13 01:16:47.903001 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:16:47.903009 kernel: software IO TLB: area num 4.
Dec 13 01:16:47.903016 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Dec 13 01:16:47.903023 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved)
Dec 13 01:16:47.903030 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:16:47.903036 kernel: trace event string verifier disabled
Dec 13 01:16:47.903043 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:16:47.903050 kernel: rcu:         RCU event tracing is enabled.
Dec 13 01:16:47.903057 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:16:47.903064 kernel:         Trampoline variant of Tasks RCU enabled.
Dec 13 01:16:47.903071 kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 01:16:47.903078 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:16:47.903085 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:16:47.903093 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:16:47.903099 kernel: GICv3: 256 SPIs implemented
Dec 13 01:16:47.903106 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:16:47.903113 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:16:47.903119 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 01:16:47.903126 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 01:16:47.903133 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 01:16:47.903140 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:16:47.903147 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:16:47.903153 kernel: GICv3: using LPI property table @0x00000000400f0000
Dec 13 01:16:47.903160 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Dec 13 01:16:47.903168 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:16:47.903175 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:47.903182 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 01:16:47.903189 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 01:16:47.903196 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 01:16:47.903202 kernel: arm-pv: using stolen time PV
Dec 13 01:16:47.903215 kernel: Console: colour dummy device 80x25
Dec 13 01:16:47.903223 kernel: ACPI: Core revision 20230628
Dec 13 01:16:47.903230 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 01:16:47.903237 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:16:47.903245 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:16:47.903252 kernel: landlock: Up and running.
Dec 13 01:16:47.903259 kernel: SELinux:  Initializing.
Dec 13 01:16:47.903266 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:16:47.903273 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:16:47.903280 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:47.903287 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:47.903294 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:16:47.903301 kernel: rcu:         Max phase no-delay instances is 400.
Dec 13 01:16:47.903308 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 01:16:47.903315 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 01:16:47.903322 kernel: Remapping and enabling EFI services.
Dec 13 01:16:47.903329 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:16:47.903336 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:16:47.903343 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 01:16:47.903350 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Dec 13 01:16:47.903357 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:47.903363 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 01:16:47.903370 kernel: Detected PIPT I-cache on CPU2
Dec 13 01:16:47.903378 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 01:16:47.903385 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Dec 13 01:16:47.903397 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:47.903407 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 01:16:47.903414 kernel: Detected PIPT I-cache on CPU3
Dec 13 01:16:47.903421 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 01:16:47.903428 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Dec 13 01:16:47.903435 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:47.903443 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 01:16:47.903451 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:16:47.903458 kernel: SMP: Total of 4 processors activated.
Dec 13 01:16:47.903481 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:16:47.903489 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 01:16:47.903496 kernel: CPU features: detected: Common not Private translations
Dec 13 01:16:47.903503 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:16:47.903510 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 01:16:47.903518 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 01:16:47.903527 kernel: CPU features: detected: LSE atomic instructions
Dec 13 01:16:47.903534 kernel: CPU features: detected: Privileged Access Never
Dec 13 01:16:47.903541 kernel: CPU features: detected: RAS Extension Support
Dec 13 01:16:47.903549 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 01:16:47.903556 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:16:47.903563 kernel: alternatives: applying system-wide alternatives
Dec 13 01:16:47.903570 kernel: devtmpfs: initialized
Dec 13 01:16:47.903578 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:16:47.903585 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:16:47.903593 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:16:47.903601 kernel: SMBIOS 3.0.0 present.
Dec 13 01:16:47.903608 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Dec 13 01:16:47.903615 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:16:47.903623 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:16:47.903630 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:16:47.903637 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:16:47.903645 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:16:47.903652 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Dec 13 01:16:47.903660 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:16:47.903668 kernel: cpuidle: using governor menu
Dec 13 01:16:47.903675 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:16:47.903682 kernel: ASID allocator initialised with 32768 entries
Dec 13 01:16:47.903689 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:16:47.903697 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:16:47.903704 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 01:16:47.903711 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 01:16:47.903718 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:16:47.903727 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:16:47.903734 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:16:47.903741 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:16:47.903749 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:16:47.903756 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:16:47.903763 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:16:47.903771 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:16:47.903778 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:16:47.903785 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:16:47.903793 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:16:47.903800 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:16:47.903807 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:16:47.903815 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:16:47.903822 kernel: ACPI: Interpreter enabled
Dec 13 01:16:47.903829 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:16:47.903836 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:16:47.903843 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 01:16:47.903850 kernel: printk: console [ttyAMA0] enabled
Dec 13 01:16:47.903859 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:16:47.903993 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:16:47.904069 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:16:47.904137 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:16:47.904215 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 01:16:47.904294 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 01:16:47.904307 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Dec 13 01:16:47.904318 kernel: PCI host bridge to bus 0000:00
Dec 13 01:16:47.904425 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 01:16:47.904542 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Dec 13 01:16:47.904609 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 01:16:47.904681 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:16:47.904762 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 01:16:47.904843 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:16:47.904919 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Dec 13 01:16:47.904988 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 01:16:47.905056 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:16:47.905124 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:16:47.905192 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 01:16:47.905272 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Dec 13 01:16:47.905336 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 01:16:47.905403 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Dec 13 01:16:47.905480 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 01:16:47.905492 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:16:47.905500 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:16:47.905508 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:16:47.905515 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:16:47.905523 kernel: iommu: Default domain type: Translated
Dec 13 01:16:47.905531 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:16:47.905541 kernel: efivars: Registered efivars operations
Dec 13 01:16:47.905549 kernel: vgaarb: loaded
Dec 13 01:16:47.905556 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:16:47.905563 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:16:47.905571 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:16:47.905578 kernel: pnp: PnP ACPI init
Dec 13 01:16:47.905658 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 01:16:47.905670 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:16:47.905679 kernel: NET: Registered PF_INET protocol family
Dec 13 01:16:47.905686 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:16:47.905694 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:16:47.905701 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:16:47.905709 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:16:47.905716 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:16:47.905723 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:16:47.905731 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:16:47.905738 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:16:47.905747 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:16:47.905755 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:16:47.905762 kernel: kvm [1]: HYP mode not available
Dec 13 01:16:47.905769 kernel: Initialise system trusted keyrings
Dec 13 01:16:47.905777 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:16:47.905784 kernel: Key type asymmetric registered
Dec 13 01:16:47.905792 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:16:47.905799 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:16:47.905806 kernel: io scheduler mq-deadline registered
Dec 13 01:16:47.905815 kernel: io scheduler kyber registered
Dec 13 01:16:47.905823 kernel: io scheduler bfq registered
Dec 13 01:16:47.905830 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:16:47.905837 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:16:47.905845 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:16:47.905916 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 01:16:47.905926 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:16:47.905933 kernel: thunder_xcv, ver 1.0
Dec 13 01:16:47.905940 kernel: thunder_bgx, ver 1.0
Dec 13 01:16:47.905949 kernel: nicpf, ver 1.0
Dec 13 01:16:47.905957 kernel: nicvf, ver 1.0
Dec 13 01:16:47.906036 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:16:47.906103 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:16:47 UTC (1734052607)
Dec 13 01:16:47.906113 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:16:47.906121 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 01:16:47.906128 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:16:47.906136 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:16:47.906145 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:16:47.906153 kernel: Segment Routing with IPv6
Dec 13 01:16:47.906160 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:16:47.906168 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:16:47.906175 kernel: Key type dns_resolver registered
Dec 13 01:16:47.906182 kernel: registered taskstats version 1
Dec 13 01:16:47.906190 kernel: Loading compiled-in X.509 certificates
Dec 13 01:16:47.906197 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:16:47.906205 kernel: Key type .fscrypt registered
Dec 13 01:16:47.906221 kernel: Key type fscrypt-provisioning registered
Dec 13 01:16:47.906229 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:16:47.906236 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:16:47.906244 kernel: ima: No architecture policies found
Dec 13 01:16:47.906252 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:16:47.906259 kernel: clk: Disabling unused clocks
Dec 13 01:16:47.906266 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:16:47.906273 kernel: Run /init as init process
Dec 13 01:16:47.906284 kernel:   with arguments:
Dec 13 01:16:47.906294 kernel:     /init
Dec 13 01:16:47.906301 kernel:   with environment:
Dec 13 01:16:47.906309 kernel:     HOME=/
Dec 13 01:16:47.906316 kernel:     TERM=linux
Dec 13 01:16:47.906324 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:16:47.906333 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:16:47.906343 systemd[1]: Detected virtualization kvm.
Dec 13 01:16:47.906351 systemd[1]: Detected architecture arm64.
Dec 13 01:16:47.906360 systemd[1]: Running in initrd.
Dec 13 01:16:47.906368 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:16:47.906375 systemd[1]: Hostname set to <localhost>.
Dec 13 01:16:47.906384 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:16:47.906392 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:16:47.906400 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:47.906408 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:47.906417 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:16:47.906426 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:16:47.906435 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:16:47.906443 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:16:47.906453 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:16:47.906470 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:16:47.906479 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:47.906487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:16:47.906497 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:16:47.906505 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:16:47.906513 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:16:47.906521 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:16:47.906529 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:16:47.906537 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:16:47.906545 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:16:47.906553 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:16:47.906563 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:47.906571 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:47.906579 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:47.906587 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:16:47.906595 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:16:47.906603 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:16:47.906611 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:16:47.906619 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:16:47.906627 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:16:47.906637 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:16:47.906645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:47.906653 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:16:47.906661 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:47.906669 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:16:47.906678 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:16:47.906688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:16:47.906697 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:16:47.906705 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:47.906733 systemd-journald[237]: Collecting audit messages is disabled.
Dec 13 01:16:47.906755 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:47.906764 systemd-journald[237]: Journal started
Dec 13 01:16:47.906783 systemd-journald[237]: Runtime Journal (/run/log/journal/f062045c02b0435e9093e5ce98ca94e0) is 5.9M, max 47.3M, 41.4M free.
Dec 13 01:16:47.887308 systemd-modules-load[239]: Inserted module 'overlay'
Dec 13 01:16:47.910909 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:16:47.910930 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:16:47.912149 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:47.915348 kernel: Bridge firewalling registered
Dec 13 01:16:47.913913 systemd-modules-load[239]: Inserted module 'br_netfilter'
Dec 13 01:16:47.914922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:47.918318 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:16:47.920188 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:16:47.929250 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:47.930911 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:47.932704 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:47.950644 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:16:47.952933 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:47.960850 dracut-cmdline[275]: dracut-dracut-053
Dec 13 01:16:47.963265 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:16:47.978678 systemd-resolved[277]: Positive Trust Anchors:
Dec 13 01:16:47.978695 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:16:47.978726 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:16:47.983415 systemd-resolved[277]: Defaulting to hostname 'linux'.
Dec 13 01:16:47.984412 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:47.987736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:48.033492 kernel: SCSI subsystem initialized
Dec 13 01:16:48.038482 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:16:48.045489 kernel: iscsi: registered transport (tcp)
Dec 13 01:16:48.058490 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:16:48.058505 kernel: QLogic iSCSI HBA Driver
Dec 13 01:16:48.103145 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:16:48.114610 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:16:48.137394 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:16:48.137433 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:16:48.139124 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:16:48.185497 kernel: raid6: neonx8   gen() 15728 MB/s
Dec 13 01:16:48.202489 kernel: raid6: neonx4   gen() 15644 MB/s
Dec 13 01:16:48.219486 kernel: raid6: neonx2   gen() 13278 MB/s
Dec 13 01:16:48.236485 kernel: raid6: neonx1   gen() 10485 MB/s
Dec 13 01:16:48.253479 kernel: raid6: int64x8  gen()  6922 MB/s
Dec 13 01:16:48.270494 kernel: raid6: int64x4  gen()  7315 MB/s
Dec 13 01:16:48.287490 kernel: raid6: int64x2  gen()  6123 MB/s
Dec 13 01:16:48.304694 kernel: raid6: int64x1  gen()  5047 MB/s
Dec 13 01:16:48.304721 kernel: raid6: using algorithm neonx8 gen() 15728 MB/s
Dec 13 01:16:48.322654 kernel: raid6: .... xor() 11934 MB/s, rmw enabled
Dec 13 01:16:48.322694 kernel: raid6: using neon recovery algorithm
Dec 13 01:16:48.327486 kernel: xor: measuring software checksum speed
Dec 13 01:16:48.328807 kernel:    8regs           : 17477 MB/sec
Dec 13 01:16:48.328822 kernel:    32regs          : 18951 MB/sec
Dec 13 01:16:48.329507 kernel:    arm64_neon      : 27025 MB/sec
Dec 13 01:16:48.329525 kernel: xor: using function: arm64_neon (27025 MB/sec)
Dec 13 01:16:48.379483 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:16:48.389988 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:16:48.398609 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:48.412171 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Dec 13 01:16:48.415856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:16:48.424617 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:16:48.435450 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Dec 13 01:16:48.461695 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:16:48.470701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:16:48.508918 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:48.516630 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:16:48.529416 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:16:48.531478 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:16:48.533199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:48.535740 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:16:48.547899 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:16:48.558239 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 01:16:48.573455 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:16:48.573569 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:16:48.573587 kernel: GPT:9289727 != 19775487
Dec 13 01:16:48.573597 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:16:48.573606 kernel: GPT:9289727 != 19775487
Dec 13 01:16:48.573616 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:16:48.573626 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:48.565049 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:16:48.568717 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:16:48.568828 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:48.571682 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:48.572789 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:16:48.572928 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:48.574028 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:48.583687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:48.595105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:48.597498 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (511)
Dec 13 01:16:48.599844 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (517)
Dec 13 01:16:48.603645 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:16:48.608470 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:16:48.616215 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:16:48.620434 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:16:48.621619 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:16:48.632594 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:16:48.634400 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:48.640194 disk-uuid[552]: Primary Header is updated.
Dec 13 01:16:48.640194 disk-uuid[552]: Secondary Entries is updated.
Dec 13 01:16:48.640194 disk-uuid[552]: Secondary Header is updated.
Dec 13 01:16:48.644796 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:48.657948 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:49.659487 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:49.659768 disk-uuid[554]: The operation has completed successfully.
Dec 13 01:16:49.677455 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:16:49.677559 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:16:49.702623 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:16:49.705359 sh[575]: Success
Dec 13 01:16:49.718483 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:16:49.747642 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:16:49.758837 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:16:49.760685 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:16:49.771104 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:16:49.771149 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:49.771160 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:16:49.773044 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:16:49.773061 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:16:49.776774 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:16:49.778096 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:16:49.787614 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:16:49.789663 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:16:49.796296 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:49.796331 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:49.796342 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:49.799624 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:49.806708 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:16:49.809511 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:49.813869 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:16:49.820915 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:16:49.884452 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:16:49.895676 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:16:49.918458 ignition[668]: Ignition 2.19.0
Dec 13 01:16:49.918475 ignition[668]: Stage: fetch-offline
Dec 13 01:16:49.918508 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:49.918516 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:49.918662 ignition[668]: parsed url from cmdline: ""
Dec 13 01:16:49.918665 ignition[668]: no config URL provided
Dec 13 01:16:49.918669 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:16:49.918675 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:16:49.918697 ignition[668]: op(1): [started]  loading QEMU firmware config module
Dec 13 01:16:49.918702 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:16:49.926323 ignition[668]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:16:49.926342 ignition[668]: QEMU firmware config was not found. Ignoring...
Dec 13 01:16:49.927505 systemd-networkd[765]: lo: Link UP
Dec 13 01:16:49.927508 systemd-networkd[765]: lo: Gained carrier
Dec 13 01:16:49.928113 systemd-networkd[765]: Enumeration completed
Dec 13 01:16:49.928247 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:16:49.928516 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:49.928519 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:16:49.929319 systemd-networkd[765]: eth0: Link UP
Dec 13 01:16:49.929322 systemd-networkd[765]: eth0: Gained carrier
Dec 13 01:16:49.929327 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:49.930002 systemd[1]: Reached target network.target - Network.
Dec 13 01:16:49.949520 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:16:49.959216 ignition[668]: parsing config with SHA512: 4fc646aec3285f25a336661eeb591b42e4a485aaf3b9825c11a039dea04110c7d23ffbb5e1873d6dc2df5040257d91e07e387a7d77164e3abf0bba473e4a9a87
Dec 13 01:16:49.963816 unknown[668]: fetched base config from "system"
Dec 13 01:16:49.963830 unknown[668]: fetched user config from "qemu"
Dec 13 01:16:49.964364 ignition[668]: fetch-offline: fetch-offline passed
Dec 13 01:16:49.965944 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:16:49.964442 ignition[668]: Ignition finished successfully
Dec 13 01:16:49.967555 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:16:49.979603 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:16:49.989267 ignition[772]: Ignition 2.19.0
Dec 13 01:16:49.989277 ignition[772]: Stage: kargs
Dec 13 01:16:49.989430 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:49.989438 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:49.990307 ignition[772]: kargs: kargs passed
Dec 13 01:16:49.993836 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:16:49.990346 ignition[772]: Ignition finished successfully
Dec 13 01:16:50.005604 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:16:50.014225 ignition[780]: Ignition 2.19.0
Dec 13 01:16:50.014234 ignition[780]: Stage: disks
Dec 13 01:16:50.014385 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:50.016821 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:16:50.014393 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:50.018418 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:16:50.015209 ignition[780]: disks: disks passed
Dec 13 01:16:50.020105 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:16:50.015251 ignition[780]: Ignition finished successfully
Dec 13 01:16:50.022067 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:16:50.023855 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:16:50.025284 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:16:50.039606 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:16:50.048523 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:16:50.052841 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:16:50.065556 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:16:50.105486 kernel: EXT4-fs (vda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:16:50.105959 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:16:50.107138 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:16:50.115544 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:50.117088 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:16:50.118578 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:16:50.118614 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:16:50.125730 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Dec 13 01:16:50.125757 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:50.118634 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:16:50.131208 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:50.131230 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:50.131240 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:50.122738 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:16:50.124424 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:16:50.132617 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:50.165887 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:16:50.169695 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:16:50.172750 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:16:50.176177 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:16:50.242237 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:16:50.255573 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:16:50.257783 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:16:50.262479 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:50.276553 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:16:50.278321 ignition[911]: INFO     : Ignition 2.19.0
Dec 13 01:16:50.278321 ignition[911]: INFO     : Stage: mount
Dec 13 01:16:50.278321 ignition[911]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:50.278321 ignition[911]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:50.283422 ignition[911]: INFO     : mount: mount passed
Dec 13 01:16:50.283422 ignition[911]: INFO     : Ignition finished successfully
Dec 13 01:16:50.280268 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:16:50.289558 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:16:50.769917 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:16:50.778702 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:50.785271 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
Dec 13 01:16:50.785301 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:50.785313 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:50.786896 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:50.789486 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:50.790130 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:50.805980 ignition[941]: INFO     : Ignition 2.19.0
Dec 13 01:16:50.805980 ignition[941]: INFO     : Stage: files
Dec 13 01:16:50.807729 ignition[941]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:50.807729 ignition[941]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:50.807729 ignition[941]: DEBUG    : files: compiled without relabeling support, skipping
Dec 13 01:16:50.811043 ignition[941]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Dec 13 01:16:50.811043 ignition[941]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:16:50.811043 ignition[941]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:16:50.811043 ignition[941]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Dec 13 01:16:50.811043 ignition[941]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:16:50.810055 unknown[941]: wrote ssh authorized keys file for user: core
Dec 13 01:16:50.818295 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:16:50.818295 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:16:51.065004 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:16:51.246656 systemd-networkd[765]: eth0: Gained IPv6LL
Dec 13 01:16:51.254987 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:16:51.257084 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 01:16:51.514311 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:16:51.900013 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:16:51.900013 ignition[941]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Dec 13 01:16:51.903700 ignition[941]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:16:51.903700 ignition[941]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:16:51.903700 ignition[941]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:16:51.903700 ignition[941]: INFO     : files: op(d): [started]  processing unit "coreos-metadata.service"
Dec 13 01:16:51.903700 ignition[941]: INFO     : files: op(d): op(e): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:16:51.903700 ignition[941]: INFO     : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:16:51.903700 ignition[941]: INFO     : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 01:16:51.903700 ignition[941]: INFO     : files: op(f): [started]  setting preset to disabled for "coreos-metadata.service"
Dec 13 01:16:51.924300 ignition[941]: INFO     : files: op(f): op(10): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:16:51.927765 ignition[941]: INFO     : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:16:51.929227 ignition[941]: INFO     : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:16:51.929227 ignition[941]: INFO     : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Dec 13 01:16:51.929227 ignition[941]: INFO     : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:16:51.929227 ignition[941]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:16:51.929227 ignition[941]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:16:51.929227 ignition[941]: INFO     : files: files passed
Dec 13 01:16:51.929227 ignition[941]: INFO     : Ignition finished successfully
Dec 13 01:16:51.930689 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:16:51.941639 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:16:51.943529 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:16:51.946379 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:16:51.946475 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:16:51.951814 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:16:51.955388 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:51.955388 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:51.958402 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:51.958581 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:16:51.961442 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:16:51.971658 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:16:51.990990 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:16:51.991125 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:16:51.993284 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:16:51.995138 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:16:51.996937 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:16:52.004609 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:16:52.015874 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:16:52.018228 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:16:52.029682 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:52.030869 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:52.032960 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:16:52.034689 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:16:52.034801 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:16:52.037170 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:16:52.038236 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:16:52.040009 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:16:52.041807 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:16:52.043559 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:16:52.045514 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:16:52.047447 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:16:52.049525 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:16:52.051279 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:16:52.053247 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:16:52.054809 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:16:52.054935 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:16:52.057377 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:16:52.059258 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:52.061144 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:16:52.065517 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:52.066760 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:16:52.066875 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:16:52.069727 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:16:52.069850 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:16:52.071754 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:16:52.073291 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:16:52.073417 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:52.075429 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:16:52.076987 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:16:52.078726 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:16:52.078813 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:16:52.080854 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:16:52.080932 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:16:52.082495 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:16:52.082605 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:16:52.084349 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:16:52.084451 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:16:52.097650 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:16:52.098576 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:16:52.098720 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:52.101807 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:16:52.103362 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:16:52.103542 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:52.106638 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:16:52.109526 ignition[996]: INFO     : Ignition 2.19.0
Dec 13 01:16:52.109526 ignition[996]: INFO     : Stage: umount
Dec 13 01:16:52.109526 ignition[996]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:52.109526 ignition[996]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:52.107220 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:16:52.117719 ignition[996]: INFO     : umount: umount passed
Dec 13 01:16:52.117719 ignition[996]: INFO     : Ignition finished successfully
Dec 13 01:16:52.111287 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:16:52.111384 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:16:52.113400 systemd[1]: Stopped target network.target - Network.
Dec 13 01:16:52.114511 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:16:52.114580 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:16:52.116718 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:16:52.116765 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:16:52.118716 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:16:52.118761 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:16:52.120351 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:16:52.120402 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:16:52.122334 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:16:52.124486 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:52.126318 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:16:52.126944 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:16:52.127040 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:16:52.128664 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:16:52.128754 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:16:52.134244 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:16:52.134294 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:16:52.138512 systemd-networkd[765]: eth0: DHCPv6 lease lost
Dec 13 01:16:52.140424 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:16:52.140538 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:16:52.142553 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:16:52.142613 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:52.152708 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:16:52.154608 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:16:52.154673 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:16:52.156665 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:52.161236 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:16:52.161325 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:52.164852 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:16:52.164935 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:52.166933 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:16:52.166985 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:52.168969 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:16:52.169017 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:52.173040 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:16:52.173136 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:16:52.177807 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:16:52.177930 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:16:52.180031 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:16:52.180069 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:52.181689 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:16:52.181725 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:52.183662 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:16:52.183708 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:16:52.186324 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:16:52.186369 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:16:52.189179 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:16:52.189236 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:52.199625 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:16:52.201036 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:16:52.201110 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:52.203274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:16:52.203324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:52.205538 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:16:52.205668 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:16:52.207862 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:16:52.210053 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:16:52.220148 systemd[1]: Switching root.
Dec 13 01:16:52.250912 systemd-journald[237]: Journal stopped
Dec 13 01:16:52.972586 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:16:52.972642 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:16:52.972655 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:16:52.972668 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:16:52.972680 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:16:52.972690 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:16:52.972700 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:16:52.972709 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Dec 13 01:16:52.972723 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Dec 13 01:16:52.972734 kernel: audit: type=1403 audit(1734052612.390:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:16:52.972745 systemd[1]: Successfully loaded SELinux policy in 32.175ms.
Dec 13 01:16:52.972767 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.775ms.
Dec 13 01:16:52.972779 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:16:52.972791 systemd[1]: Detected virtualization kvm.
Dec 13 01:16:52.972802 systemd[1]: Detected architecture arm64.
Dec 13 01:16:52.972813 systemd[1]: Detected first boot.
Dec 13 01:16:52.972824 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:16:52.972835 zram_generator::config[1039]: No configuration found.
Dec 13 01:16:52.972848 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:16:52.972859 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:16:52.972872 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:16:52.972883 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:16:52.972894 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:16:52.972906 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:16:52.972917 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:16:52.972929 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:16:52.972940 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:16:52.972952 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:16:52.972964 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:16:52.972976 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:16:52.972988 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:52.972999 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:52.973010 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:16:52.973022 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:16:52.973033 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:16:52.973045 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:16:52.973056 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 01:16:52.973069 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:52.973080 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:16:52.973091 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:16:52.973102 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:16:52.973114 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:16:52.973125 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:52.973136 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:16:52.973147 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:16:52.973160 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:16:52.973172 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:16:52.973183 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:16:52.973199 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:52.973211 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:52.973222 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:52.973233 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:16:52.973244 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:16:52.973256 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:16:52.973268 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:16:52.973280 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:16:52.973291 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:16:52.973302 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:16:52.973314 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:16:52.973326 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:16:52.973338 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:16:52.973350 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:16:52.973362 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:16:52.973374 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:16:52.973385 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:16:52.973396 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:16:52.973407 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:16:52.973420 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:16:52.973431 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:16:52.973443 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:16:52.973454 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:16:52.973518 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:16:52.973530 kernel: fuse: init (API version 7.39)
Dec 13 01:16:52.973541 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:16:52.973552 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:16:52.973563 kernel: loop: module loaded
Dec 13 01:16:52.973573 kernel: ACPI: bus type drm_connector registered
Dec 13 01:16:52.973584 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:16:52.973595 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:16:52.973607 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:16:52.973620 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:16:52.973632 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:16:52.973643 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:16:52.973674 systemd-journald[1113]: Collecting audit messages is disabled.
Dec 13 01:16:52.973702 systemd[1]: Stopped verity-setup.service.
Dec 13 01:16:52.973714 systemd-journald[1113]: Journal started
Dec 13 01:16:52.973737 systemd-journald[1113]: Runtime Journal (/run/log/journal/f062045c02b0435e9093e5ce98ca94e0) is 5.9M, max 47.3M, 41.4M free.
Dec 13 01:16:52.757778 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:16:52.775316 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 01:16:52.775674 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:16:52.976338 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:16:52.976974 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:16:52.978161 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:16:52.979515 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:16:52.980610 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:16:52.981874 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:16:52.983080 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:16:52.984546 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:16:52.986038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:52.987610 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:16:52.987756 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:16:52.989172 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:16:52.989329 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:16:52.990897 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:16:52.991040 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:16:52.992553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:16:52.992697 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:16:52.994173 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:16:52.994330 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:16:52.995711 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:16:52.995851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:16:52.997348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:52.998819 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:16:53.000554 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:16:53.013577 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:16:53.026565 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:16:53.028864 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:16:53.030133 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:16:53.030180 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:16:53.032230 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:16:53.034583 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:16:53.036778 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:16:53.037893 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:16:53.039949 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:16:53.042082 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:16:53.043398 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:16:53.045667 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:16:53.046950 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:16:53.050660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:16:53.051796 systemd-journald[1113]: Time spent on flushing to /var/log/journal/f062045c02b0435e9093e5ce98ca94e0 is 12.949ms for 854 entries.
Dec 13 01:16:53.051796 systemd-journald[1113]: System Journal (/var/log/journal/f062045c02b0435e9093e5ce98ca94e0) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:16:53.070524 systemd-journald[1113]: Received client request to flush runtime journal.
Dec 13 01:16:53.056666 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:16:53.059278 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:16:53.062856 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:53.064322 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:16:53.065687 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:16:53.068495 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:16:53.070157 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:16:53.073002 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:16:53.082851 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:16:53.091634 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:16:53.094651 kernel: loop0: detected capacity change from 0 to 114432
Dec 13 01:16:53.095662 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:16:53.099512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:53.106494 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:16:53.115608 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:16:53.143796 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:16:53.154787 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:16:53.164662 kernel: loop1: detected capacity change from 0 to 194512
Dec 13 01:16:53.177446 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:16:53.178097 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:16:53.179666 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Dec 13 01:16:53.179685 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Dec 13 01:16:53.186537 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:53.201622 kernel: loop2: detected capacity change from 0 to 114328
Dec 13 01:16:53.242487 kernel: loop3: detected capacity change from 0 to 114432
Dec 13 01:16:53.247491 kernel: loop4: detected capacity change from 0 to 194512
Dec 13 01:16:53.257489 kernel: loop5: detected capacity change from 0 to 114328
Dec 13 01:16:53.267891 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 01:16:53.268528 (sd-merge)[1175]: Merged extensions into '/usr'.
Dec 13 01:16:53.273150 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:16:53.273168 systemd[1]: Reloading...
Dec 13 01:16:53.334492 zram_generator::config[1205]: No configuration found.
Dec 13 01:16:53.396484 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:16:53.428755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:16:53.464486 systemd[1]: Reloading finished in 190 ms.
Dec 13 01:16:53.491110 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:16:53.493595 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:16:53.507674 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:16:53.509583 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:16:53.524611 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:16:53.524626 systemd[1]: Reloading...
Dec 13 01:16:53.527791 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:16:53.528043 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:16:53.528835 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:16:53.529050 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Dec 13 01:16:53.529102 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Dec 13 01:16:53.531596 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:16:53.531609 systemd-tmpfiles[1237]: Skipping /boot
Dec 13 01:16:53.538422 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:16:53.538438 systemd-tmpfiles[1237]: Skipping /boot
Dec 13 01:16:53.572530 zram_generator::config[1264]: No configuration found.
Dec 13 01:16:53.651569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:16:53.687298 systemd[1]: Reloading finished in 162 ms.
Dec 13 01:16:53.701427 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:16:53.714916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:53.722663 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:16:53.725506 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:16:53.728706 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:16:53.731656 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:53.736090 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:53.742617 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:16:53.746113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:16:53.748692 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:16:53.753733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:16:53.756755 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:16:53.758663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:16:53.761042 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:16:53.765505 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:16:53.767685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:16:53.767916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:16:53.769725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:16:53.769850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:16:53.772084 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:16:53.772409 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:16:53.779770 systemd-udevd[1306]: Using default interface naming scheme 'v255'.
Dec 13 01:16:53.780943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:16:53.806250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:16:53.808484 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:16:53.811039 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:16:53.812143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:16:53.814702 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:16:53.818976 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:16:53.820735 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:16:53.822309 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:16:53.824198 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:16:53.825931 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:16:53.826056 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:16:53.828011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:16:53.828157 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:16:53.829889 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:16:53.830018 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:16:53.839666 augenrules[1348]: No rules
Dec 13 01:16:53.841706 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:16:53.846632 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:16:53.856516 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:16:53.860337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:16:53.869751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:16:53.874654 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:16:53.879549 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1337)
Dec 13 01:16:53.879690 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:16:53.882069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:16:53.883792 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:16:53.885451 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:16:53.889600 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:16:53.892300 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:16:53.892546 systemd-resolved[1305]: Positive Trust Anchors:
Dec 13 01:16:53.892566 systemd-resolved[1305]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:16:53.892599 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:16:53.892865 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:16:53.893086 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:16:53.896797 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1370)
Dec 13 01:16:53.896836 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1337)
Dec 13 01:16:53.898218 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:16:53.898379 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:16:53.903055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:16:53.903203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:16:53.905213 systemd-resolved[1305]: Defaulting to hostname 'linux'.
Dec 13 01:16:53.905999 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:16:53.906250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:16:53.909039 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:53.914788 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 13 01:16:53.922020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:53.923328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:16:53.923412 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:16:53.949781 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:16:53.961169 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:16:53.968541 systemd-networkd[1375]: lo: Link UP
Dec 13 01:16:53.968548 systemd-networkd[1375]: lo: Gained carrier
Dec 13 01:16:53.969240 systemd-networkd[1375]: Enumeration completed
Dec 13 01:16:53.969364 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:16:53.970257 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:53.970263 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:16:53.970725 systemd[1]: Reached target network.target - Network.
Dec 13 01:16:53.972647 systemd-networkd[1375]: eth0: Link UP
Dec 13 01:16:53.972738 systemd-networkd[1375]: eth0: Gained carrier
Dec 13 01:16:53.972795 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:53.972916 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:16:53.977149 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:16:53.978901 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:16:53.987600 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:16:53.988233 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection.
Dec 13 01:16:53.988356 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:16:54.399857 systemd-resolved[1305]: Clock change detected. Flushing caches.
Dec 13 01:16:54.399912 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 01:16:54.399954 systemd-timesyncd[1376]: Initial clock synchronization to Fri 2024-12-13 01:16:54.399814 UTC.
Dec 13 01:16:54.429104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:54.436233 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:16:54.439484 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:16:54.462759 lvm[1397]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:16:54.470867 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:54.500323 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:16:54.501863 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:16:54.502964 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:16:54.504076 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:16:54.505296 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:16:54.506669 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:16:54.507828 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:16:54.509038 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:16:54.510226 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:16:54.510264 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:16:54.511125 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:16:54.515252 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:16:54.517566 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:16:54.532847 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:16:54.535169 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:16:54.536759 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:16:54.537972 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:16:54.538965 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:16:54.539893 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:16:54.539926 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:16:54.540965 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:16:54.543046 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:16:54.544006 lvm[1405]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:16:54.546024 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:16:54.549023 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:16:54.550251 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:16:54.553038 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:16:54.557166 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:16:54.559612 jq[1408]: false
Dec 13 01:16:54.560015 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:16:54.565368 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:16:54.569164 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:16:54.571139 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:16:54.571960 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found loop3
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found loop4
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found loop5
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found vda
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found vda1
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found vda2
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found vda3
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found usr
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found vda4
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found vda6
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found vda7
Dec 13 01:16:54.572759 extend-filesystems[1409]: Found vda9
Dec 13 01:16:54.572759 extend-filesystems[1409]: Checking size of /dev/vda9
Dec 13 01:16:54.572591 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:16:54.573596 dbus-daemon[1407]: [system] SELinux support is enabled
Dec 13 01:16:54.575492 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:16:54.582461 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:16:54.587113 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:16:54.602171 jq[1423]: true
Dec 13 01:16:54.590272 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:16:54.590436 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:16:54.590694 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:16:54.590836 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:16:54.594807 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:16:54.594985 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:16:54.611892 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:16:54.611933 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:16:54.613917 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:16:54.614112 extend-filesystems[1409]: Resized partition /dev/vda9
Dec 13 01:16:54.613962 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:16:54.619323 jq[1430]: true
Dec 13 01:16:54.623906 tar[1428]: linux-arm64/helm
Dec 13 01:16:54.625921 extend-filesystems[1442]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:16:54.632974 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 01:16:54.633557 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:16:54.640504 update_engine[1421]: I20241213 01:16:54.640254  1421 main.cc:92] Flatcar Update Engine starting
Dec 13 01:16:54.641855 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1335)
Dec 13 01:16:54.645178 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:16:54.645303 update_engine[1421]: I20241213 01:16:54.645216  1421 update_check_scheduler.cc:74] Next update check in 3m11s
Dec 13 01:16:54.651928 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:16:54.654635 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 01:16:54.654927 systemd-logind[1418]: New seat seat0.
Dec 13 01:16:54.656383 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:16:54.682849 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 01:16:54.694990 extend-filesystems[1442]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 01:16:54.694990 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:16:54.694990 extend-filesystems[1442]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 01:16:54.701743 extend-filesystems[1409]: Resized filesystem in /dev/vda9
Dec 13 01:16:54.697084 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:16:54.697268 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:16:54.713834 bash[1461]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:16:54.719872 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:16:54.722412 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 01:16:54.728337 locksmithd[1447]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:16:54.764051 sshd_keygen[1429]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:16:54.783876 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:16:54.793281 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:16:54.799030 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:16:54.799510 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:16:54.802810 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:16:54.817686 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:16:54.830318 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:16:54.832811 containerd[1443]: time="2024-12-13T01:16:54.831324473Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:16:54.833299 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 01:16:54.834854 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:16:54.857032 containerd[1443]: time="2024-12-13T01:16:54.856950553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859014 containerd[1443]: time="2024-12-13T01:16:54.858511633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859014 containerd[1443]: time="2024-12-13T01:16:54.858726473Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:16:54.859014 containerd[1443]: time="2024-12-13T01:16:54.858760633Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:16:54.859014 containerd[1443]: time="2024-12-13T01:16:54.858945833Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:16:54.859014 containerd[1443]: time="2024-12-13T01:16:54.858971393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859169 containerd[1443]: time="2024-12-13T01:16:54.859059513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859169 containerd[1443]: time="2024-12-13T01:16:54.859083073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859392 containerd[1443]: time="2024-12-13T01:16:54.859365153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859392 containerd[1443]: time="2024-12-13T01:16:54.859390873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859453 containerd[1443]: time="2024-12-13T01:16:54.859413993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859453 containerd[1443]: time="2024-12-13T01:16:54.859424433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859528 containerd[1443]: time="2024-12-13T01:16:54.859507873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859748 containerd[1443]: time="2024-12-13T01:16:54.859718113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859867 containerd[1443]: time="2024-12-13T01:16:54.859849393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:16:54.859894 containerd[1443]: time="2024-12-13T01:16:54.859867433Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:16:54.859955 containerd[1443]: time="2024-12-13T01:16:54.859941873Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:16:54.859998 containerd[1443]: time="2024-12-13T01:16:54.859986433Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:16:54.867160 containerd[1443]: time="2024-12-13T01:16:54.867119033Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:16:54.867218 containerd[1443]: time="2024-12-13T01:16:54.867177033Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:16:54.867218 containerd[1443]: time="2024-12-13T01:16:54.867196913Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:16:54.867218 containerd[1443]: time="2024-12-13T01:16:54.867214833Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:16:54.867277 containerd[1443]: time="2024-12-13T01:16:54.867231353Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:16:54.867380 containerd[1443]: time="2024-12-13T01:16:54.867362833Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867611233Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867759073Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867777553Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867792473Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867809393Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867843233Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867858593Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867876593Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867891033Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867903633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867917153Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867928553Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867948193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869368 containerd[1443]: time="2024-12-13T01:16:54.867962393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.867974553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.867986793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868000153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868015033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868026393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868038553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868050153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868064433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868075353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868085873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868100073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868119793Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868145273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868158993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869649 containerd[1443]: time="2024-12-13T01:16:54.868172393Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:16:54.869910 containerd[1443]: time="2024-12-13T01:16:54.868293553Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:16:54.869910 containerd[1443]: time="2024-12-13T01:16:54.868319073Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:16:54.869910 containerd[1443]: time="2024-12-13T01:16:54.868330273Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:16:54.869910 containerd[1443]: time="2024-12-13T01:16:54.868343273Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:16:54.869910 containerd[1443]: time="2024-12-13T01:16:54.868352673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.869910 containerd[1443]: time="2024-12-13T01:16:54.868364393Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:16:54.869910 containerd[1443]: time="2024-12-13T01:16:54.868374073Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:16:54.869910 containerd[1443]: time="2024-12-13T01:16:54.868384713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:16:54.870044 containerd[1443]: time="2024-12-13T01:16:54.868674633Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:16:54.870044 containerd[1443]: time="2024-12-13T01:16:54.868737713Z" level=info msg="Connect containerd service"
Dec 13 01:16:54.870044 containerd[1443]: time="2024-12-13T01:16:54.868766953Z" level=info msg="using legacy CRI server"
Dec 13 01:16:54.870044 containerd[1443]: time="2024-12-13T01:16:54.868774193Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:16:54.870044 containerd[1443]: time="2024-12-13T01:16:54.868871873Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:16:54.870443 containerd[1443]: time="2024-12-13T01:16:54.870411153Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:16:54.870742 containerd[1443]: time="2024-12-13T01:16:54.870672513Z" level=info msg="Start subscribing containerd event"
Dec 13 01:16:54.870742 containerd[1443]: time="2024-12-13T01:16:54.870729033Z" level=info msg="Start recovering state"
Dec 13 01:16:54.870806 containerd[1443]: time="2024-12-13T01:16:54.870791233Z" level=info msg="Start event monitor"
Dec 13 01:16:54.870840 containerd[1443]: time="2024-12-13T01:16:54.870805353Z" level=info msg="Start snapshots syncer"
Dec 13 01:16:54.870840 containerd[1443]: time="2024-12-13T01:16:54.870814713Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:16:54.870840 containerd[1443]: time="2024-12-13T01:16:54.870835673Z" level=info msg="Start streaming server"
Dec 13 01:16:54.871173 containerd[1443]: time="2024-12-13T01:16:54.871147553Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:16:54.871284 containerd[1443]: time="2024-12-13T01:16:54.871268673Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:16:54.871447 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:16:54.873253 containerd[1443]: time="2024-12-13T01:16:54.873213113Z" level=info msg="containerd successfully booted in 0.042634s"
Dec 13 01:16:55.018884 tar[1428]: linux-arm64/LICENSE
Dec 13 01:16:55.018983 tar[1428]: linux-arm64/README.md
Dec 13 01:16:55.031168 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:16:56.073010 systemd-networkd[1375]: eth0: Gained IPv6LL
Dec 13 01:16:56.075372 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:16:56.077430 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:16:56.093106 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 01:16:56.095695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:16:56.097931 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:16:56.113145 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 01:16:56.113386 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 01:16:56.115025 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:16:56.127938 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:16:56.642087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:16:56.643789 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:16:56.645560 systemd[1]: Startup finished in 552ms (kernel) + 4.684s (initrd) + 3.879s (userspace) = 9.115s.
Dec 13 01:16:56.646469 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:16:57.161595 kubelet[1521]: E1213 01:16:57.161465    1521 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:16:57.164172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:16:57.164324 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:17:00.762499 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:17:00.763555 systemd[1]: Started sshd@0-10.0.0.9:22-10.0.0.1:42164.service - OpenSSH per-connection server daemon (10.0.0.1:42164).
Dec 13 01:17:00.807475 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 42164 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:00.809027 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:00.816985 systemd-logind[1418]: New session 1 of user core.
Dec 13 01:17:00.817915 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:17:00.824025 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:17:00.832220 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:17:00.835128 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:17:00.840179 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:17:00.909483 systemd[1539]: Queued start job for default target default.target.
Dec 13 01:17:00.917742 systemd[1539]: Created slice app.slice - User Application Slice.
Dec 13 01:17:00.917787 systemd[1539]: Reached target paths.target - Paths.
Dec 13 01:17:00.917800 systemd[1539]: Reached target timers.target - Timers.
Dec 13 01:17:00.918931 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:17:00.927670 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:17:00.927718 systemd[1539]: Reached target sockets.target - Sockets.
Dec 13 01:17:00.927729 systemd[1539]: Reached target basic.target - Basic System.
Dec 13 01:17:00.927761 systemd[1539]: Reached target default.target - Main User Target.
Dec 13 01:17:00.927788 systemd[1539]: Startup finished in 83ms.
Dec 13 01:17:00.928043 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:17:00.929232 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:17:00.993667 systemd[1]: Started sshd@1-10.0.0.9:22-10.0.0.1:42180.service - OpenSSH per-connection server daemon (10.0.0.1:42180).
Dec 13 01:17:01.031783 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 42180 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:01.032916 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:01.036668 systemd-logind[1418]: New session 2 of user core.
Dec 13 01:17:01.042976 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:17:01.093769 sshd[1550]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:01.105135 systemd[1]: sshd@1-10.0.0.9:22-10.0.0.1:42180.service: Deactivated successfully.
Dec 13 01:17:01.106467 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:17:01.108761 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:17:01.109772 systemd[1]: Started sshd@2-10.0.0.9:22-10.0.0.1:42192.service - OpenSSH per-connection server daemon (10.0.0.1:42192).
Dec 13 01:17:01.110612 systemd-logind[1418]: Removed session 2.
Dec 13 01:17:01.143591 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 42192 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:01.144665 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:01.148172 systemd-logind[1418]: New session 3 of user core.
Dec 13 01:17:01.158941 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:17:01.206105 sshd[1557]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:01.219945 systemd[1]: sshd@2-10.0.0.9:22-10.0.0.1:42192.service: Deactivated successfully.
Dec 13 01:17:01.221232 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:17:01.222422 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:17:01.223410 systemd[1]: Started sshd@3-10.0.0.9:22-10.0.0.1:42208.service - OpenSSH per-connection server daemon (10.0.0.1:42208).
Dec 13 01:17:01.224110 systemd-logind[1418]: Removed session 3.
Dec 13 01:17:01.256845 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 42208 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:01.257954 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:01.261370 systemd-logind[1418]: New session 4 of user core.
Dec 13 01:17:01.279959 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:17:01.332577 sshd[1564]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:01.350326 systemd[1]: sshd@3-10.0.0.9:22-10.0.0.1:42208.service: Deactivated successfully.
Dec 13 01:17:01.352013 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:17:01.353305 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:17:01.354354 systemd[1]: Started sshd@4-10.0.0.9:22-10.0.0.1:42216.service - OpenSSH per-connection server daemon (10.0.0.1:42216).
Dec 13 01:17:01.355027 systemd-logind[1418]: Removed session 4.
Dec 13 01:17:01.389034 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 42216 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:01.390279 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:01.393880 systemd-logind[1418]: New session 5 of user core.
Dec 13 01:17:01.399960 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:17:01.458305 sudo[1574]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:17:01.458592 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:17:01.753030 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:17:01.753191 (dockerd)[1594]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:17:02.004401 dockerd[1594]: time="2024-12-13T01:17:02.004276313Z" level=info msg="Starting up"
Dec 13 01:17:02.273134 dockerd[1594]: time="2024-12-13T01:17:02.273037193Z" level=info msg="Loading containers: start."
Dec 13 01:17:02.369849 kernel: Initializing XFRM netlink socket
Dec 13 01:17:02.432504 systemd-networkd[1375]: docker0: Link UP
Dec 13 01:17:02.446931 dockerd[1594]: time="2024-12-13T01:17:02.446839233Z" level=info msg="Loading containers: done."
Dec 13 01:17:02.456827 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2933097410-merged.mount: Deactivated successfully.
Dec 13 01:17:02.459532 dockerd[1594]: time="2024-12-13T01:17:02.459488513Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:17:02.459628 dockerd[1594]: time="2024-12-13T01:17:02.459602073Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:17:02.459751 dockerd[1594]: time="2024-12-13T01:17:02.459728353Z" level=info msg="Daemon has completed initialization"
Dec 13 01:17:02.484150 dockerd[1594]: time="2024-12-13T01:17:02.484025873Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:17:02.484327 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:17:03.236017 containerd[1443]: time="2024-12-13T01:17:03.235969113Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 01:17:03.887459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705593595.mount: Deactivated successfully.
Dec 13 01:17:04.782396 containerd[1443]: time="2024-12-13T01:17:04.782343633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:04.783914 containerd[1443]: time="2024-12-13T01:17:04.783882153Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252"
Dec 13 01:17:04.784912 containerd[1443]: time="2024-12-13T01:17:04.784874473Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:04.787748 containerd[1443]: time="2024-12-13T01:17:04.787688273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:04.788964 containerd[1443]: time="2024-12-13T01:17:04.788930753Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 1.55291584s"
Dec 13 01:17:04.789491 containerd[1443]: time="2024-12-13T01:17:04.789035833Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\""
Dec 13 01:17:04.807213 containerd[1443]: time="2024-12-13T01:17:04.807178633Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:17:06.138223 containerd[1443]: time="2024-12-13T01:17:06.138175393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:06.139168 containerd[1443]: time="2024-12-13T01:17:06.138914473Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299"
Dec 13 01:17:06.139654 containerd[1443]: time="2024-12-13T01:17:06.139629433Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:06.142590 containerd[1443]: time="2024-12-13T01:17:06.142555113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:06.143744 containerd[1443]: time="2024-12-13T01:17:06.143707353Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.33648964s"
Dec 13 01:17:06.143744 containerd[1443]: time="2024-12-13T01:17:06.143740793Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\""
Dec 13 01:17:06.162149 containerd[1443]: time="2024-12-13T01:17:06.162067673Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:17:06.958346 containerd[1443]: time="2024-12-13T01:17:06.958298033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:06.958883 containerd[1443]: time="2024-12-13T01:17:06.958847193Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642"
Dec 13 01:17:06.959847 containerd[1443]: time="2024-12-13T01:17:06.959788113Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:06.962936 containerd[1443]: time="2024-12-13T01:17:06.962884753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:06.963994 containerd[1443]: time="2024-12-13T01:17:06.963955073Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 801.80312ms"
Dec 13 01:17:06.963994 containerd[1443]: time="2024-12-13T01:17:06.963991753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\""
Dec 13 01:17:06.981988 containerd[1443]: time="2024-12-13T01:17:06.981923193Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:17:07.329085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:17:07.340010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:17:07.427471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:17:07.431170 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:17:07.476456 kubelet[1836]: E1213 01:17:07.476398    1836 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:17:07.479971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:17:07.480113 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:17:07.947670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355655043.mount: Deactivated successfully.
Dec 13 01:17:08.313746 containerd[1443]: time="2024-12-13T01:17:08.313632193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:08.314757 containerd[1443]: time="2024-12-13T01:17:08.314689273Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979"
Dec 13 01:17:08.315511 containerd[1443]: time="2024-12-13T01:17:08.315438953Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:08.317744 containerd[1443]: time="2024-12-13T01:17:08.317685873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:08.319540 containerd[1443]: time="2024-12-13T01:17:08.319297713Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.3373338s"
Dec 13 01:17:08.319540 containerd[1443]: time="2024-12-13T01:17:08.319339873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\""
Dec 13 01:17:08.337906 containerd[1443]: time="2024-12-13T01:17:08.337869473Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:17:09.008198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747741663.mount: Deactivated successfully.
Dec 13 01:17:09.588657 containerd[1443]: time="2024-12-13T01:17:09.588604553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:09.593634 containerd[1443]: time="2024-12-13T01:17:09.593362353Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Dec 13 01:17:09.594733 containerd[1443]: time="2024-12-13T01:17:09.594700473Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:09.598854 containerd[1443]: time="2024-12-13T01:17:09.598633233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:09.599447 containerd[1443]: time="2024-12-13T01:17:09.599392473Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.26130956s"
Dec 13 01:17:09.599447 containerd[1443]: time="2024-12-13T01:17:09.599434513Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Dec 13 01:17:09.617221 containerd[1443]: time="2024-12-13T01:17:09.617176633Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:17:10.049899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037292138.mount: Deactivated successfully.
Dec 13 01:17:10.059284 containerd[1443]: time="2024-12-13T01:17:10.059234393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:10.059962 containerd[1443]: time="2024-12-13T01:17:10.059890153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Dec 13 01:17:10.060957 containerd[1443]: time="2024-12-13T01:17:10.060892993Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:10.063071 containerd[1443]: time="2024-12-13T01:17:10.063027913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:10.063911 containerd[1443]: time="2024-12-13T01:17:10.063781593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 446.5664ms"
Dec 13 01:17:10.063911 containerd[1443]: time="2024-12-13T01:17:10.063813353Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Dec 13 01:17:10.082143 containerd[1443]: time="2024-12-13T01:17:10.082051553Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:17:10.618056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039089740.mount: Deactivated successfully.
Dec 13 01:17:11.819883 containerd[1443]: time="2024-12-13T01:17:11.819833793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:11.820751 containerd[1443]: time="2024-12-13T01:17:11.820614393Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Dec 13 01:17:11.822856 containerd[1443]: time="2024-12-13T01:17:11.822188713Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:11.824998 containerd[1443]: time="2024-12-13T01:17:11.824960593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:11.826402 containerd[1443]: time="2024-12-13T01:17:11.826303393Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.74421104s"
Dec 13 01:17:11.826402 containerd[1443]: time="2024-12-13T01:17:11.826350433Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Dec 13 01:17:16.646111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:17:16.657205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:17:16.670628 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit session-5.scope)...
Dec 13 01:17:16.670644 systemd[1]: Reloading...
Dec 13 01:17:16.731863 zram_generator::config[2085]: No configuration found.
Dec 13 01:17:16.841108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:17:16.892561 systemd[1]: Reloading finished in 221 ms.
Dec 13 01:17:16.934425 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:17:16.934572 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:17:16.934899 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:17:16.936662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:17:17.023534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:17:17.027769 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:17:17.066071 kubelet[2132]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:17:17.066071 kubelet[2132]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:17:17.066071 kubelet[2132]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:17:17.066406 kubelet[2132]: I1213 01:17:17.066114    2132 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:17:17.865860 kubelet[2132]: I1213 01:17:17.865673    2132 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:17:17.865860 kubelet[2132]: I1213 01:17:17.865705    2132 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:17:17.865996 kubelet[2132]: I1213 01:17:17.865926    2132 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:17:17.890211 kubelet[2132]: I1213 01:17:17.890160    2132 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:17:17.890211 kubelet[2132]: E1213 01:17:17.890134    2132 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:17.900257 kubelet[2132]: I1213 01:17:17.900231    2132 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 01:17:17.901136 kubelet[2132]: I1213 01:17:17.901107    2132 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:17:17.901323 kubelet[2132]: I1213 01:17:17.901298    2132 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:17:17.901323 kubelet[2132]: I1213 01:17:17.901324    2132 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:17:17.901431 kubelet[2132]: I1213 01:17:17.901333    2132 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:17:17.901455 kubelet[2132]: I1213 01:17:17.901443    2132 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:17:17.905455 kubelet[2132]: I1213 01:17:17.905297    2132 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:17:17.905455 kubelet[2132]: I1213 01:17:17.905325    2132 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:17:17.905455 kubelet[2132]: I1213 01:17:17.905347    2132 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:17:17.905455 kubelet[2132]: I1213 01:17:17.905361    2132 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:17:17.906185 kubelet[2132]: W1213 01:17:17.906108    2132 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:17.906185 kubelet[2132]: E1213 01:17:17.906163    2132 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:17.907847 kubelet[2132]: W1213 01:17:17.907703    2132 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:17.907847 kubelet[2132]: E1213 01:17:17.907756    2132 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:17.908045 kubelet[2132]: I1213 01:17:17.907983    2132 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:17:17.908624 kubelet[2132]: I1213 01:17:17.908609    2132 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:17:17.908749 kubelet[2132]: W1213 01:17:17.908738    2132 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:17:17.909656 kubelet[2132]: I1213 01:17:17.909628    2132 server.go:1256] "Started kubelet"
Dec 13 01:17:17.910855 kubelet[2132]: I1213 01:17:17.910712    2132 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:17:17.910998 kubelet[2132]: I1213 01:17:17.910978    2132 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:17:17.911050 kubelet[2132]: I1213 01:17:17.911040    2132 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:17:17.911770 kubelet[2132]: I1213 01:17:17.911735    2132 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:17:17.911987 kubelet[2132]: I1213 01:17:17.911935    2132 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:17:17.918083 kubelet[2132]: E1213 01:17:17.917916    2132 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:17:17.918083 kubelet[2132]: I1213 01:17:17.918006    2132 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:17:17.918667 kubelet[2132]: I1213 01:17:17.918639    2132 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:17:17.918881 kubelet[2132]: I1213 01:17:17.918723    2132 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:17:17.919147 kubelet[2132]: W1213 01:17:17.919101    2132 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:17.919194 kubelet[2132]: E1213 01:17:17.919153    2132 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:17.919919 kubelet[2132]: E1213 01:17:17.919487    2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="200ms"
Dec 13 01:17:17.919919 kubelet[2132]: E1213 01:17:17.919502    2132 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181097afd06e4ae1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:17:17.909600993 +0000 UTC m=+0.878599881,LastTimestamp:2024-12-13 01:17:17.909600993 +0000 UTC m=+0.878599881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:17:17.919919 kubelet[2132]: I1213 01:17:17.919907    2132 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:17:17.920082 kubelet[2132]: I1213 01:17:17.919994    2132 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:17:17.920580 kubelet[2132]: E1213 01:17:17.920562    2132 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:17:17.921261 kubelet[2132]: I1213 01:17:17.921209    2132 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:17:17.928099 kubelet[2132]: I1213 01:17:17.927044    2132 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:17:17.928176 kubelet[2132]: I1213 01:17:17.928158    2132 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:17:17.928201 kubelet[2132]: I1213 01:17:17.928183    2132 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:17:17.928228 kubelet[2132]: I1213 01:17:17.928206    2132 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:17:17.928391 kubelet[2132]: E1213 01:17:17.928370    2132 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:17:17.928782 kubelet[2132]: W1213 01:17:17.928759    2132 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:17.928844 kubelet[2132]: E1213 01:17:17.928788    2132 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:17.933473 kubelet[2132]: I1213 01:17:17.933454    2132 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:17:17.933473 kubelet[2132]: I1213 01:17:17.933471    2132 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:17:17.933473 kubelet[2132]: I1213 01:17:17.933497    2132 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:17:18.000296 kubelet[2132]: I1213 01:17:18.000252    2132 policy_none.go:49] "None policy: Start"
Dec 13 01:17:18.001181 kubelet[2132]: I1213 01:17:18.001128    2132 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:17:18.001260 kubelet[2132]: I1213 01:17:18.001198    2132 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:17:18.006333 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:17:18.019334 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:17:18.019915 kubelet[2132]: I1213 01:17:18.019875    2132 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:17:18.020281 kubelet[2132]: E1213 01:17:18.020251    2132 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost"
Dec 13 01:17:18.022125 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:17:18.028782 kubelet[2132]: E1213 01:17:18.028753    2132 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 01:17:18.032850 kubelet[2132]: I1213 01:17:18.032551    2132 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:17:18.032921 kubelet[2132]: I1213 01:17:18.032853    2132 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:17:18.034201 kubelet[2132]: E1213 01:17:18.034173    2132 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 13 01:17:18.120688 kubelet[2132]: E1213 01:17:18.120556    2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="400ms"
Dec 13 01:17:18.221872 kubelet[2132]: I1213 01:17:18.221755    2132 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:17:18.222153 kubelet[2132]: E1213 01:17:18.222122    2132 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost"
Dec 13 01:17:18.229230 kubelet[2132]: I1213 01:17:18.229194    2132 topology_manager.go:215] "Topology Admit Handler" podUID="c63910c03e8e1adc8e951711c367ca0f" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 01:17:18.230254 kubelet[2132]: I1213 01:17:18.230230    2132 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 01:17:18.231158 kubelet[2132]: I1213 01:17:18.231019    2132 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 01:17:18.237012 systemd[1]: Created slice kubepods-burstable-podc63910c03e8e1adc8e951711c367ca0f.slice - libcontainer container kubepods-burstable-podc63910c03e8e1adc8e951711c367ca0f.slice.
Dec 13 01:17:18.249931 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice.
Dec 13 01:17:18.263316 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice.
Dec 13 01:17:18.320138 kubelet[2132]: I1213 01:17:18.319972    2132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c63910c03e8e1adc8e951711c367ca0f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c63910c03e8e1adc8e951711c367ca0f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:17:18.320138 kubelet[2132]: I1213 01:17:18.320021    2132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:17:18.320138 kubelet[2132]: I1213 01:17:18.320051    2132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:17:18.320138 kubelet[2132]: I1213 01:17:18.320072    2132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:17:18.320138 kubelet[2132]: I1213 01:17:18.320119    2132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:17:18.320416 kubelet[2132]: I1213 01:17:18.320159    2132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c63910c03e8e1adc8e951711c367ca0f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c63910c03e8e1adc8e951711c367ca0f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:17:18.320416 kubelet[2132]: I1213 01:17:18.320183    2132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c63910c03e8e1adc8e951711c367ca0f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c63910c03e8e1adc8e951711c367ca0f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:17:18.320416 kubelet[2132]: I1213 01:17:18.320203    2132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:17:18.320416 kubelet[2132]: I1213 01:17:18.320223    2132 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:17:18.521140 kubelet[2132]: E1213 01:17:18.521036    2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="800ms"
Dec 13 01:17:18.551008 kubelet[2132]: E1213 01:17:18.550969    2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:18.551758 containerd[1443]: time="2024-12-13T01:17:18.551697273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c63910c03e8e1adc8e951711c367ca0f,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:18.562113 kubelet[2132]: E1213 01:17:18.561885    2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:18.563138 containerd[1443]: time="2024-12-13T01:17:18.562988793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:18.565251 kubelet[2132]: E1213 01:17:18.565223    2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:18.565590 containerd[1443]: time="2024-12-13T01:17:18.565560753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:18.626881 kubelet[2132]: I1213 01:17:18.626837    2132 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:17:18.627174 kubelet[2132]: E1213 01:17:18.627145    2132 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost"
Dec 13 01:17:18.838457 kubelet[2132]: W1213 01:17:18.838397    2132 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:18.838585 kubelet[2132]: E1213 01:17:18.838482    2132 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:18.901165 kubelet[2132]: W1213 01:17:18.901099    2132 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:18.901165 kubelet[2132]: E1213 01:17:18.901161    2132 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:18.998527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1620749955.mount: Deactivated successfully.
Dec 13 01:17:19.003517 containerd[1443]: time="2024-12-13T01:17:19.003457153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:17:19.004298 containerd[1443]: time="2024-12-13T01:17:19.004268273Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Dec 13 01:17:19.004980 containerd[1443]: time="2024-12-13T01:17:19.004925353Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:17:19.006231 containerd[1443]: time="2024-12-13T01:17:19.006090993Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:17:19.006339 containerd[1443]: time="2024-12-13T01:17:19.006307713Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:17:19.007565 containerd[1443]: time="2024-12-13T01:17:19.007533753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:17:19.009744 containerd[1443]: time="2024-12-13T01:17:19.009706393Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:17:19.010868 containerd[1443]: time="2024-12-13T01:17:19.010771233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 458.9738ms"
Dec 13 01:17:19.012186 containerd[1443]: time="2024-12-13T01:17:19.012148233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 449.08616ms"
Dec 13 01:17:19.012938 containerd[1443]: time="2024-12-13T01:17:19.012625473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:17:19.017841 containerd[1443]: time="2024-12-13T01:17:19.017796273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 452.17384ms"
Dec 13 01:17:19.211433 kubelet[2132]: W1213 01:17:19.206500    2132 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:19.211433 kubelet[2132]: E1213 01:17:19.211368    2132 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:19.229622 containerd[1443]: time="2024-12-13T01:17:19.229142433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:19.229622 containerd[1443]: time="2024-12-13T01:17:19.229204553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:19.229622 containerd[1443]: time="2024-12-13T01:17:19.229230953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:19.229622 containerd[1443]: time="2024-12-13T01:17:19.229327073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:19.229622 containerd[1443]: time="2024-12-13T01:17:19.229519953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:19.229622 containerd[1443]: time="2024-12-13T01:17:19.229582153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:19.229622 containerd[1443]: time="2024-12-13T01:17:19.229599913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:19.229980 containerd[1443]: time="2024-12-13T01:17:19.229693313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:19.230560 containerd[1443]: time="2024-12-13T01:17:19.230460433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:19.230647 containerd[1443]: time="2024-12-13T01:17:19.230615393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:19.230647 containerd[1443]: time="2024-12-13T01:17:19.230633953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:19.230812 containerd[1443]: time="2024-12-13T01:17:19.230780673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:19.248998 systemd[1]: Started cri-containerd-0d056f2921b0570bc1ad24811070a869083a2d24114907e10fecd833b6cc7cc1.scope - libcontainer container 0d056f2921b0570bc1ad24811070a869083a2d24114907e10fecd833b6cc7cc1.
Dec 13 01:17:19.250394 systemd[1]: Started cri-containerd-28a2161793c14e85f33f06cee8022ccc88e8a72a35b71395dde2e944be830bc3.scope - libcontainer container 28a2161793c14e85f33f06cee8022ccc88e8a72a35b71395dde2e944be830bc3.
Dec 13 01:17:19.253651 systemd[1]: Started cri-containerd-cb6d00d9eee5f6caf498f3ba71336d51f4f6eb2ea0e95ace7a54d3495e533731.scope - libcontainer container cb6d00d9eee5f6caf498f3ba71336d51f4f6eb2ea0e95ace7a54d3495e533731.
Dec 13 01:17:19.286540 containerd[1443]: time="2024-12-13T01:17:19.286465353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d056f2921b0570bc1ad24811070a869083a2d24114907e10fecd833b6cc7cc1\""
Dec 13 01:17:19.287129 containerd[1443]: time="2024-12-13T01:17:19.286536273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"28a2161793c14e85f33f06cee8022ccc88e8a72a35b71395dde2e944be830bc3\""
Dec 13 01:17:19.287796 containerd[1443]: time="2024-12-13T01:17:19.287756273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c63910c03e8e1adc8e951711c367ca0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb6d00d9eee5f6caf498f3ba71336d51f4f6eb2ea0e95ace7a54d3495e533731\""
Dec 13 01:17:19.289190 kubelet[2132]: E1213 01:17:19.289166    2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:19.289438 kubelet[2132]: E1213 01:17:19.289299    2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:19.289438 kubelet[2132]: E1213 01:17:19.289335    2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:19.293101 containerd[1443]: time="2024-12-13T01:17:19.292906673Z" level=info msg="CreateContainer within sandbox \"0d056f2921b0570bc1ad24811070a869083a2d24114907e10fecd833b6cc7cc1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:17:19.293101 containerd[1443]: time="2024-12-13T01:17:19.293015993Z" level=info msg="CreateContainer within sandbox \"cb6d00d9eee5f6caf498f3ba71336d51f4f6eb2ea0e95ace7a54d3495e533731\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:17:19.293194 containerd[1443]: time="2024-12-13T01:17:19.292908513Z" level=info msg="CreateContainer within sandbox \"28a2161793c14e85f33f06cee8022ccc88e8a72a35b71395dde2e944be830bc3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:17:19.309419 containerd[1443]: time="2024-12-13T01:17:19.309375793Z" level=info msg="CreateContainer within sandbox \"0d056f2921b0570bc1ad24811070a869083a2d24114907e10fecd833b6cc7cc1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"56eb6adbda8187454ccfbbc618d6a911a6f9784331ed878ccbad85db0f92a57d\""
Dec 13 01:17:19.310376 containerd[1443]: time="2024-12-13T01:17:19.310347473Z" level=info msg="StartContainer for \"56eb6adbda8187454ccfbbc618d6a911a6f9784331ed878ccbad85db0f92a57d\""
Dec 13 01:17:19.312084 containerd[1443]: time="2024-12-13T01:17:19.312041473Z" level=info msg="CreateContainer within sandbox \"28a2161793c14e85f33f06cee8022ccc88e8a72a35b71395dde2e944be830bc3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"63973fc3c41928f4ab7ade39e7f21f34993e476a2e56ce5fab251c9b68846578\""
Dec 13 01:17:19.312469 containerd[1443]: time="2024-12-13T01:17:19.312444793Z" level=info msg="StartContainer for \"63973fc3c41928f4ab7ade39e7f21f34993e476a2e56ce5fab251c9b68846578\""
Dec 13 01:17:19.313814 containerd[1443]: time="2024-12-13T01:17:19.313764833Z" level=info msg="CreateContainer within sandbox \"cb6d00d9eee5f6caf498f3ba71336d51f4f6eb2ea0e95ace7a54d3495e533731\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2f31e5ae919f4885598c0376f4e1395e9c64df4071408903e1d74d08ae7e8498\""
Dec 13 01:17:19.314172 containerd[1443]: time="2024-12-13T01:17:19.314145753Z" level=info msg="StartContainer for \"2f31e5ae919f4885598c0376f4e1395e9c64df4071408903e1d74d08ae7e8498\""
Dec 13 01:17:19.322251 kubelet[2132]: E1213 01:17:19.322214    2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="1.6s"
Dec 13 01:17:19.336013 systemd[1]: Started cri-containerd-56eb6adbda8187454ccfbbc618d6a911a6f9784331ed878ccbad85db0f92a57d.scope - libcontainer container 56eb6adbda8187454ccfbbc618d6a911a6f9784331ed878ccbad85db0f92a57d.
Dec 13 01:17:19.340771 systemd[1]: Started cri-containerd-2f31e5ae919f4885598c0376f4e1395e9c64df4071408903e1d74d08ae7e8498.scope - libcontainer container 2f31e5ae919f4885598c0376f4e1395e9c64df4071408903e1d74d08ae7e8498.
Dec 13 01:17:19.342152 systemd[1]: Started cri-containerd-63973fc3c41928f4ab7ade39e7f21f34993e476a2e56ce5fab251c9b68846578.scope - libcontainer container 63973fc3c41928f4ab7ade39e7f21f34993e476a2e56ce5fab251c9b68846578.
Dec 13 01:17:19.372707 containerd[1443]: time="2024-12-13T01:17:19.372542633Z" level=info msg="StartContainer for \"56eb6adbda8187454ccfbbc618d6a911a6f9784331ed878ccbad85db0f92a57d\" returns successfully"
Dec 13 01:17:19.396470 containerd[1443]: time="2024-12-13T01:17:19.393106353Z" level=info msg="StartContainer for \"63973fc3c41928f4ab7ade39e7f21f34993e476a2e56ce5fab251c9b68846578\" returns successfully"
Dec 13 01:17:19.396470 containerd[1443]: time="2024-12-13T01:17:19.393114113Z" level=info msg="StartContainer for \"2f31e5ae919f4885598c0376f4e1395e9c64df4071408903e1d74d08ae7e8498\" returns successfully"
Dec 13 01:17:19.432128 kubelet[2132]: I1213 01:17:19.430713    2132 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:17:19.432128 kubelet[2132]: E1213 01:17:19.431147    2132 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost"
Dec 13 01:17:19.445831 kubelet[2132]: W1213 01:17:19.445754    2132 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:19.445831 kubelet[2132]: E1213 01:17:19.445808    2132 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused
Dec 13 01:17:19.935430 kubelet[2132]: E1213 01:17:19.935393    2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:19.938637 kubelet[2132]: E1213 01:17:19.937083    2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:19.938637 kubelet[2132]: E1213 01:17:19.938498    2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:20.940798 kubelet[2132]: E1213 01:17:20.940760    2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:21.033419 kubelet[2132]: I1213 01:17:21.033233    2132 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:17:21.050516 kubelet[2132]: E1213 01:17:21.050478    2132 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 13 01:17:21.082747 kubelet[2132]: E1213 01:17:21.082716    2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:21.151899 kubelet[2132]: I1213 01:17:21.151738    2132 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:17:21.166128 kubelet[2132]: E1213 01:17:21.166095    2132 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:17:21.267018 kubelet[2132]: E1213 01:17:21.266405    2132 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:17:21.367020 kubelet[2132]: E1213 01:17:21.366978    2132 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:17:21.468015 kubelet[2132]: E1213 01:17:21.467966    2132 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:17:21.568566 kubelet[2132]: E1213 01:17:21.568453    2132 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:17:21.669041 kubelet[2132]: E1213 01:17:21.668992    2132 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:17:21.769689 kubelet[2132]: E1213 01:17:21.769648    2132 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:17:21.908204 kubelet[2132]: I1213 01:17:21.908117    2132 apiserver.go:52] "Watching apiserver"
Dec 13 01:17:21.919136 kubelet[2132]: I1213 01:17:21.919082    2132 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:17:23.663085 systemd[1]: Reloading requested from client PID 2408 ('systemctl') (unit session-5.scope)...
Dec 13 01:17:23.663382 systemd[1]: Reloading...
Dec 13 01:17:23.725850 zram_generator::config[2450]: No configuration found.
Dec 13 01:17:23.830503 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:17:23.895013 systemd[1]: Reloading finished in 231 ms.
Dec 13 01:17:23.935646 kubelet[2132]: I1213 01:17:23.935516    2132 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:17:23.935675 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:17:23.947808 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:17:23.948956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:17:23.949015 systemd[1]: kubelet.service: Consumed 1.244s CPU time, 113.8M memory peak, 0B memory swap peak.
Dec 13 01:17:23.959133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:17:24.049365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:17:24.055247 (kubelet)[2489]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:17:24.098859 kubelet[2489]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:17:24.098859 kubelet[2489]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:17:24.098859 kubelet[2489]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:17:24.098859 kubelet[2489]: I1213 01:17:24.098398    2489 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:17:24.105355 kubelet[2489]: I1213 01:17:24.105254    2489 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:17:24.105355 kubelet[2489]: I1213 01:17:24.105285    2489 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:17:24.105532 kubelet[2489]: I1213 01:17:24.105477    2489 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:17:24.107441 kubelet[2489]: I1213 01:17:24.107090    2489 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 01:17:24.109257 kubelet[2489]: I1213 01:17:24.108860    2489 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:17:24.118489 kubelet[2489]: I1213 01:17:24.118451    2489 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 01:17:24.119216 kubelet[2489]: I1213 01:17:24.118835    2489 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:17:24.119216 kubelet[2489]: I1213 01:17:24.119009    2489 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:17:24.119216 kubelet[2489]: I1213 01:17:24.119033    2489 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:17:24.119216 kubelet[2489]: I1213 01:17:24.119042    2489 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:17:24.119216 kubelet[2489]: I1213 01:17:24.119076    2489 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:17:24.119451 kubelet[2489]: I1213 01:17:24.119435    2489 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:17:24.119982 kubelet[2489]: I1213 01:17:24.119964    2489 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:17:24.120103 kubelet[2489]: I1213 01:17:24.120089    2489 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:17:24.120252 kubelet[2489]: I1213 01:17:24.120240    2489 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:17:24.121615 kubelet[2489]: I1213 01:17:24.121593    2489 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:17:24.121790 kubelet[2489]: I1213 01:17:24.121770    2489 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:17:24.122205 kubelet[2489]: I1213 01:17:24.122181    2489 server.go:1256] "Started kubelet"
Dec 13 01:17:24.124706 kubelet[2489]: I1213 01:17:24.123298    2489 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:17:24.124706 kubelet[2489]: I1213 01:17:24.123522    2489 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:17:24.124706 kubelet[2489]: I1213 01:17:24.123574    2489 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:17:24.124706 kubelet[2489]: I1213 01:17:24.124300    2489 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:17:24.126902 kubelet[2489]: E1213 01:17:24.126866    2489 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:17:24.129523 kubelet[2489]: I1213 01:17:24.129479    2489 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:17:24.135003 kubelet[2489]: I1213 01:17:24.133960    2489 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:17:24.135003 kubelet[2489]: I1213 01:17:24.134068    2489 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:17:24.135003 kubelet[2489]: I1213 01:17:24.134207    2489 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:17:24.135003 kubelet[2489]: E1213 01:17:24.134426    2489 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:17:24.135737 kubelet[2489]: I1213 01:17:24.135688    2489 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:17:24.141454 kubelet[2489]: I1213 01:17:24.141403    2489 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:17:24.141454 kubelet[2489]: I1213 01:17:24.141425    2489 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:17:24.152287 kubelet[2489]: I1213 01:17:24.152244    2489 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:17:24.153816 kubelet[2489]: I1213 01:17:24.153715    2489 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:17:24.153816 kubelet[2489]: I1213 01:17:24.153741    2489 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:17:24.153816 kubelet[2489]: I1213 01:17:24.153771    2489 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:17:24.153950 kubelet[2489]: E1213 01:17:24.153860    2489 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:17:24.179383 kubelet[2489]: I1213 01:17:24.179356    2489 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:17:24.179383 kubelet[2489]: I1213 01:17:24.179380    2489 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:17:24.179535 kubelet[2489]: I1213 01:17:24.179397    2489 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:17:24.179579 kubelet[2489]: I1213 01:17:24.179562    2489 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 01:17:24.179602 kubelet[2489]: I1213 01:17:24.179587    2489 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 01:17:24.179602 kubelet[2489]: I1213 01:17:24.179594    2489 policy_none.go:49] "None policy: Start"
Dec 13 01:17:24.180206 kubelet[2489]: I1213 01:17:24.180186    2489 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:17:24.180263 kubelet[2489]: I1213 01:17:24.180217    2489 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:17:24.180397 kubelet[2489]: I1213 01:17:24.180379    2489 state_mem.go:75] "Updated machine memory state"
Dec 13 01:17:24.184285 kubelet[2489]: I1213 01:17:24.184259    2489 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:17:24.184507 kubelet[2489]: I1213 01:17:24.184489    2489 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:17:24.239056 kubelet[2489]: I1213 01:17:24.238954    2489 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:17:24.246109 kubelet[2489]: I1213 01:17:24.246004    2489 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Dec 13 01:17:24.246109 kubelet[2489]: I1213 01:17:24.246094    2489 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:17:24.254097 kubelet[2489]: I1213 01:17:24.254062    2489 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 01:17:24.254539 kubelet[2489]: I1213 01:17:24.254504    2489 topology_manager.go:215] "Topology Admit Handler" podUID="c63910c03e8e1adc8e951711c367ca0f" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 01:17:24.254605 kubelet[2489]: I1213 01:17:24.254576    2489 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 01:17:24.435590 kubelet[2489]: I1213 01:17:24.435248    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:17:24.435590 kubelet[2489]: I1213 01:17:24.435294    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c63910c03e8e1adc8e951711c367ca0f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c63910c03e8e1adc8e951711c367ca0f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:17:24.435590 kubelet[2489]: I1213 01:17:24.435325    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c63910c03e8e1adc8e951711c367ca0f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c63910c03e8e1adc8e951711c367ca0f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:17:24.435590 kubelet[2489]: I1213 01:17:24.435348    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:17:24.435590 kubelet[2489]: I1213 01:17:24.435365    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c63910c03e8e1adc8e951711c367ca0f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c63910c03e8e1adc8e951711c367ca0f\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:17:24.436175 kubelet[2489]: I1213 01:17:24.435386    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:17:24.436175 kubelet[2489]: I1213 01:17:24.435407    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:17:24.436175 kubelet[2489]: I1213 01:17:24.435425    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:17:24.436175 kubelet[2489]: I1213 01:17:24.435444    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:17:24.565002 kubelet[2489]: E1213 01:17:24.564658    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:24.565002 kubelet[2489]: E1213 01:17:24.564737    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:24.565897 kubelet[2489]: E1213 01:17:24.565875    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:25.055471 sudo[1574]: pam_unix(sudo:session): session closed for user root
Dec 13 01:17:25.057127 sshd[1571]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:25.061230 systemd[1]: sshd@4-10.0.0.9:22-10.0.0.1:42216.service: Deactivated successfully.
Dec 13 01:17:25.063061 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:17:25.063296 systemd[1]: session-5.scope: Consumed 6.220s CPU time, 189.8M memory peak, 0B memory swap peak.
Dec 13 01:17:25.063732 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:17:25.065107 systemd-logind[1418]: Removed session 5.
Dec 13 01:17:25.121164 kubelet[2489]: I1213 01:17:25.121095    2489 apiserver.go:52] "Watching apiserver"
Dec 13 01:17:25.134808 kubelet[2489]: I1213 01:17:25.134752    2489 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:17:25.167676 kubelet[2489]: E1213 01:17:25.167642    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:25.168002 kubelet[2489]: E1213 01:17:25.167971    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:25.173903 kubelet[2489]: E1213 01:17:25.173379    2489 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 13 01:17:25.173903 kubelet[2489]: E1213 01:17:25.173679    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:25.185703 kubelet[2489]: I1213 01:17:25.185669    2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.185625433 podStartE2EDuration="1.185625433s" podCreationTimestamp="2024-12-13 01:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:25.185520513 +0000 UTC m=+1.126856361" watchObservedRunningTime="2024-12-13 01:17:25.185625433 +0000 UTC m=+1.126961241"
Dec 13 01:17:25.204964 kubelet[2489]: I1213 01:17:25.204628    2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.204497313 podStartE2EDuration="1.204497313s" podCreationTimestamp="2024-12-13 01:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:25.195062673 +0000 UTC m=+1.136398481" watchObservedRunningTime="2024-12-13 01:17:25.204497313 +0000 UTC m=+1.145833121"
Dec 13 01:17:25.204964 kubelet[2489]: I1213 01:17:25.204813    2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.204789433 podStartE2EDuration="1.204789433s" podCreationTimestamp="2024-12-13 01:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:25.204289793 +0000 UTC m=+1.145625681" watchObservedRunningTime="2024-12-13 01:17:25.204789433 +0000 UTC m=+1.146125241"
Dec 13 01:17:26.169522 kubelet[2489]: E1213 01:17:26.168687    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:26.169522 kubelet[2489]: E1213 01:17:26.169002    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:28.145219 kubelet[2489]: E1213 01:17:28.145177    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:29.052205 kubelet[2489]: E1213 01:17:29.052128    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:29.173404 kubelet[2489]: E1213 01:17:29.173178    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:31.465140 kubelet[2489]: E1213 01:17:31.465088    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:32.177231 kubelet[2489]: E1213 01:17:32.177205    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:33.178785 kubelet[2489]: E1213 01:17:33.178750    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:38.153271 kubelet[2489]: E1213 01:17:38.152941    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:39.538346 update_engine[1421]: I20241213 01:17:39.538264  1421 update_attempter.cc:509] Updating boot flags...
Dec 13 01:17:39.564871 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2564)
Dec 13 01:17:39.583890 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2569)
Dec 13 01:17:39.802627 kubelet[2489]: I1213 01:17:39.802393    2489 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 01:17:39.802986 containerd[1443]: time="2024-12-13T01:17:39.802852780Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:17:39.803251 kubelet[2489]: I1213 01:17:39.803158    2489 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 01:17:40.802060 kubelet[2489]: I1213 01:17:40.799611    2489 topology_manager.go:215] "Topology Admit Handler" podUID="b1319b12-0e73-4550-add4-a5aa5ea676f7" podNamespace="kube-system" podName="kube-proxy-q4brp"
Dec 13 01:17:40.812476 systemd[1]: Created slice kubepods-besteffort-podb1319b12_0e73_4550_add4_a5aa5ea676f7.slice - libcontainer container kubepods-besteffort-podb1319b12_0e73_4550_add4_a5aa5ea676f7.slice.
Dec 13 01:17:40.815076 kubelet[2489]: I1213 01:17:40.815030    2489 topology_manager.go:215] "Topology Admit Handler" podUID="06875b7a-85b5-446d-8448-2a8548340df4" podNamespace="kube-flannel" podName="kube-flannel-ds-hqjsm"
Dec 13 01:17:40.828933 systemd[1]: Created slice kubepods-burstable-pod06875b7a_85b5_446d_8448_2a8548340df4.slice - libcontainer container kubepods-burstable-pod06875b7a_85b5_446d_8448_2a8548340df4.slice.
Dec 13 01:17:40.849368 kubelet[2489]: I1213 01:17:40.849185    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b1319b12-0e73-4550-add4-a5aa5ea676f7-kube-proxy\") pod \"kube-proxy-q4brp\" (UID: \"b1319b12-0e73-4550-add4-a5aa5ea676f7\") " pod="kube-system/kube-proxy-q4brp"
Dec 13 01:17:40.849368 kubelet[2489]: I1213 01:17:40.849235    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/06875b7a-85b5-446d-8448-2a8548340df4-run\") pod \"kube-flannel-ds-hqjsm\" (UID: \"06875b7a-85b5-446d-8448-2a8548340df4\") " pod="kube-flannel/kube-flannel-ds-hqjsm"
Dec 13 01:17:40.849368 kubelet[2489]: I1213 01:17:40.849259    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/06875b7a-85b5-446d-8448-2a8548340df4-flannel-cfg\") pod \"kube-flannel-ds-hqjsm\" (UID: \"06875b7a-85b5-446d-8448-2a8548340df4\") " pod="kube-flannel/kube-flannel-ds-hqjsm"
Dec 13 01:17:40.849368 kubelet[2489]: I1213 01:17:40.849279    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1319b12-0e73-4550-add4-a5aa5ea676f7-xtables-lock\") pod \"kube-proxy-q4brp\" (UID: \"b1319b12-0e73-4550-add4-a5aa5ea676f7\") " pod="kube-system/kube-proxy-q4brp"
Dec 13 01:17:40.849368 kubelet[2489]: I1213 01:17:40.849298    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1319b12-0e73-4550-add4-a5aa5ea676f7-lib-modules\") pod \"kube-proxy-q4brp\" (UID: \"b1319b12-0e73-4550-add4-a5aa5ea676f7\") " pod="kube-system/kube-proxy-q4brp"
Dec 13 01:17:40.849609 kubelet[2489]: I1213 01:17:40.849322    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgjm4\" (UniqueName: \"kubernetes.io/projected/b1319b12-0e73-4550-add4-a5aa5ea676f7-kube-api-access-kgjm4\") pod \"kube-proxy-q4brp\" (UID: \"b1319b12-0e73-4550-add4-a5aa5ea676f7\") " pod="kube-system/kube-proxy-q4brp"
Dec 13 01:17:40.849609 kubelet[2489]: I1213 01:17:40.849343    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/06875b7a-85b5-446d-8448-2a8548340df4-cni-plugin\") pod \"kube-flannel-ds-hqjsm\" (UID: \"06875b7a-85b5-446d-8448-2a8548340df4\") " pod="kube-flannel/kube-flannel-ds-hqjsm"
Dec 13 01:17:40.849609 kubelet[2489]: I1213 01:17:40.849360    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/06875b7a-85b5-446d-8448-2a8548340df4-cni\") pod \"kube-flannel-ds-hqjsm\" (UID: \"06875b7a-85b5-446d-8448-2a8548340df4\") " pod="kube-flannel/kube-flannel-ds-hqjsm"
Dec 13 01:17:40.849609 kubelet[2489]: I1213 01:17:40.849378    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06875b7a-85b5-446d-8448-2a8548340df4-xtables-lock\") pod \"kube-flannel-ds-hqjsm\" (UID: \"06875b7a-85b5-446d-8448-2a8548340df4\") " pod="kube-flannel/kube-flannel-ds-hqjsm"
Dec 13 01:17:40.849609 kubelet[2489]: I1213 01:17:40.849418    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h24b\" (UniqueName: \"kubernetes.io/projected/06875b7a-85b5-446d-8448-2a8548340df4-kube-api-access-5h24b\") pod \"kube-flannel-ds-hqjsm\" (UID: \"06875b7a-85b5-446d-8448-2a8548340df4\") " pod="kube-flannel/kube-flannel-ds-hqjsm"
Dec 13 01:17:41.122280 kubelet[2489]: E1213 01:17:41.121902    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:41.122514 containerd[1443]: time="2024-12-13T01:17:41.122454901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q4brp,Uid:b1319b12-0e73-4550-add4-a5aa5ea676f7,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:41.137405 kubelet[2489]: E1213 01:17:41.136900    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:41.137642 containerd[1443]: time="2024-12-13T01:17:41.137569193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hqjsm,Uid:06875b7a-85b5-446d-8448-2a8548340df4,Namespace:kube-flannel,Attempt:0,}"
Dec 13 01:17:41.146914 containerd[1443]: time="2024-12-13T01:17:41.143835639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:41.147054 containerd[1443]: time="2024-12-13T01:17:41.146892481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:41.147054 containerd[1443]: time="2024-12-13T01:17:41.146906281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:41.147054 containerd[1443]: time="2024-12-13T01:17:41.146990761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:41.161393 containerd[1443]: time="2024-12-13T01:17:41.161296413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:41.161393 containerd[1443]: time="2024-12-13T01:17:41.161360973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:41.161588 containerd[1443]: time="2024-12-13T01:17:41.161379733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:41.161588 containerd[1443]: time="2024-12-13T01:17:41.161464653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:41.167039 systemd[1]: Started cri-containerd-690e33b13d2a999c7ecd3d413b9d5359da4086b162d23497bd6da39da948d412.scope - libcontainer container 690e33b13d2a999c7ecd3d413b9d5359da4086b162d23497bd6da39da948d412.
Dec 13 01:17:41.183380 systemd[1]: Started cri-containerd-863b5905de991da0c6bc73cf43ab0b59831fbe12d3f97dbb0545331664ad96d0.scope - libcontainer container 863b5905de991da0c6bc73cf43ab0b59831fbe12d3f97dbb0545331664ad96d0.
Dec 13 01:17:41.192421 containerd[1443]: time="2024-12-13T01:17:41.192313598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q4brp,Uid:b1319b12-0e73-4550-add4-a5aa5ea676f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"690e33b13d2a999c7ecd3d413b9d5359da4086b162d23497bd6da39da948d412\""
Dec 13 01:17:41.193000 kubelet[2489]: E1213 01:17:41.192980    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:41.197568 containerd[1443]: time="2024-12-13T01:17:41.197416443Z" level=info msg="CreateContainer within sandbox \"690e33b13d2a999c7ecd3d413b9d5359da4086b162d23497bd6da39da948d412\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:17:41.218814 containerd[1443]: time="2024-12-13T01:17:41.218753580Z" level=info msg="CreateContainer within sandbox \"690e33b13d2a999c7ecd3d413b9d5359da4086b162d23497bd6da39da948d412\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4bff7c00716e0a224204853713c6bd7f3014c873a6745704c7b0911f67a14efc\""
Dec 13 01:17:41.219878 containerd[1443]: time="2024-12-13T01:17:41.219757701Z" level=info msg="StartContainer for \"4bff7c00716e0a224204853713c6bd7f3014c873a6745704c7b0911f67a14efc\""
Dec 13 01:17:41.220125 containerd[1443]: time="2024-12-13T01:17:41.219978021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hqjsm,Uid:06875b7a-85b5-446d-8448-2a8548340df4,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"863b5905de991da0c6bc73cf43ab0b59831fbe12d3f97dbb0545331664ad96d0\""
Dec 13 01:17:41.220742 kubelet[2489]: E1213 01:17:41.220572    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:41.221921 containerd[1443]: time="2024-12-13T01:17:41.221889583Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Dec 13 01:17:41.245985 systemd[1]: Started cri-containerd-4bff7c00716e0a224204853713c6bd7f3014c873a6745704c7b0911f67a14efc.scope - libcontainer container 4bff7c00716e0a224204853713c6bd7f3014c873a6745704c7b0911f67a14efc.
Dec 13 01:17:41.268290 containerd[1443]: time="2024-12-13T01:17:41.268248701Z" level=info msg="StartContainer for \"4bff7c00716e0a224204853713c6bd7f3014c873a6745704c7b0911f67a14efc\" returns successfully"
Dec 13 01:17:42.199800 kubelet[2489]: E1213 01:17:42.199774    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:42.286952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2981862953.mount: Deactivated successfully.
Dec 13 01:17:42.323898 containerd[1443]: time="2024-12-13T01:17:42.323852111Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:42.325246 containerd[1443]: time="2024-12-13T01:17:42.325215072Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531"
Dec 13 01:17:42.326173 containerd[1443]: time="2024-12-13T01:17:42.326137993Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:42.328349 containerd[1443]: time="2024-12-13T01:17:42.328311914Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:42.329330 containerd[1443]: time="2024-12-13T01:17:42.329305835Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.107382252s"
Dec 13 01:17:42.329385 containerd[1443]: time="2024-12-13T01:17:42.329336235Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Dec 13 01:17:42.331319 containerd[1443]: time="2024-12-13T01:17:42.331201757Z" level=info msg="CreateContainer within sandbox \"863b5905de991da0c6bc73cf43ab0b59831fbe12d3f97dbb0545331664ad96d0\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Dec 13 01:17:42.340870 containerd[1443]: time="2024-12-13T01:17:42.340830124Z" level=info msg="CreateContainer within sandbox \"863b5905de991da0c6bc73cf43ab0b59831fbe12d3f97dbb0545331664ad96d0\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"a040a59c6426d401f24386079ec2034ae060cb3f16afaf459db5e2ae8799ea5f\""
Dec 13 01:17:42.342259 containerd[1443]: time="2024-12-13T01:17:42.341471844Z" level=info msg="StartContainer for \"a040a59c6426d401f24386079ec2034ae060cb3f16afaf459db5e2ae8799ea5f\""
Dec 13 01:17:42.342047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076016837.mount: Deactivated successfully.
Dec 13 01:17:42.368038 systemd[1]: Started cri-containerd-a040a59c6426d401f24386079ec2034ae060cb3f16afaf459db5e2ae8799ea5f.scope - libcontainer container a040a59c6426d401f24386079ec2034ae060cb3f16afaf459db5e2ae8799ea5f.
Dec 13 01:17:42.393865 containerd[1443]: time="2024-12-13T01:17:42.393752005Z" level=info msg="StartContainer for \"a040a59c6426d401f24386079ec2034ae060cb3f16afaf459db5e2ae8799ea5f\" returns successfully"
Dec 13 01:17:42.397479 systemd[1]: cri-containerd-a040a59c6426d401f24386079ec2034ae060cb3f16afaf459db5e2ae8799ea5f.scope: Deactivated successfully.
Dec 13 01:17:42.435888 containerd[1443]: time="2024-12-13T01:17:42.435807397Z" level=info msg="shim disconnected" id=a040a59c6426d401f24386079ec2034ae060cb3f16afaf459db5e2ae8799ea5f namespace=k8s.io
Dec 13 01:17:42.435888 containerd[1443]: time="2024-12-13T01:17:42.435874797Z" level=warning msg="cleaning up after shim disconnected" id=a040a59c6426d401f24386079ec2034ae060cb3f16afaf459db5e2ae8799ea5f namespace=k8s.io
Dec 13 01:17:42.435888 containerd[1443]: time="2024-12-13T01:17:42.435884077Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:43.202130 kubelet[2489]: E1213 01:17:43.201980    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:43.202702 containerd[1443]: time="2024-12-13T01:17:43.202671698Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Dec 13 01:17:43.214244 kubelet[2489]: I1213 01:17:43.214211    2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-q4brp" podStartSLOduration=3.214175386 podStartE2EDuration="3.214175386s" podCreationTimestamp="2024-12-13 01:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:42.220703191 +0000 UTC m=+18.162039039" watchObservedRunningTime="2024-12-13 01:17:43.214175386 +0000 UTC m=+19.155511194"
Dec 13 01:17:44.285014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403689776.mount: Deactivated successfully.
Dec 13 01:17:45.800597 containerd[1443]: time="2024-12-13T01:17:45.800447257Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:45.809245 containerd[1443]: time="2024-12-13T01:17:45.809209543Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261"
Dec 13 01:17:45.811077 containerd[1443]: time="2024-12-13T01:17:45.811001384Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:45.814900 containerd[1443]: time="2024-12-13T01:17:45.814362786Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:45.815211 containerd[1443]: time="2024-12-13T01:17:45.815093307Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.612380289s"
Dec 13 01:17:45.815211 containerd[1443]: time="2024-12-13T01:17:45.815127027Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Dec 13 01:17:45.826245 containerd[1443]: time="2024-12-13T01:17:45.826198034Z" level=info msg="CreateContainer within sandbox \"863b5905de991da0c6bc73cf43ab0b59831fbe12d3f97dbb0545331664ad96d0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 01:17:45.835343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016468012.mount: Deactivated successfully.
Dec 13 01:17:45.837965 containerd[1443]: time="2024-12-13T01:17:45.837666321Z" level=info msg="CreateContainer within sandbox \"863b5905de991da0c6bc73cf43ab0b59831fbe12d3f97dbb0545331664ad96d0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"46d04ddbdfbdc5fc22b7489ad1e31147cd246090886656f5598be01d2cc5ed0f\""
Dec 13 01:17:45.838334 containerd[1443]: time="2024-12-13T01:17:45.838308361Z" level=info msg="StartContainer for \"46d04ddbdfbdc5fc22b7489ad1e31147cd246090886656f5598be01d2cc5ed0f\""
Dec 13 01:17:45.868005 systemd[1]: Started cri-containerd-46d04ddbdfbdc5fc22b7489ad1e31147cd246090886656f5598be01d2cc5ed0f.scope - libcontainer container 46d04ddbdfbdc5fc22b7489ad1e31147cd246090886656f5598be01d2cc5ed0f.
Dec 13 01:17:45.894014 containerd[1443]: time="2024-12-13T01:17:45.893971157Z" level=info msg="StartContainer for \"46d04ddbdfbdc5fc22b7489ad1e31147cd246090886656f5598be01d2cc5ed0f\" returns successfully"
Dec 13 01:17:45.900732 systemd[1]: cri-containerd-46d04ddbdfbdc5fc22b7489ad1e31147cd246090886656f5598be01d2cc5ed0f.scope: Deactivated successfully.
Dec 13 01:17:45.914265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46d04ddbdfbdc5fc22b7489ad1e31147cd246090886656f5598be01d2cc5ed0f-rootfs.mount: Deactivated successfully.
Dec 13 01:17:45.955132 kubelet[2489]: I1213 01:17:45.955078    2489 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:17:46.013894 kubelet[2489]: I1213 01:17:46.013611    2489 topology_manager.go:215] "Topology Admit Handler" podUID="069c8f31-5412-4e16-989b-c874c0f9e909" podNamespace="kube-system" podName="coredns-76f75df574-kvjc4"
Dec 13 01:17:46.013894 kubelet[2489]: I1213 01:17:46.013801    2489 topology_manager.go:215] "Topology Admit Handler" podUID="916a3560-3096-47d7-a0a9-3db5ba2723b6" podNamespace="kube-system" podName="coredns-76f75df574-dfbnw"
Dec 13 01:17:46.017163 containerd[1443]: time="2024-12-13T01:17:46.016863314Z" level=info msg="shim disconnected" id=46d04ddbdfbdc5fc22b7489ad1e31147cd246090886656f5598be01d2cc5ed0f namespace=k8s.io
Dec 13 01:17:46.017163 containerd[1443]: time="2024-12-13T01:17:46.016982994Z" level=warning msg="cleaning up after shim disconnected" id=46d04ddbdfbdc5fc22b7489ad1e31147cd246090886656f5598be01d2cc5ed0f namespace=k8s.io
Dec 13 01:17:46.017163 containerd[1443]: time="2024-12-13T01:17:46.016997554Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:46.024242 systemd[1]: Created slice kubepods-burstable-pod069c8f31_5412_4e16_989b_c874c0f9e909.slice - libcontainer container kubepods-burstable-pod069c8f31_5412_4e16_989b_c874c0f9e909.slice.
Dec 13 01:17:46.032527 systemd[1]: Created slice kubepods-burstable-pod916a3560_3096_47d7_a0a9_3db5ba2723b6.slice - libcontainer container kubepods-burstable-pod916a3560_3096_47d7_a0a9_3db5ba2723b6.slice.
Dec 13 01:17:46.087177 kubelet[2489]: I1213 01:17:46.087140    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/916a3560-3096-47d7-a0a9-3db5ba2723b6-config-volume\") pod \"coredns-76f75df574-dfbnw\" (UID: \"916a3560-3096-47d7-a0a9-3db5ba2723b6\") " pod="kube-system/coredns-76f75df574-dfbnw"
Dec 13 01:17:46.087303 kubelet[2489]: I1213 01:17:46.087186    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/069c8f31-5412-4e16-989b-c874c0f9e909-config-volume\") pod \"coredns-76f75df574-kvjc4\" (UID: \"069c8f31-5412-4e16-989b-c874c0f9e909\") " pod="kube-system/coredns-76f75df574-kvjc4"
Dec 13 01:17:46.087303 kubelet[2489]: I1213 01:17:46.087208    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7v8\" (UniqueName: \"kubernetes.io/projected/916a3560-3096-47d7-a0a9-3db5ba2723b6-kube-api-access-ss7v8\") pod \"coredns-76f75df574-dfbnw\" (UID: \"916a3560-3096-47d7-a0a9-3db5ba2723b6\") " pod="kube-system/coredns-76f75df574-dfbnw"
Dec 13 01:17:46.087303 kubelet[2489]: I1213 01:17:46.087227    2489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqnrc\" (UniqueName: \"kubernetes.io/projected/069c8f31-5412-4e16-989b-c874c0f9e909-kube-api-access-tqnrc\") pod \"coredns-76f75df574-kvjc4\" (UID: \"069c8f31-5412-4e16-989b-c874c0f9e909\") " pod="kube-system/coredns-76f75df574-kvjc4"
Dec 13 01:17:46.215907 kubelet[2489]: E1213 01:17:46.215705    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:46.219213 containerd[1443]: time="2024-12-13T01:17:46.219174514Z" level=info msg="CreateContainer within sandbox \"863b5905de991da0c6bc73cf43ab0b59831fbe12d3f97dbb0545331664ad96d0\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Dec 13 01:17:46.229717 containerd[1443]: time="2024-12-13T01:17:46.229668161Z" level=info msg="CreateContainer within sandbox \"863b5905de991da0c6bc73cf43ab0b59831fbe12d3f97dbb0545331664ad96d0\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"74acce3c4840f51884ad5e41f06d1dfd87a821c6288899cf02b04c650fdc55ea\""
Dec 13 01:17:46.231383 containerd[1443]: time="2024-12-13T01:17:46.230390241Z" level=info msg="StartContainer for \"74acce3c4840f51884ad5e41f06d1dfd87a821c6288899cf02b04c650fdc55ea\""
Dec 13 01:17:46.252989 systemd[1]: Started cri-containerd-74acce3c4840f51884ad5e41f06d1dfd87a821c6288899cf02b04c650fdc55ea.scope - libcontainer container 74acce3c4840f51884ad5e41f06d1dfd87a821c6288899cf02b04c650fdc55ea.
Dec 13 01:17:46.274484 containerd[1443]: time="2024-12-13T01:17:46.274443067Z" level=info msg="StartContainer for \"74acce3c4840f51884ad5e41f06d1dfd87a821c6288899cf02b04c650fdc55ea\" returns successfully"
Dec 13 01:17:46.329436 kubelet[2489]: E1213 01:17:46.329372    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:46.330955 containerd[1443]: time="2024-12-13T01:17:46.330907941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvjc4,Uid:069c8f31-5412-4e16-989b-c874c0f9e909,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:46.335953 kubelet[2489]: E1213 01:17:46.335924    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:46.336856 containerd[1443]: time="2024-12-13T01:17:46.336804624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dfbnw,Uid:916a3560-3096-47d7-a0a9-3db5ba2723b6,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:46.400453 containerd[1443]: time="2024-12-13T01:17:46.400334822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvjc4,Uid:069c8f31-5412-4e16-989b-c874c0f9e909,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b27c7f6cfe0c9f461db82cec4d27b6aaa02421a6fae4cee391b7fc5020d4922\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:17:46.401257 kubelet[2489]: E1213 01:17:46.401234    2489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b27c7f6cfe0c9f461db82cec4d27b6aaa02421a6fae4cee391b7fc5020d4922\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:17:46.401328 kubelet[2489]: E1213 01:17:46.401297    2489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b27c7f6cfe0c9f461db82cec4d27b6aaa02421a6fae4cee391b7fc5020d4922\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-kvjc4"
Dec 13 01:17:46.401328 kubelet[2489]: E1213 01:17:46.401325    2489 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b27c7f6cfe0c9f461db82cec4d27b6aaa02421a6fae4cee391b7fc5020d4922\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-kvjc4"
Dec 13 01:17:46.401450 kubelet[2489]: E1213 01:17:46.401432    2489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kvjc4_kube-system(069c8f31-5412-4e16-989b-c874c0f9e909)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kvjc4_kube-system(069c8f31-5412-4e16-989b-c874c0f9e909)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b27c7f6cfe0c9f461db82cec4d27b6aaa02421a6fae4cee391b7fc5020d4922\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-kvjc4" podUID="069c8f31-5412-4e16-989b-c874c0f9e909"
Dec 13 01:17:46.401815 containerd[1443]: time="2024-12-13T01:17:46.401670343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dfbnw,Uid:916a3560-3096-47d7-a0a9-3db5ba2723b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c1053508387a39fcd94c949db2475ca5f710d500f843a3dc51b166234d31821\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:17:46.401903 kubelet[2489]: E1213 01:17:46.401841    2489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1053508387a39fcd94c949db2475ca5f710d500f843a3dc51b166234d31821\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:17:46.401903 kubelet[2489]: E1213 01:17:46.401878    2489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1053508387a39fcd94c949db2475ca5f710d500f843a3dc51b166234d31821\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-dfbnw"
Dec 13 01:17:46.401903 kubelet[2489]: E1213 01:17:46.401894    2489 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1053508387a39fcd94c949db2475ca5f710d500f843a3dc51b166234d31821\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-dfbnw"
Dec 13 01:17:46.401971 kubelet[2489]: E1213 01:17:46.401930    2489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dfbnw_kube-system(916a3560-3096-47d7-a0a9-3db5ba2723b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dfbnw_kube-system(916a3560-3096-47d7-a0a9-3db5ba2723b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c1053508387a39fcd94c949db2475ca5f710d500f843a3dc51b166234d31821\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-dfbnw" podUID="916a3560-3096-47d7-a0a9-3db5ba2723b6"
Dec 13 01:17:47.219929 kubelet[2489]: E1213 01:17:47.219113    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:47.232859 kubelet[2489]: I1213 01:17:47.232446    2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-hqjsm" podStartSLOduration=2.638363063 podStartE2EDuration="7.232396508s" podCreationTimestamp="2024-12-13 01:17:40 +0000 UTC" firstStartedPulling="2024-12-13 01:17:41.221293262 +0000 UTC m=+17.162629070" lastFinishedPulling="2024-12-13 01:17:45.815326707 +0000 UTC m=+21.756662515" observedRunningTime="2024-12-13 01:17:47.232259308 +0000 UTC m=+23.173595116" watchObservedRunningTime="2024-12-13 01:17:47.232396508 +0000 UTC m=+23.173732316"
Dec 13 01:17:47.390363 systemd-networkd[1375]: flannel.1: Link UP
Dec 13 01:17:47.390371 systemd-networkd[1375]: flannel.1: Gained carrier
Dec 13 01:17:48.220840 kubelet[2489]: E1213 01:17:48.220688    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:48.745011 systemd-networkd[1375]: flannel.1: Gained IPv6LL
Dec 13 01:17:52.505509 systemd[1]: Started sshd@5-10.0.0.9:22-10.0.0.1:44716.service - OpenSSH per-connection server daemon (10.0.0.1:44716).
Dec 13 01:17:52.541793 sshd[3163]: Accepted publickey for core from 10.0.0.1 port 44716 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:52.543305 sshd[3163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:52.548364 systemd-logind[1418]: New session 6 of user core.
Dec 13 01:17:52.563077 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:17:52.686529 sshd[3163]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:52.689364 systemd[1]: sshd@5-10.0.0.9:22-10.0.0.1:44716.service: Deactivated successfully.
Dec 13 01:17:52.691203 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:17:52.692874 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:17:52.694241 systemd-logind[1418]: Removed session 6.
Dec 13 01:17:57.700692 systemd[1]: Started sshd@6-10.0.0.9:22-10.0.0.1:44732.service - OpenSSH per-connection server daemon (10.0.0.1:44732).
Dec 13 01:17:57.736139 sshd[3199]: Accepted publickey for core from 10.0.0.1 port 44732 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:57.737483 sshd[3199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:57.741153 systemd-logind[1418]: New session 7 of user core.
Dec 13 01:17:57.753005 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:17:57.868335 sshd[3199]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:57.871786 systemd[1]: sshd@6-10.0.0.9:22-10.0.0.1:44732.service: Deactivated successfully.
Dec 13 01:17:57.873377 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:17:57.874965 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:17:57.875921 systemd-logind[1418]: Removed session 7.
Dec 13 01:18:00.155276 kubelet[2489]: E1213 01:18:00.154971    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:00.155659 containerd[1443]: time="2024-12-13T01:18:00.155603721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvjc4,Uid:069c8f31-5412-4e16-989b-c874c0f9e909,Namespace:kube-system,Attempt:0,}"
Dec 13 01:18:00.179892 systemd-networkd[1375]: cni0: Link UP
Dec 13 01:18:00.179899 systemd-networkd[1375]: cni0: Gained carrier
Dec 13 01:18:00.182637 systemd-networkd[1375]: cni0: Lost carrier
Dec 13 01:18:00.187905 systemd-networkd[1375]: veth07326b3a: Link UP
Dec 13 01:18:00.192115 kernel: cni0: port 1(veth07326b3a) entered blocking state
Dec 13 01:18:00.192169 kernel: cni0: port 1(veth07326b3a) entered disabled state
Dec 13 01:18:00.192187 kernel: veth07326b3a: entered allmulticast mode
Dec 13 01:18:00.193091 kernel: veth07326b3a: entered promiscuous mode
Dec 13 01:18:00.197087 kernel: cni0: port 1(veth07326b3a) entered blocking state
Dec 13 01:18:00.197139 kernel: cni0: port 1(veth07326b3a) entered forwarding state
Dec 13 01:18:00.202250 kernel: cni0: port 1(veth07326b3a) entered disabled state
Dec 13 01:18:00.219847 kernel: cni0: port 1(veth07326b3a) entered blocking state
Dec 13 01:18:00.219954 kernel: cni0: port 1(veth07326b3a) entered forwarding state
Dec 13 01:18:00.220662 systemd-networkd[1375]: veth07326b3a: Gained carrier
Dec 13 01:18:00.220933 systemd-networkd[1375]: cni0: Gained carrier
Dec 13 01:18:00.222528 containerd[1443]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"}
Dec 13 01:18:00.222528 containerd[1443]: delegateAdd: netconf sent to delegate plugin:
Dec 13 01:18:00.240388 containerd[1443]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Dec 13 01:18:00.240388 containerd[1443]: time="2024-12-13T01:18:00.240295822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:18:00.240495 containerd[1443]: time="2024-12-13T01:18:00.240409462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:18:00.240495 containerd[1443]: time="2024-12-13T01:18:00.240436542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:00.240582 containerd[1443]: time="2024-12-13T01:18:00.240541742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:00.261974 systemd[1]: Started cri-containerd-c7fdd740ebf1c6b99de71fc561f9d185a32f25afc41e6ab7b70ba95d144cd685.scope - libcontainer container c7fdd740ebf1c6b99de71fc561f9d185a32f25afc41e6ab7b70ba95d144cd685.
Dec 13 01:18:00.270641 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:18:00.287656 containerd[1443]: time="2024-12-13T01:18:00.287615273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kvjc4,Uid:069c8f31-5412-4e16-989b-c874c0f9e909,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7fdd740ebf1c6b99de71fc561f9d185a32f25afc41e6ab7b70ba95d144cd685\""
Dec 13 01:18:00.288892 kubelet[2489]: E1213 01:18:00.288523    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:00.291835 containerd[1443]: time="2024-12-13T01:18:00.291784674Z" level=info msg="CreateContainer within sandbox \"c7fdd740ebf1c6b99de71fc561f9d185a32f25afc41e6ab7b70ba95d144cd685\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:18:00.305787 containerd[1443]: time="2024-12-13T01:18:00.305734157Z" level=info msg="CreateContainer within sandbox \"c7fdd740ebf1c6b99de71fc561f9d185a32f25afc41e6ab7b70ba95d144cd685\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"abcf214c457516ca46a5f4ff8ffa9a5570223a30d01b9e5fff6a79fc97a246bf\""
Dec 13 01:18:00.306613 containerd[1443]: time="2024-12-13T01:18:00.306526197Z" level=info msg="StartContainer for \"abcf214c457516ca46a5f4ff8ffa9a5570223a30d01b9e5fff6a79fc97a246bf\""
Dec 13 01:18:00.330989 systemd[1]: Started cri-containerd-abcf214c457516ca46a5f4ff8ffa9a5570223a30d01b9e5fff6a79fc97a246bf.scope - libcontainer container abcf214c457516ca46a5f4ff8ffa9a5570223a30d01b9e5fff6a79fc97a246bf.
Dec 13 01:18:00.352678 containerd[1443]: time="2024-12-13T01:18:00.352631249Z" level=info msg="StartContainer for \"abcf214c457516ca46a5f4ff8ffa9a5570223a30d01b9e5fff6a79fc97a246bf\" returns successfully"
Dec 13 01:18:01.154753 kubelet[2489]: E1213 01:18:01.154718    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:01.155223 containerd[1443]: time="2024-12-13T01:18:01.155127240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dfbnw,Uid:916a3560-3096-47d7-a0a9-3db5ba2723b6,Namespace:kube-system,Attempt:0,}"
Dec 13 01:18:01.163866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896279966.mount: Deactivated successfully.
Dec 13 01:18:01.179277 systemd-networkd[1375]: vethc9d0caee: Link UP
Dec 13 01:18:01.181492 kernel: cni0: port 2(vethc9d0caee) entered blocking state
Dec 13 01:18:01.181558 kernel: cni0: port 2(vethc9d0caee) entered disabled state
Dec 13 01:18:01.181584 kernel: vethc9d0caee: entered allmulticast mode
Dec 13 01:18:01.181613 kernel: vethc9d0caee: entered promiscuous mode
Dec 13 01:18:01.182877 kernel: cni0: port 2(vethc9d0caee) entered blocking state
Dec 13 01:18:01.182916 kernel: cni0: port 2(vethc9d0caee) entered forwarding state
Dec 13 01:18:01.185878 kernel: cni0: port 2(vethc9d0caee) entered disabled state
Dec 13 01:18:01.191904 kernel: cni0: port 2(vethc9d0caee) entered blocking state
Dec 13 01:18:01.191965 kernel: cni0: port 2(vethc9d0caee) entered forwarding state
Dec 13 01:18:01.192450 systemd-networkd[1375]: vethc9d0caee: Gained carrier
Dec 13 01:18:01.194132 containerd[1443]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"}
Dec 13 01:18:01.194132 containerd[1443]: delegateAdd: netconf sent to delegate plugin:
Dec 13 01:18:01.209524 containerd[1443]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Dec 13 01:18:01.209524 containerd[1443]: time="2024-12-13T01:18:01.209432812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:18:01.209524 containerd[1443]: time="2024-12-13T01:18:01.209479972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:18:01.209524 containerd[1443]: time="2024-12-13T01:18:01.209491652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:01.209665 containerd[1443]: time="2024-12-13T01:18:01.209558572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:01.234060 systemd[1]: Started cri-containerd-932babd3316f313718b0ad1688353275d37088ebf0aaa337b52d09651d18dd74.scope - libcontainer container 932babd3316f313718b0ad1688353275d37088ebf0aaa337b52d09651d18dd74.
Dec 13 01:18:01.243313 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:18:01.252914 kubelet[2489]: E1213 01:18:01.252889    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:01.263841 kubelet[2489]: I1213 01:18:01.263053    2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kvjc4" podStartSLOduration=21.263016944 podStartE2EDuration="21.263016944s" podCreationTimestamp="2024-12-13 01:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:01.262311424 +0000 UTC m=+37.203647232" watchObservedRunningTime="2024-12-13 01:18:01.263016944 +0000 UTC m=+37.204352712"
Dec 13 01:18:01.268114 containerd[1443]: time="2024-12-13T01:18:01.268068105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dfbnw,Uid:916a3560-3096-47d7-a0a9-3db5ba2723b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"932babd3316f313718b0ad1688353275d37088ebf0aaa337b52d09651d18dd74\""
Dec 13 01:18:01.270988 kubelet[2489]: E1213 01:18:01.269407    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:01.271337 containerd[1443]: time="2024-12-13T01:18:01.271296226Z" level=info msg="CreateContainer within sandbox \"932babd3316f313718b0ad1688353275d37088ebf0aaa337b52d09651d18dd74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:18:01.284853 containerd[1443]: time="2024-12-13T01:18:01.284758429Z" level=info msg="CreateContainer within sandbox \"932babd3316f313718b0ad1688353275d37088ebf0aaa337b52d09651d18dd74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6545a172961e2a00da3269549964892216c736f213b28f20501463cca31ecfe\""
Dec 13 01:18:01.285425 containerd[1443]: time="2024-12-13T01:18:01.285388749Z" level=info msg="StartContainer for \"a6545a172961e2a00da3269549964892216c736f213b28f20501463cca31ecfe\""
Dec 13 01:18:01.308033 systemd[1]: Started cri-containerd-a6545a172961e2a00da3269549964892216c736f213b28f20501463cca31ecfe.scope - libcontainer container a6545a172961e2a00da3269549964892216c736f213b28f20501463cca31ecfe.
Dec 13 01:18:01.335689 containerd[1443]: time="2024-12-13T01:18:01.335421040Z" level=info msg="StartContainer for \"a6545a172961e2a00da3269549964892216c736f213b28f20501463cca31ecfe\" returns successfully"
Dec 13 01:18:01.544981 systemd-networkd[1375]: veth07326b3a: Gained IPv6LL
Dec 13 01:18:02.056968 systemd-networkd[1375]: cni0: Gained IPv6LL
Dec 13 01:18:02.258780 kubelet[2489]: E1213 01:18:02.258710    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:02.258780 kubelet[2489]: E1213 01:18:02.258756    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:02.440958 systemd-networkd[1375]: vethc9d0caee: Gained IPv6LL
Dec 13 01:18:02.885063 systemd[1]: Started sshd@7-10.0.0.9:22-10.0.0.1:33944.service - OpenSSH per-connection server daemon (10.0.0.1:33944).
Dec 13 01:18:02.923024 sshd[3464]: Accepted publickey for core from 10.0.0.1 port 33944 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:02.924594 sshd[3464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:02.928856 systemd-logind[1418]: New session 8 of user core.
Dec 13 01:18:02.945995 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:18:03.057248 sshd[3464]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:03.070353 systemd[1]: sshd@7-10.0.0.9:22-10.0.0.1:33944.service: Deactivated successfully.
Dec 13 01:18:03.073860 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:18:03.075239 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:18:03.076204 systemd[1]: Started sshd@8-10.0.0.9:22-10.0.0.1:33948.service - OpenSSH per-connection server daemon (10.0.0.1:33948).
Dec 13 01:18:03.077183 systemd-logind[1418]: Removed session 8.
Dec 13 01:18:03.112378 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 33948 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:03.114023 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:03.117719 systemd-logind[1418]: New session 9 of user core.
Dec 13 01:18:03.124962 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:18:03.277645 sshd[3480]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:03.289212 systemd[1]: Started sshd@9-10.0.0.9:22-10.0.0.1:33956.service - OpenSSH per-connection server daemon (10.0.0.1:33956).
Dec 13 01:18:03.292154 systemd[1]: sshd@8-10.0.0.9:22-10.0.0.1:33948.service: Deactivated successfully.
Dec 13 01:18:03.293790 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:18:03.295060 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:18:03.297176 systemd-logind[1418]: Removed session 9.
Dec 13 01:18:03.331448 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 33956 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:03.332675 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:03.336878 systemd-logind[1418]: New session 10 of user core.
Dec 13 01:18:03.352004 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:18:03.463146 sshd[3490]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:03.466378 systemd[1]: sshd@9-10.0.0.9:22-10.0.0.1:33956.service: Deactivated successfully.
Dec 13 01:18:03.467988 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:18:03.472043 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:18:03.473053 systemd-logind[1418]: Removed session 10.
Dec 13 01:18:06.336806 kubelet[2489]: E1213 01:18:06.336755    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:06.360739 kubelet[2489]: I1213 01:18:06.360468    2489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dfbnw" podStartSLOduration=26.36043046 podStartE2EDuration="26.36043046s" podCreationTimestamp="2024-12-13 01:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:02.270213048 +0000 UTC m=+38.211548856" watchObservedRunningTime="2024-12-13 01:18:06.36043046 +0000 UTC m=+42.301766268"
Dec 13 01:18:07.272209 kubelet[2489]: E1213 01:18:07.272175    2489 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:08.471624 systemd[1]: Started sshd@10-10.0.0.9:22-10.0.0.1:33970.service - OpenSSH per-connection server daemon (10.0.0.1:33970).
Dec 13 01:18:08.507317 sshd[3534]: Accepted publickey for core from 10.0.0.1 port 33970 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:08.508920 sshd[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:08.513145 systemd-logind[1418]: New session 11 of user core.
Dec 13 01:18:08.519966 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:18:08.625873 sshd[3534]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:08.636306 systemd[1]: sshd@10-10.0.0.9:22-10.0.0.1:33970.service: Deactivated successfully.
Dec 13 01:18:08.639038 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:18:08.640498 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:18:08.649283 systemd[1]: Started sshd@11-10.0.0.9:22-10.0.0.1:33986.service - OpenSSH per-connection server daemon (10.0.0.1:33986).
Dec 13 01:18:08.650321 systemd-logind[1418]: Removed session 11.
Dec 13 01:18:08.680750 sshd[3549]: Accepted publickey for core from 10.0.0.1 port 33986 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:08.682071 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:08.687222 systemd-logind[1418]: New session 12 of user core.
Dec 13 01:18:08.699000 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:18:08.975149 sshd[3549]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:08.988452 systemd[1]: sshd@11-10.0.0.9:22-10.0.0.1:33986.service: Deactivated successfully.
Dec 13 01:18:08.990715 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:18:08.992426 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:18:08.993953 systemd[1]: Started sshd@12-10.0.0.9:22-10.0.0.1:33990.service - OpenSSH per-connection server daemon (10.0.0.1:33990).
Dec 13 01:18:08.994933 systemd-logind[1418]: Removed session 12.
Dec 13 01:18:09.034084 sshd[3561]: Accepted publickey for core from 10.0.0.1 port 33990 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:09.035470 sshd[3561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:09.039903 systemd-logind[1418]: New session 13 of user core.
Dec 13 01:18:09.050482 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:18:10.329415 sshd[3561]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:10.340536 systemd[1]: sshd@12-10.0.0.9:22-10.0.0.1:33990.service: Deactivated successfully.
Dec 13 01:18:10.342634 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:18:10.346244 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:18:10.360303 systemd[1]: Started sshd@13-10.0.0.9:22-10.0.0.1:33998.service - OpenSSH per-connection server daemon (10.0.0.1:33998).
Dec 13 01:18:10.361639 systemd-logind[1418]: Removed session 13.
Dec 13 01:18:10.394082 sshd[3582]: Accepted publickey for core from 10.0.0.1 port 33998 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:10.395516 sshd[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:10.399444 systemd-logind[1418]: New session 14 of user core.
Dec 13 01:18:10.407107 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:18:10.621648 sshd[3582]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:10.633804 systemd[1]: sshd@13-10.0.0.9:22-10.0.0.1:33998.service: Deactivated successfully.
Dec 13 01:18:10.635751 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:18:10.639836 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:18:10.651165 systemd[1]: Started sshd@14-10.0.0.9:22-10.0.0.1:34004.service - OpenSSH per-connection server daemon (10.0.0.1:34004).
Dec 13 01:18:10.652022 systemd-logind[1418]: Removed session 14.
Dec 13 01:18:10.682294 sshd[3595]: Accepted publickey for core from 10.0.0.1 port 34004 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:10.683666 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:10.687552 systemd-logind[1418]: New session 15 of user core.
Dec 13 01:18:10.695010 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:18:10.801014 sshd[3595]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:10.804336 systemd[1]: sshd@14-10.0.0.9:22-10.0.0.1:34004.service: Deactivated successfully.
Dec 13 01:18:10.806212 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:18:10.806885 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:18:10.807655 systemd-logind[1418]: Removed session 15.
Dec 13 01:18:15.814218 systemd[1]: Started sshd@15-10.0.0.9:22-10.0.0.1:33020.service - OpenSSH per-connection server daemon (10.0.0.1:33020).
Dec 13 01:18:15.846604 sshd[3634]: Accepted publickey for core from 10.0.0.1 port 33020 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:15.847912 sshd[3634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:15.854559 systemd-logind[1418]: New session 16 of user core.
Dec 13 01:18:15.872458 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:18:15.983221 sshd[3634]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:15.986765 systemd[1]: sshd@15-10.0.0.9:22-10.0.0.1:33020.service: Deactivated successfully.
Dec 13 01:18:15.988623 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:18:15.989353 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:18:15.990217 systemd-logind[1418]: Removed session 16.
Dec 13 01:18:20.997581 systemd[1]: Started sshd@16-10.0.0.9:22-10.0.0.1:33026.service - OpenSSH per-connection server daemon (10.0.0.1:33026).
Dec 13 01:18:21.042695 sshd[3673]: Accepted publickey for core from 10.0.0.1 port 33026 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:21.044008 sshd[3673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:21.047616 systemd-logind[1418]: New session 17 of user core.
Dec 13 01:18:21.059034 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:18:21.171213 sshd[3673]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:21.173813 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:18:21.175098 systemd[1]: sshd@16-10.0.0.9:22-10.0.0.1:33026.service: Deactivated successfully.
Dec 13 01:18:21.177205 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:18:21.177944 systemd-logind[1418]: Removed session 17.
Dec 13 01:18:26.183444 systemd[1]: Started sshd@17-10.0.0.9:22-10.0.0.1:53928.service - OpenSSH per-connection server daemon (10.0.0.1:53928).
Dec 13 01:18:26.219561 sshd[3710]: Accepted publickey for core from 10.0.0.1 port 53928 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:26.220773 sshd[3710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:26.224424 systemd-logind[1418]: New session 18 of user core.
Dec 13 01:18:26.232006 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:18:26.341657 sshd[3710]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:26.345285 systemd[1]: sshd@17-10.0.0.9:22-10.0.0.1:53928.service: Deactivated successfully.
Dec 13 01:18:26.347169 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:18:26.347841 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:18:26.348590 systemd-logind[1418]: Removed session 18.
Dec 13 01:18:31.351227 systemd[1]: Started sshd@18-10.0.0.9:22-10.0.0.1:53944.service - OpenSSH per-connection server daemon (10.0.0.1:53944).
Dec 13 01:18:31.388727 sshd[3746]: Accepted publickey for core from 10.0.0.1 port 53944 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:31.389991 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:31.393425 systemd-logind[1418]: New session 19 of user core.
Dec 13 01:18:31.409977 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:18:31.517963 sshd[3746]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:31.521207 systemd[1]: sshd@18-10.0.0.9:22-10.0.0.1:53944.service: Deactivated successfully.
Dec 13 01:18:31.523811 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:18:31.525854 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:18:31.527356 systemd-logind[1418]: Removed session 19.