Dec 13 01:33:10.925733 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 01:33:10.925754 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:33:10.925764 kernel: KASLR enabled
Dec 13 01:33:10.925770 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:33:10.925775 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Dec 13 01:33:10.925781 kernel: random: crng init done
Dec 13 01:33:10.925788 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:33:10.925794 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Dec 13 01:33:10.925800 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:33:10.925808 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:10.925814 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:10.925820 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:10.925826 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:10.925832 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:10.925840 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:10.925847 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:10.925854 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:10.925861 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:33:10.925867 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 01:33:10.925873 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:33:10.925880 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:33:10.925887 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Dec 13 01:33:10.925893 kernel: Zone ranges:
Dec 13 01:33:10.925900 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:33:10.925906 kernel: DMA32 empty
Dec 13 01:33:10.925913 kernel: Normal empty
Dec 13 01:33:10.925919 kernel: Movable zone start for each node
Dec 13 01:33:10.925926 kernel: Early memory node ranges
Dec 13 01:33:10.925933 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Dec 13 01:33:10.925939 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Dec 13 01:33:10.925945 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Dec 13 01:33:10.925951 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 01:33:10.925958 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 01:33:10.925964 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 01:33:10.925970 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 01:33:10.925977 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:33:10.925983 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 01:33:10.925990 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:33:10.925997 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 01:33:10.926003 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:33:10.926012 kernel: psci: Trusted OS migration not required
Dec 13 01:33:10.926019 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:33:10.926026 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 01:33:10.926034 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:33:10.926041 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:33:10.926048 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 01:33:10.926055 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:33:10.926061 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:33:10.926068 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 01:33:10.926075 kernel: CPU features: detected: Spectre-v4
Dec 13 01:33:10.926081 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:33:10.926088 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 01:33:10.926095 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 01:33:10.926103 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 01:33:10.926110 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 01:33:10.926117 kernel: alternatives: applying boot alternatives
Dec 13 01:33:10.926124 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:33:10.926131 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:33:10.926138 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:33:10.926145 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:33:10.926152 kernel: Fallback order for Node 0: 0
Dec 13 01:33:10.926159 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 01:33:10.926165 kernel: Policy zone: DMA
Dec 13 01:33:10.926172 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:33:10.926180 kernel: software IO TLB: area num 4.
Dec 13 01:33:10.926187 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Dec 13 01:33:10.926209 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Dec 13 01:33:10.926216 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:33:10.926223 kernel: trace event string verifier disabled
Dec 13 01:33:10.926230 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:33:10.926238 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:33:10.926245 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:33:10.926252 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:33:10.926259 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:33:10.926266 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:33:10.926273 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:33:10.926281 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:33:10.926288 kernel: GICv3: 256 SPIs implemented
Dec 13 01:33:10.926295 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:33:10.926301 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:33:10.926308 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 01:33:10.926315 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 01:33:10.926322 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 01:33:10.926329 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:33:10.926336 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:33:10.926342 kernel: GICv3: using LPI property table @0x00000000400f0000
Dec 13 01:33:10.926349 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Dec 13 01:33:10.926357 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:33:10.926364 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:33:10.926371 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 01:33:10.926378 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 01:33:10.926385 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 01:33:10.926392 kernel: arm-pv: using stolen time PV
Dec 13 01:33:10.926399 kernel: Console: colour dummy device 80x25
Dec 13 01:33:10.926406 kernel: ACPI: Core revision 20230628
Dec 13 01:33:10.926413 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 01:33:10.926420 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:33:10.926429 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:33:10.926436 kernel: landlock: Up and running.
Dec 13 01:33:10.926442 kernel: SELinux: Initializing.
Dec 13 01:33:10.926455 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:33:10.926463 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:33:10.926470 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:33:10.926477 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:33:10.926484 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:33:10.926491 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:33:10.926499 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 01:33:10.926506 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 01:33:10.926513 kernel: Remapping and enabling EFI services.
Dec 13 01:33:10.926520 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:33:10.926527 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:33:10.926534 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 01:33:10.926541 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Dec 13 01:33:10.926548 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:33:10.926555 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 01:33:10.926562 kernel: Detected PIPT I-cache on CPU2
Dec 13 01:33:10.926571 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 01:33:10.926578 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Dec 13 01:33:10.926589 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:33:10.926598 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 01:33:10.926605 kernel: Detected PIPT I-cache on CPU3
Dec 13 01:33:10.926612 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 01:33:10.926620 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Dec 13 01:33:10.926627 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:33:10.926634 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 01:33:10.926643 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:33:10.926650 kernel: SMP: Total of 4 processors activated.
Dec 13 01:33:10.926657 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:33:10.926665 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 01:33:10.926672 kernel: CPU features: detected: Common not Private translations
Dec 13 01:33:10.926679 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:33:10.926694 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 01:33:10.926702 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 01:33:10.926711 kernel: CPU features: detected: LSE atomic instructions
Dec 13 01:33:10.926719 kernel: CPU features: detected: Privileged Access Never
Dec 13 01:33:10.926726 kernel: CPU features: detected: RAS Extension Support
Dec 13 01:33:10.926734 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 01:33:10.926741 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:33:10.926748 kernel: alternatives: applying system-wide alternatives
Dec 13 01:33:10.926755 kernel: devtmpfs: initialized
Dec 13 01:33:10.926763 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:33:10.926770 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:33:10.926779 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:33:10.926786 kernel: SMBIOS 3.0.0 present.
Dec 13 01:33:10.926793 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Dec 13 01:33:10.926800 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:33:10.926808 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:33:10.926815 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:33:10.926823 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:33:10.926830 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:33:10.926837 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Dec 13 01:33:10.926846 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:33:10.926853 kernel: cpuidle: using governor menu
Dec 13 01:33:10.926861 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:33:10.926868 kernel: ASID allocator initialised with 32768 entries
Dec 13 01:33:10.926876 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:33:10.926883 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:33:10.926890 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 01:33:10.926898 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 01:33:10.926905 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:33:10.926914 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:33:10.926921 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:33:10.926929 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:33:10.926936 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:33:10.926943 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:33:10.926951 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:33:10.926958 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:33:10.926965 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:33:10.926973 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:33:10.926981 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:33:10.926989 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:33:10.927001 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:33:10.927009 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:33:10.927016 kernel: ACPI: Interpreter enabled
Dec 13 01:33:10.927024 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:33:10.927031 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:33:10.927038 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 01:33:10.927045 kernel: printk: console [ttyAMA0] enabled
Dec 13 01:33:10.927054 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:33:10.927188 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:33:10.927265 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:33:10.927333 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:33:10.927400 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 01:33:10.927475 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 01:33:10.927486 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 01:33:10.927496 kernel: PCI host bridge to bus 0000:00
Dec 13 01:33:10.927586 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 01:33:10.927651 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:33:10.927754 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 01:33:10.927821 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:33:10.927906 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 01:33:10.927985 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:33:10.928062 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 01:33:10.928134 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 01:33:10.928207 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:33:10.928277 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:33:10.928347 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 01:33:10.928416 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 01:33:10.928486 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 01:33:10.928552 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:33:10.928614 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 01:33:10.928624 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:33:10.928632 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:33:10.928640 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:33:10.928648 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:33:10.928656 kernel: iommu: Default domain type: Translated
Dec 13 01:33:10.928663 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:33:10.928673 kernel: efivars: Registered efivars operations
Dec 13 01:33:10.928680 kernel: vgaarb: loaded
Dec 13 01:33:10.928720 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:33:10.928727 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:33:10.928735 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:33:10.928742 kernel: pnp: PnP ACPI init
Dec 13 01:33:10.928829 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 01:33:10.928840 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:33:10.928850 kernel: NET: Registered PF_INET protocol family
Dec 13 01:33:10.928857 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:33:10.928865 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:33:10.928872 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:33:10.928880 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:33:10.928887 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:33:10.928894 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:33:10.928902 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:33:10.928909 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:33:10.928918 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:33:10.928926 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:33:10.928933 kernel: kvm [1]: HYP mode not available
Dec 13 01:33:10.928940 kernel: Initialise system trusted keyrings
Dec 13 01:33:10.928947 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:33:10.928955 kernel: Key type asymmetric registered
Dec 13 01:33:10.928962 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:33:10.928969 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:33:10.928976 kernel: io scheduler mq-deadline registered
Dec 13 01:33:10.928985 kernel: io scheduler kyber registered
Dec 13 01:33:10.928992 kernel: io scheduler bfq registered
Dec 13 01:33:10.928999 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:33:10.929007 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:33:10.929014 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:33:10.929087 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 01:33:10.929097 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:33:10.929105 kernel: thunder_xcv, ver 1.0
Dec 13 01:33:10.929112 kernel: thunder_bgx, ver 1.0
Dec 13 01:33:10.929121 kernel: nicpf, ver 1.0
Dec 13 01:33:10.929129 kernel: nicvf, ver 1.0
Dec 13 01:33:10.929203 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:33:10.929272 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:33:10 UTC (1734053590)
Dec 13 01:33:10.929282 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:33:10.929290 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 01:33:10.929297 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:33:10.929305 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:33:10.929314 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:33:10.929321 kernel: Segment Routing with IPv6
Dec 13 01:33:10.929329 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:33:10.929336 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:33:10.929343 kernel: Key type dns_resolver registered
Dec 13 01:33:10.929351 kernel: registered taskstats version 1
Dec 13 01:33:10.929358 kernel: Loading compiled-in X.509 certificates
Dec 13 01:33:10.929366 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:33:10.929373 kernel: Key type .fscrypt registered
Dec 13 01:33:10.929382 kernel: Key type fscrypt-provisioning registered
Dec 13 01:33:10.929390 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:33:10.929397 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:33:10.929404 kernel: ima: No architecture policies found
Dec 13 01:33:10.929412 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:33:10.929419 kernel: clk: Disabling unused clocks
Dec 13 01:33:10.929426 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:33:10.929433 kernel: Run /init as init process
Dec 13 01:33:10.929441 kernel: with arguments:
Dec 13 01:33:10.929457 kernel: /init
Dec 13 01:33:10.929464 kernel: with environment:
Dec 13 01:33:10.929471 kernel: HOME=/
Dec 13 01:33:10.929479 kernel: TERM=linux
Dec 13 01:33:10.929486 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:33:10.929495 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:33:10.929505 systemd[1]: Detected virtualization kvm.
Dec 13 01:33:10.929513 systemd[1]: Detected architecture arm64.
Dec 13 01:33:10.929523 systemd[1]: Running in initrd.
Dec 13 01:33:10.929531 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:33:10.929538 systemd[1]: Hostname set to .
Dec 13 01:33:10.929546 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:33:10.929556 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:33:10.929564 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:33:10.929573 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:33:10.929581 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:33:10.929591 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:33:10.929599 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:33:10.929608 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:33:10.929617 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:33:10.929625 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:33:10.929634 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:33:10.929643 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:33:10.929652 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:33:10.929660 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:33:10.929668 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:33:10.929676 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:33:10.929684 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:33:10.929784 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:33:10.929793 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:33:10.929801 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:33:10.929811 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:33:10.929819 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:33:10.929827 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:33:10.929835 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:33:10.929843 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:33:10.929851 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:33:10.929860 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:33:10.929869 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:33:10.929877 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:33:10.929887 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:33:10.929896 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:33:10.929905 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:33:10.929913 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:33:10.929922 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:33:10.929930 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:33:10.929940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:33:10.929972 systemd-journald[237]: Collecting audit messages is disabled.
Dec 13 01:33:10.929993 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:33:10.930002 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:33:10.930011 systemd-journald[237]: Journal started
Dec 13 01:33:10.930030 systemd-journald[237]: Runtime Journal (/run/log/journal/5fcf1a8ecef849f2ae75f25c4f74c96d) is 5.9M, max 47.3M, 41.4M free.
Dec 13 01:33:10.920874 systemd-modules-load[238]: Inserted module 'overlay'
Dec 13 01:33:10.933715 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:33:10.933746 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:33:10.938979 systemd-modules-load[238]: Inserted module 'br_netfilter'
Dec 13 01:33:10.939878 kernel: Bridge firewalling registered
Dec 13 01:33:10.946879 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:33:10.948627 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:33:10.950535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:33:10.952620 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:33:10.954683 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:33:10.958109 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:33:10.959828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:33:10.961099 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:33:10.970478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:33:10.973170 dracut-cmdline[267]: dracut-dracut-053
Dec 13 01:33:10.979495 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:33:10.978886 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:33:11.007093 systemd-resolved[281]: Positive Trust Anchors:
Dec 13 01:33:11.007111 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:33:11.007143 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:33:11.011790 systemd-resolved[281]: Defaulting to hostname 'linux'.
Dec 13 01:33:11.013008 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:33:11.016968 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:33:11.050720 kernel: SCSI subsystem initialized
Dec 13 01:33:11.054701 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:33:11.062722 kernel: iscsi: registered transport (tcp)
Dec 13 01:33:11.075043 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:33:11.075059 kernel: QLogic iSCSI HBA Driver
Dec 13 01:33:11.116044 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:33:11.126830 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:33:11.144874 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:33:11.144925 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:33:11.146512 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:33:11.193736 kernel: raid6: neonx8 gen() 15786 MB/s
Dec 13 01:33:11.210710 kernel: raid6: neonx4 gen() 15659 MB/s
Dec 13 01:33:11.227719 kernel: raid6: neonx2 gen() 13300 MB/s
Dec 13 01:33:11.244719 kernel: raid6: neonx1 gen() 10486 MB/s
Dec 13 01:33:11.261717 kernel: raid6: int64x8 gen() 6962 MB/s
Dec 13 01:33:11.278717 kernel: raid6: int64x4 gen() 7341 MB/s
Dec 13 01:33:11.295709 kernel: raid6: int64x2 gen() 6131 MB/s
Dec 13 01:33:11.312851 kernel: raid6: int64x1 gen() 5053 MB/s
Dec 13 01:33:11.312895 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s
Dec 13 01:33:11.330808 kernel: raid6: .... xor() 11921 MB/s, rmw enabled
Dec 13 01:33:11.330826 kernel: raid6: using neon recovery algorithm
Dec 13 01:33:11.336209 kernel: xor: measuring software checksum speed
Dec 13 01:33:11.336226 kernel: 8regs : 19759 MB/sec
Dec 13 01:33:11.336895 kernel: 32regs : 19655 MB/sec
Dec 13 01:33:11.338150 kernel: arm64_neon : 26954 MB/sec
Dec 13 01:33:11.338163 kernel: xor: using function: arm64_neon (26954 MB/sec)
Dec 13 01:33:11.388715 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:33:11.399684 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:33:11.415908 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:33:11.426945 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Dec 13 01:33:11.430598 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:33:11.437861 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:33:11.449193 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Dec 13 01:33:11.480370 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:33:11.496883 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:33:11.535147 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:33:11.541855 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:33:11.553359 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:33:11.554964 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:33:11.557320 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:33:11.559661 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:33:11.567822 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:33:11.576614 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:33:11.586724 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 13 01:33:11.601539 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:33:11.601684 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:33:11.601715 kernel: GPT:9289727 != 19775487 Dec 13 01:33:11.601725 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:33:11.601734 kernel: GPT:9289727 != 19775487 Dec 13 01:33:11.601751 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:33:11.601761 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:33:11.596978 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:33:11.597042 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:33:11.598402 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:33:11.600236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
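The GPT warnings above are the usual signature of a disk image that was grown after partitioning: the backup GPT header belongs on the last LBA of the disk, but the primary header still records the old, smaller disk size. A quick sanity check of the numbers in the log (tools like `sgdisk -e` relocate the backup structures to the real end of the disk; this sketch only does the arithmetic):

```python
# Numbers taken from the virtio_blk / GPT records above.
disk_sectors = 19775488              # "[vda] 19775488 512-byte logical blocks"
expected_alt_lba = disk_sectors - 1  # backup GPT header belongs on the last LBA
found_alt_lba = 9289727              # where the primary header says it sits

print(expected_alt_lba)                         # 19775487, matching "9289727 != 19775487"
grown = (expected_alt_lba - found_alt_lba) * 512
print(grown // 2**30)                           # image grew by exactly 5 GiB
```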
Dec 13 01:33:11.600298 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:33:11.601524 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:33:11.616043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:33:11.622108 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (511)
Dec 13 01:33:11.622130 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (512)
Dec 13 01:33:11.629500 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:33:11.635723 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:33:11.640803 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:33:11.647693 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:33:11.648881 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:33:11.654591 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:33:11.666840 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:33:11.668609 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:33:11.674649 disk-uuid[550]: Primary Header is updated.
Dec 13 01:33:11.674649 disk-uuid[550]: Secondary Entries is updated.
Dec 13 01:33:11.674649 disk-uuid[550]: Secondary Header is updated.
Dec 13 01:33:11.678729 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:33:11.695809 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:33:12.695710 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:33:12.696461 disk-uuid[551]: The operation has completed successfully.
Dec 13 01:33:12.719143 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:33:12.719240 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:33:12.739818 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:33:12.742539 sh[573]: Success
Dec 13 01:33:12.753703 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:33:12.781703 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:33:12.794132 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:33:12.795800 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:33:12.806233 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:33:12.806284 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:33:12.806297 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:33:12.808142 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:33:12.808158 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:33:12.811881 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:33:12.813225 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:33:12.824846 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:33:12.826439 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:33:12.834745 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:33:12.834783 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:33:12.834794 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:33:12.837729 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:33:12.845040 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:33:12.847044 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:33:12.852585 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:33:12.859857 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:33:12.923756 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:33:12.936860 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:33:12.945543 ignition[668]: Ignition 2.19.0
Dec 13 01:33:12.945552 ignition[668]: Stage: fetch-offline
Dec 13 01:33:12.945587 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:12.945596 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:12.945765 ignition[668]: parsed url from cmdline: ""
Dec 13 01:33:12.945768 ignition[668]: no config URL provided
Dec 13 01:33:12.945773 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:33:12.945781 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:33:12.945802 ignition[668]: op(1): [started] loading QEMU firmware config module
Dec 13 01:33:12.945807 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:33:12.952624 ignition[668]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:33:12.952643 ignition[668]: QEMU firmware config was not found. Ignoring...
Dec 13 01:33:12.956793 systemd-networkd[767]: lo: Link UP
Dec 13 01:33:12.956805 systemd-networkd[767]: lo: Gained carrier
Dec 13 01:33:12.957456 systemd-networkd[767]: Enumeration completed
Dec 13 01:33:12.957619 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:33:12.957879 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:33:12.957882 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:33:12.958636 systemd-networkd[767]: eth0: Link UP
Dec 13 01:33:12.958639 systemd-networkd[767]: eth0: Gained carrier
Dec 13 01:33:12.958645 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:33:12.959120 systemd[1]: Reached target network.target - Network.
Dec 13 01:33:12.981735 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:33:12.987277 ignition[668]: parsing config with SHA512: f1844940a2f2e677e86bf2ef6893f17be3e65a89e9481b00589b585e3a77cf549940eb7a398c0776748c568f0f3aa779d7e8f0d64f61908fa29d340d37fc085b
Dec 13 01:33:12.992892 unknown[668]: fetched base config from "system"
Dec 13 01:33:12.992907 unknown[668]: fetched user config from "qemu"
Dec 13 01:33:12.993309 ignition[668]: fetch-offline: fetch-offline passed
Dec 13 01:33:12.995108 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:33:12.993380 ignition[668]: Ignition finished successfully
Dec 13 01:33:12.997149 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:33:13.009853 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
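Before applying a config, Ignition logs the SHA512 of the merged config it is about to parse, as in the "parsing config with SHA512: f1844940…" record above. Reproducing that kind of digest is a one-liner; the config body below is a hypothetical placeholder, not the config this VM actually received:

```python
import hashlib

config = b'{"ignition": {"version": "3.3.0"}}'  # hypothetical config body
digest = hashlib.sha512(config).hexdigest()
print(len(digest))  # 128 hex characters, the same shape as the digest in the log
```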
Dec 13 01:33:13.020823 ignition[773]: Ignition 2.19.0
Dec 13 01:33:13.020834 ignition[773]: Stage: kargs
Dec 13 01:33:13.020997 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:13.021006 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:13.021843 ignition[773]: kargs: kargs passed
Dec 13 01:33:13.021890 ignition[773]: Ignition finished successfully
Dec 13 01:33:13.025473 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:33:13.041906 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:33:13.051161 ignition[781]: Ignition 2.19.0
Dec 13 01:33:13.051171 ignition[781]: Stage: disks
Dec 13 01:33:13.051349 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:13.051358 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:13.052225 ignition[781]: disks: disks passed
Dec 13 01:33:13.054035 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:33:13.052267 ignition[781]: Ignition finished successfully
Dec 13 01:33:13.055552 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:33:13.056756 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:33:13.058660 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:33:13.060183 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:33:13.062014 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:33:13.070823 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:33:13.081405 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:33:13.084923 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:33:13.087653 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:33:13.138719 kernel: EXT4-fs (vda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:33:13.138708 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:33:13.139932 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:33:13.152784 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:33:13.154766 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:33:13.156258 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:33:13.160747 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Dec 13 01:33:13.156300 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:33:13.156322 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:33:13.166570 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:33:13.166604 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:33:13.166614 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:33:13.167301 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:33:13.170089 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:33:13.172749 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:33:13.174379 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:33:13.215758 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:33:13.220111 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:33:13.224419 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:33:13.228680 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:33:13.301559 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:33:13.313788 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:33:13.315356 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:33:13.321708 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:33:13.335817 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:33:13.338869 ignition[915]: INFO : Ignition 2.19.0
Dec 13 01:33:13.338869 ignition[915]: INFO : Stage: mount
Dec 13 01:33:13.341279 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:13.341279 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:13.341279 ignition[915]: INFO : mount: mount passed
Dec 13 01:33:13.341279 ignition[915]: INFO : Ignition finished successfully
Dec 13 01:33:13.341969 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:33:13.348797 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:33:13.805106 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:33:13.813897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:33:13.820585 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Dec 13 01:33:13.820621 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:33:13.820633 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:33:13.822205 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:33:13.824709 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:33:13.825602 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:33:13.843193 ignition[944]: INFO : Ignition 2.19.0
Dec 13 01:33:13.843193 ignition[944]: INFO : Stage: files
Dec 13 01:33:13.844817 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:13.844817 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:13.844817 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:33:13.848311 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:33:13.848311 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:33:13.848311 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:33:13.848311 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:33:13.848311 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:33:13.847734 unknown[944]: wrote ssh authorized keys file for user: core
Dec 13 01:33:13.855953 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:33:13.855953 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:33:13.916912 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:33:14.030327 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:33:14.032285 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 01:33:14.034149 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Dec 13 01:33:14.351082 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:33:14.624866 systemd-networkd[767]: eth0: Gained IPv6LL
Dec 13 01:33:14.832346 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 01:33:14.832346 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:33:14.835987 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:33:14.835987 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:33:14.835987 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:33:14.835987 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 01:33:14.835987 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:33:14.835987 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:33:14.835987 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 01:33:14.835987 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:33:14.857749 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:33:14.861453 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:33:14.865612 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:33:14.865612 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:33:14.865612 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:33:14.865612 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:33:14.865612 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:33:14.865612 ignition[944]: INFO : files: files passed
Dec 13 01:33:14.865612 ignition[944]: INFO : Ignition finished successfully
Dec 13 01:33:14.863341 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:33:14.873835 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:33:14.876263 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:33:14.878065 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:33:14.878182 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
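The files-stage operations above (fetching the Helm tarball, writing the sysext link, enabling prepare-helm.service and disabling coreos-metadata.service) correspond to a user-supplied Ignition config. The actual config contents are not shown in the log; a hypothetical fragment that would produce roughly these operations looks like:

```
{
  "ignition": { "version": "3.3.0" },
  "storage": {
    "files": [
      {
        "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz" }
      }
    ],
    "links": [
      {
        "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
      }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true },
      { "name": "coreos-metadata.service", "enabled": false }
    ]
  }
}
```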
Dec 13 01:33:14.885163 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:33:14.888721 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:33:14.888721 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:33:14.891708 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:33:14.891253 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:33:14.893069 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:33:14.902898 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:33:14.922090 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:33:14.922219 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:33:14.924486 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:33:14.926320 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:33:14.928094 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:33:14.928901 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:33:14.944753 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:33:14.947215 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:33:14.958822 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:33:14.960046 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:33:14.962010 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:33:14.963769 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:33:14.963895 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:33:14.966343 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:33:14.968451 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:33:14.970068 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:33:14.971803 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:33:14.973794 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:33:14.975805 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:33:14.977758 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:33:14.979780 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:33:14.981737 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:33:14.983471 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:33:14.985014 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:33:14.985142 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:33:14.987458 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:33:14.988606 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:33:14.990588 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:33:14.994754 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:33:14.996003 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:33:14.996121 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:33:14.998964 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:33:14.999086 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:33:15.001133 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:33:15.002609 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:33:15.006750 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:33:15.008029 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:33:15.010165 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:33:15.011730 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:33:15.011828 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:33:15.013396 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:33:15.013508 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:33:15.015044 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:33:15.015156 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:33:15.016952 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:33:15.017056 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:33:15.029886 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:33:15.031591 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:33:15.032504 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:33:15.032645 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:33:15.034618 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:33:15.034736 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:33:15.040130 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:33:15.040222 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:33:15.044445 ignition[998]: INFO : Ignition 2.19.0
Dec 13 01:33:15.044445 ignition[998]: INFO : Stage: umount
Dec 13 01:33:15.044445 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:33:15.044445 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:33:15.048410 ignition[998]: INFO : umount: umount passed
Dec 13 01:33:15.048410 ignition[998]: INFO : Ignition finished successfully
Dec 13 01:33:15.045273 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:33:15.047928 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:33:15.048051 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:33:15.049857 systemd[1]: Stopped target network.target - Network.
Dec 13 01:33:15.050963 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:33:15.051036 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:33:15.052732 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:33:15.052780 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:33:15.055905 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:33:15.055954 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:33:15.057625 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:33:15.057671 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:33:15.059647 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:33:15.061949 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:33:15.070004 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:33:15.070106 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:33:15.072239 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:33:15.072287 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:33:15.073764 systemd-networkd[767]: eth0: DHCPv6 lease lost
Dec 13 01:33:15.074967 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:33:15.075066 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:33:15.077181 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:33:15.077235 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:33:15.082807 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:33:15.084331 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:33:15.084388 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:33:15.086415 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:33:15.086470 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:33:15.088351 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:33:15.088394 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:33:15.090553 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:33:15.102400 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:33:15.102524 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:33:15.104655 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:33:15.104771 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:33:15.106363 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:33:15.106460 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:33:15.111347 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:33:15.111487 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:33:15.113662 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:33:15.113821 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:33:15.115609 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:33:15.115644 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:33:15.117483 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:33:15.117532 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:33:15.120337 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:33:15.120383 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:33:15.123081 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:33:15.123127 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:33:15.133859 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:33:15.134915 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:33:15.134978 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:33:15.137131 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:33:15.137178 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:33:15.139142 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:33:15.139188 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:33:15.141373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:33:15.141424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:33:15.143663 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:33:15.143793 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:33:15.146230 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:33:15.148141 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:33:15.158321 systemd[1]: Switching root.
Dec 13 01:33:15.185140 systemd-journald[237]: Journal stopped
Dec 13 01:33:15.885460 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:33:15.885513 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:33:15.885525 kernel: SELinux: policy capability open_perms=1
Dec 13 01:33:15.885535 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:33:15.885545 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:33:15.885554 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:33:15.885567 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:33:15.885579 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:33:15.885588 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:33:15.885598 kernel: audit: type=1403 audit(1734053595.322:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:33:15.885608 systemd[1]: Successfully loaded SELinux policy in 34.149ms.
Dec 13 01:33:15.885626 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.163ms.
Dec 13 01:33:15.885638 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:33:15.885649 systemd[1]: Detected virtualization kvm. Dec 13 01:33:15.885661 systemd[1]: Detected architecture arm64. Dec 13 01:33:15.885672 systemd[1]: Detected first boot. Dec 13 01:33:15.885683 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:33:15.885828 zram_generator::config[1042]: No configuration found. Dec 13 01:33:15.885841 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:33:15.885852 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:33:15.885863 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:33:15.885873 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:33:15.885885 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:33:15.885895 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:33:15.885908 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:33:15.885919 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:33:15.885929 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:33:15.885940 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:33:15.885950 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:33:15.885961 systemd[1]: Created slice user.slice - User and Session Slice. 
Dec 13 01:33:15.885971 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:33:15.885982 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:33:15.885992 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:33:15.886004 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:33:15.886014 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:33:15.886026 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:33:15.886037 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:33:15.886048 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:33:15.886058 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:33:15.886068 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:33:15.886079 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:33:15.886095 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:33:15.886106 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:33:15.886123 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:33:15.886134 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:33:15.886145 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:33:15.886155 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:33:15.886166 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:33:15.886176 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 01:33:15.886186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:33:15.886199 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:33:15.886209 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:33:15.886223 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:33:15.886233 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:33:15.886244 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:33:15.886255 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:33:15.886266 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:33:15.886291 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:33:15.886303 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:33:15.886315 systemd[1]: Reached target machines.target - Containers. Dec 13 01:33:15.886326 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:33:15.886337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:33:15.886348 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:33:15.886358 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:33:15.886368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:33:15.886379 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:33:15.886389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Dec 13 01:33:15.886401 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:33:15.886411 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:33:15.886422 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:33:15.886441 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:33:15.886452 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:33:15.886463 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:33:15.886473 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:33:15.886483 kernel: fuse: init (API version 7.39) Dec 13 01:33:15.886493 kernel: ACPI: bus type drm_connector registered Dec 13 01:33:15.886505 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:33:15.886518 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:33:15.886528 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:33:15.886538 kernel: loop: module loaded Dec 13 01:33:15.886548 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:33:15.886559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:33:15.886569 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:33:15.886579 systemd[1]: Stopped verity-setup.service. Dec 13 01:33:15.886612 systemd-journald[1116]: Collecting audit messages is disabled. Dec 13 01:33:15.886635 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:33:15.886646 systemd-journald[1116]: Journal started Dec 13 01:33:15.886667 systemd-journald[1116]: Runtime Journal (/run/log/journal/5fcf1a8ecef849f2ae75f25c4f74c96d) is 5.9M, max 47.3M, 41.4M free. 
Dec 13 01:33:15.678979 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:33:15.699126 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:33:15.699488 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:33:15.888930 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:33:15.889534 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:33:15.890889 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:33:15.891971 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:33:15.893271 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:33:15.894501 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:33:15.895750 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:33:15.897206 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:33:15.897358 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:33:15.898859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:33:15.899001 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:33:15.900409 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:33:15.900573 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:33:15.901983 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:33:15.903562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:33:15.903734 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:33:15.905179 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:33:15.905331 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Dec 13 01:33:15.906788 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:33:15.906917 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:33:15.908470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:33:15.909877 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:33:15.911341 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:33:15.923274 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:33:15.935790 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:33:15.937860 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:33:15.938956 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:33:15.938994 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:33:15.940892 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:33:15.943079 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:33:15.945135 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:33:15.946210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:33:15.947614 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:33:15.949465 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:33:15.950749 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 01:33:15.953849 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:33:15.955092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:33:15.958906 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:33:15.959094 systemd-journald[1116]: Time spent on flushing to /var/log/journal/5fcf1a8ecef849f2ae75f25c4f74c96d is 19.892ms for 856 entries. Dec 13 01:33:15.959094 systemd-journald[1116]: System Journal (/var/log/journal/5fcf1a8ecef849f2ae75f25c4f74c96d) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:33:15.987879 systemd-journald[1116]: Received client request to flush runtime journal. Dec 13 01:33:15.987926 kernel: loop0: detected capacity change from 0 to 114328 Dec 13 01:33:15.963314 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:33:15.967861 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:33:15.971311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:33:15.972785 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:33:15.974035 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:33:15.976788 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:33:15.978337 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:33:15.983859 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:33:15.992773 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:33:15.996913 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Dec 13 01:33:15.999937 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:33:16.004277 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:33:16.006123 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:33:16.016252 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:33:16.016901 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:33:16.016913 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Dec 13 01:33:16.016926 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Dec 13 01:33:16.019997 kernel: loop1: detected capacity change from 0 to 114432 Dec 13 01:33:16.022260 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:33:16.032564 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:33:16.034176 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:33:16.056758 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:33:16.058897 kernel: loop2: detected capacity change from 0 to 194096 Dec 13 01:33:16.065854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:33:16.080119 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Dec 13 01:33:16.080137 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Dec 13 01:33:16.083744 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 01:33:16.090750 kernel: loop3: detected capacity change from 0 to 114328 Dec 13 01:33:16.095704 kernel: loop4: detected capacity change from 0 to 114432 Dec 13 01:33:16.099709 kernel: loop5: detected capacity change from 0 to 194096 Dec 13 01:33:16.103604 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:33:16.104009 (sd-merge)[1181]: Merged extensions into '/usr'. Dec 13 01:33:16.107637 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:33:16.107650 systemd[1]: Reloading... Dec 13 01:33:16.174749 zram_generator::config[1214]: No configuration found. Dec 13 01:33:16.222955 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:33:16.260958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:33:16.296186 systemd[1]: Reloading finished in 188 ms. Dec 13 01:33:16.321017 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:33:16.322570 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:33:16.336845 systemd[1]: Starting ensure-sysext.service... Dec 13 01:33:16.338635 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:33:16.345546 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:33:16.345561 systemd[1]: Reloading... Dec 13 01:33:16.355154 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:33:16.355737 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Dec 13 01:33:16.356467 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:33:16.356823 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Dec 13 01:33:16.356939 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Dec 13 01:33:16.359352 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:33:16.359472 systemd-tmpfiles[1242]: Skipping /boot Dec 13 01:33:16.366562 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:33:16.366670 systemd-tmpfiles[1242]: Skipping /boot Dec 13 01:33:16.390722 zram_generator::config[1269]: No configuration found. Dec 13 01:33:16.472436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:33:16.507882 systemd[1]: Reloading finished in 162 ms. Dec 13 01:33:16.522473 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:33:16.535101 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:33:16.542471 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:33:16.545130 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:33:16.547303 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:33:16.550970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:33:16.560715 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:33:16.563143 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Dec 13 01:33:16.567416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:33:16.572033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:33:16.574834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:33:16.579027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:33:16.580131 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:33:16.587072 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:33:16.589141 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:33:16.589453 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Dec 13 01:33:16.591122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:33:16.591609 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:33:16.593480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:33:16.593709 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:33:16.595641 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:33:16.595808 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:33:16.605626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:33:16.614138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:33:16.621002 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:33:16.628107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 13 01:33:16.629260 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:33:16.632577 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:33:16.635450 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:33:16.637791 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:33:16.639369 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:33:16.642367 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:33:16.644074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:33:16.644204 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:33:16.645786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:33:16.645920 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:33:16.648182 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:33:16.648306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:33:16.650780 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:33:16.662716 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1335) Dec 13 01:33:16.670981 systemd[1]: Finished ensure-sysext.service. Dec 13 01:33:16.683793 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1339) Dec 13 01:33:16.686283 augenrules[1360]: No rules Dec 13 01:33:16.687440 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 01:33:16.687839 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Dec 13 01:33:16.688712 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1339) Dec 13 01:33:16.694316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:33:16.700021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:33:16.709926 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:33:16.719548 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:33:16.724406 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:33:16.728424 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:33:16.732099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:33:16.736898 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:33:16.739968 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:33:16.752028 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:33:16.753320 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:33:16.753989 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:33:16.755733 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:33:16.757187 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:33:16.757314 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:33:16.759100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 01:33:16.759222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:33:16.760557 systemd-resolved[1310]: Positive Trust Anchors: Dec 13 01:33:16.760568 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:33:16.760600 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:33:16.761616 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:33:16.762186 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:33:16.767683 systemd-resolved[1310]: Defaulting to hostname 'linux'. Dec 13 01:33:16.769993 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:33:16.773400 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:33:16.789119 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:33:16.790446 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:33:16.790507 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:33:16.791667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 01:33:16.795600 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:33:16.798358 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:33:16.822889 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:33:16.823263 systemd-networkd[1386]: lo: Link UP Dec 13 01:33:16.823270 systemd-networkd[1386]: lo: Gained carrier Dec 13 01:33:16.824073 systemd-networkd[1386]: Enumeration completed Dec 13 01:33:16.824191 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:33:16.824526 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:33:16.824533 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:33:16.825162 systemd-networkd[1386]: eth0: Link UP Dec 13 01:33:16.825170 systemd-networkd[1386]: eth0: Gained carrier Dec 13 01:33:16.825184 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:33:16.825531 systemd[1]: Reached target network.target - Network. Dec 13 01:33:16.831897 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:33:16.843765 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:33:16.848325 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:33:16.387139 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:33:16.392640 systemd-journald[1116]: Time jumped backwards, rotating. Dec 13 01:33:16.388515 systemd-timesyncd[1388]: Initial clock synchronization to Fri 2024-12-13 01:33:16.387050 UTC. 
Dec 13 01:33:16.388575 systemd-resolved[1310]: Clock change detected. Flushing caches.
Dec 13 01:33:16.388634 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:33:16.391031 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:33:16.394195 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:33:16.396995 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:33:16.398124 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:33:16.399474 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:33:16.400837 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:33:16.402165 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:33:16.403303 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:33:16.404569 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:33:16.405795 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:33:16.405829 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:33:16.406679 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:33:16.408497 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:33:16.410838 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:33:16.416693 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:33:16.418957 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:33:16.420503 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:33:16.421720 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:33:16.422696 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:33:16.423670 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:33:16.423706 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:33:16.424657 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:33:16.426684 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:33:16.427993 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:33:16.428912 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:33:16.433728 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:33:16.434838 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:33:16.435962 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:33:16.438710 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:33:16.443995 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:33:16.450115 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:33:16.451147 jq[1414]: false
Dec 13 01:33:16.455972 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:33:16.460075 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:33:16.460593 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:33:16.461325 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found loop3
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found loop4
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found loop5
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found vda
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found vda1
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found vda2
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found vda3
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found usr
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found vda4
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found vda6
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found vda7
Dec 13 01:33:16.462890 extend-filesystems[1415]: Found vda9
Dec 13 01:33:16.462890 extend-filesystems[1415]: Checking size of /dev/vda9
Dec 13 01:33:16.476267 dbus-daemon[1413]: [system] SELinux support is enabled
Dec 13 01:33:16.465097 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:33:16.469982 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:33:16.476614 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:33:16.480586 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:33:16.481812 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:33:16.482219 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:33:16.482364 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:33:16.483527 jq[1429]: true
Dec 13 01:33:16.487117 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:33:16.487857 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:33:16.502928 extend-filesystems[1415]: Resized partition /dev/vda9
Dec 13 01:33:16.502796 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:33:16.513986 jq[1437]: true
Dec 13 01:33:16.514145 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:33:16.515246 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1358)
Dec 13 01:33:16.515268 update_engine[1427]: I20241213 01:33:16.503871 1427 main.cc:92] Flatcar Update Engine starting
Dec 13 01:33:16.515268 update_engine[1427]: I20241213 01:33:16.509042 1427 update_check_scheduler.cc:74] Next update check in 10m20s
Dec 13 01:33:16.502840 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:33:16.506140 (ntainerd)[1438]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:33:16.507124 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:33:16.507144 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:33:16.510134 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:33:16.520972 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:33:16.530219 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 01:33:16.530261 tar[1434]: linux-arm64/helm
Dec 13 01:33:16.532429 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 01:33:16.536569 systemd-logind[1423]: New seat seat0.
Dec 13 01:33:16.542187 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:33:16.555956 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 01:33:16.591195 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 01:33:16.591195 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:33:16.591195 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 01:33:16.598281 extend-filesystems[1415]: Resized filesystem in /dev/vda9
Dec 13 01:33:16.593182 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:33:16.593431 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:33:16.602089 bash[1467]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:33:16.603219 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:33:16.606858 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 01:33:16.617730 locksmithd[1450]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:33:16.702101 containerd[1438]: time="2024-12-13T01:33:16.701976598Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:33:16.725667 containerd[1438]: time="2024-12-13T01:33:16.725612878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:33:16.727167 containerd[1438]: time="2024-12-13T01:33:16.727125598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:33:16.727199 containerd[1438]: time="2024-12-13T01:33:16.727166598Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:33:16.727199 containerd[1438]: time="2024-12-13T01:33:16.727183918Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:33:16.727379 containerd[1438]: time="2024-12-13T01:33:16.727355958Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:33:16.727403 containerd[1438]: time="2024-12-13T01:33:16.727384478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:33:16.727462 containerd[1438]: time="2024-12-13T01:33:16.727444358Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:33:16.727487 containerd[1438]: time="2024-12-13T01:33:16.727460398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:33:16.727659 containerd[1438]: time="2024-12-13T01:33:16.727628718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:33:16.727683 containerd[1438]: time="2024-12-13T01:33:16.727657118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:33:16.727683 containerd[1438]: time="2024-12-13T01:33:16.727670638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:33:16.727713 containerd[1438]: time="2024-12-13T01:33:16.727682278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:33:16.727794 containerd[1438]: time="2024-12-13T01:33:16.727757918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:33:16.727991 containerd[1438]: time="2024-12-13T01:33:16.727970438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:33:16.728094 containerd[1438]: time="2024-12-13T01:33:16.728074598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:33:16.728120 containerd[1438]: time="2024-12-13T01:33:16.728093118Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:33:16.728185 containerd[1438]: time="2024-12-13T01:33:16.728167558Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:33:16.728233 containerd[1438]: time="2024-12-13T01:33:16.728218438Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:33:16.731754 containerd[1438]: time="2024-12-13T01:33:16.731726758Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:33:16.731813 containerd[1438]: time="2024-12-13T01:33:16.731797118Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:33:16.731836 containerd[1438]: time="2024-12-13T01:33:16.731816958Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:33:16.731836 containerd[1438]: time="2024-12-13T01:33:16.731831518Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:33:16.731904 containerd[1438]: time="2024-12-13T01:33:16.731887838Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:33:16.732066 containerd[1438]: time="2024-12-13T01:33:16.732047558Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:33:16.733538 containerd[1438]: time="2024-12-13T01:33:16.733499838Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:33:16.734104 containerd[1438]: time="2024-12-13T01:33:16.734069198Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:33:16.734137 containerd[1438]: time="2024-12-13T01:33:16.734108558Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:33:16.734137 containerd[1438]: time="2024-12-13T01:33:16.734122958Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:33:16.734174 containerd[1438]: time="2024-12-13T01:33:16.734136638Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:33:16.734174 containerd[1438]: time="2024-12-13T01:33:16.734154198Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:33:16.734205 containerd[1438]: time="2024-12-13T01:33:16.734174878Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:33:16.734205 containerd[1438]: time="2024-12-13T01:33:16.734188518Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:33:16.734239 containerd[1438]: time="2024-12-13T01:33:16.734202598Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:33:16.734239 containerd[1438]: time="2024-12-13T01:33:16.734218558Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:33:16.734273 containerd[1438]: time="2024-12-13T01:33:16.734231118Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:33:16.734273 containerd[1438]: time="2024-12-13T01:33:16.734250638Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:33:16.734273 containerd[1438]: time="2024-12-13T01:33:16.734270678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734321 containerd[1438]: time="2024-12-13T01:33:16.734284438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734321 containerd[1438]: time="2024-12-13T01:33:16.734297038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734321 containerd[1438]: time="2024-12-13T01:33:16.734315518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734374 containerd[1438]: time="2024-12-13T01:33:16.734332798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734374 containerd[1438]: time="2024-12-13T01:33:16.734346918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734374 containerd[1438]: time="2024-12-13T01:33:16.734359238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734374 containerd[1438]: time="2024-12-13T01:33:16.734372038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734435 containerd[1438]: time="2024-12-13T01:33:16.734390998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734435 containerd[1438]: time="2024-12-13T01:33:16.734406478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734435 containerd[1438]: time="2024-12-13T01:33:16.734417838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734435 containerd[1438]: time="2024-12-13T01:33:16.734429918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734498 containerd[1438]: time="2024-12-13T01:33:16.734442238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734498 containerd[1438]: time="2024-12-13T01:33:16.734457678Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:33:16.734498 containerd[1438]: time="2024-12-13T01:33:16.734484758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734555 containerd[1438]: time="2024-12-13T01:33:16.734496398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734555 containerd[1438]: time="2024-12-13T01:33:16.734519558Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:33:16.734671 containerd[1438]: time="2024-12-13T01:33:16.734651518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:33:16.734836 containerd[1438]: time="2024-12-13T01:33:16.734674398Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:33:16.734863 containerd[1438]: time="2024-12-13T01:33:16.734836558Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:33:16.734863 containerd[1438]: time="2024-12-13T01:33:16.734858358Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:33:16.734897 containerd[1438]: time="2024-12-13T01:33:16.734868598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.734897 containerd[1438]: time="2024-12-13T01:33:16.734881238Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:33:16.734897 containerd[1438]: time="2024-12-13T01:33:16.734891958Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:33:16.734954 containerd[1438]: time="2024-12-13T01:33:16.734905238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:33:16.735325 containerd[1438]: time="2024-12-13T01:33:16.735266038Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:33:16.735431 containerd[1438]: time="2024-12-13T01:33:16.735333718Z" level=info msg="Connect containerd service"
Dec 13 01:33:16.737190 containerd[1438]: time="2024-12-13T01:33:16.737098398Z" level=info msg="using legacy CRI server"
Dec 13 01:33:16.737190 containerd[1438]: time="2024-12-13T01:33:16.737112958Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:33:16.737277 containerd[1438]: time="2024-12-13T01:33:16.737205438Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:33:16.738063 containerd[1438]: time="2024-12-13T01:33:16.738028998Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:33:16.738432 containerd[1438]: time="2024-12-13T01:33:16.738399638Z" level=info msg="Start subscribing containerd event"
Dec 13 01:33:16.738470 containerd[1438]: time="2024-12-13T01:33:16.738448038Z" level=info msg="Start recovering state"
Dec 13 01:33:16.738757 containerd[1438]: time="2024-12-13T01:33:16.738737798Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:33:16.738827 containerd[1438]: time="2024-12-13T01:33:16.738799638Z" level=info msg="Start event monitor"
Dec 13 01:33:16.738827 containerd[1438]: time="2024-12-13T01:33:16.738811558Z" level=info msg="Start snapshots syncer"
Dec 13 01:33:16.738827 containerd[1438]: time="2024-12-13T01:33:16.738821198Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:33:16.738827 containerd[1438]: time="2024-12-13T01:33:16.738827998Z" level=info msg="Start streaming server"
Dec 13 01:33:16.739117 containerd[1438]: time="2024-12-13T01:33:16.739085638Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:33:16.739203 containerd[1438]: time="2024-12-13T01:33:16.739159598Z" level=info msg="containerd successfully booted in 0.037947s"
Dec 13 01:33:16.739794 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:33:16.893743 tar[1434]: linux-arm64/LICENSE
Dec 13 01:33:16.893743 tar[1434]: linux-arm64/README.md
Dec 13 01:33:16.906518 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:33:16.992536 sshd_keygen[1439]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:33:17.010898 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:33:17.028098 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:33:17.033085 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:33:17.033851 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:33:17.036340 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:33:17.049445 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:33:17.052348 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:33:17.054584 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 01:33:17.055920 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:33:18.130985 systemd-networkd[1386]: eth0: Gained IPv6LL
Dec 13 01:33:18.133519 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:33:18.136317 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:33:18.147086 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 01:33:18.149383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:33:18.151434 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:33:18.165084 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 01:33:18.165257 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 01:33:18.167867 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:33:18.172045 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:33:18.638650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:33:18.640287 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:33:18.644835 systemd[1]: Startup finished in 561ms (kernel) + 4.611s (initrd) + 3.821s (userspace) = 8.994s.
Dec 13 01:33:18.645418 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:33:19.104267 kubelet[1526]: E1213 01:33:19.104209 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:33:19.107168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:33:19.107306 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:33:22.656694 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:33:22.657880 systemd[1]: Started sshd@0-10.0.0.77:22-10.0.0.1:57510.service - OpenSSH per-connection server daemon (10.0.0.1:57510).
Dec 13 01:33:22.726035 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 57510 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:33:22.727829 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:22.744086 systemd-logind[1423]: New session 1 of user core.
Dec 13 01:33:22.744889 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:33:22.763185 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:33:22.771724 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:33:22.774827 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:33:22.780695 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:33:22.851351 systemd[1544]: Queued start job for default target default.target.
Dec 13 01:33:22.861701 systemd[1544]: Created slice app.slice - User Application Slice.
Dec 13 01:33:22.861746 systemd[1544]: Reached target paths.target - Paths.
Dec 13 01:33:22.861757 systemd[1544]: Reached target timers.target - Timers.
Dec 13 01:33:22.862977 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:33:22.872266 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:33:22.872326 systemd[1544]: Reached target sockets.target - Sockets.
Dec 13 01:33:22.872338 systemd[1544]: Reached target basic.target - Basic System.
Dec 13 01:33:22.872371 systemd[1544]: Reached target default.target - Main User Target.
Dec 13 01:33:22.872397 systemd[1544]: Startup finished in 86ms.
Dec 13 01:33:22.872647 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:33:22.873839 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:33:22.934627 systemd[1]: Started sshd@1-10.0.0.77:22-10.0.0.1:57520.service - OpenSSH per-connection server daemon (10.0.0.1:57520).
Dec 13 01:33:22.975574 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 57520 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:33:22.976790 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:22.980718 systemd-logind[1423]: New session 2 of user core.
Dec 13 01:33:22.988899 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:33:23.040957 sshd[1555]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:23.058019 systemd[1]: sshd@1-10.0.0.77:22-10.0.0.1:57520.service: Deactivated successfully.
Dec 13 01:33:23.059376 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:33:23.061977 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:33:23.063615 systemd[1]: Started sshd@2-10.0.0.77:22-10.0.0.1:57530.service - OpenSSH per-connection server daemon (10.0.0.1:57530).
Dec 13 01:33:23.065847 systemd-logind[1423]: Removed session 2.
Dec 13 01:33:23.096879 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 57530 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:33:23.098116 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:23.102942 systemd-logind[1423]: New session 3 of user core.
Dec 13 01:33:23.112918 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:33:23.160562 sshd[1562]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:23.175007 systemd[1]: sshd@2-10.0.0.77:22-10.0.0.1:57530.service: Deactivated successfully.
Dec 13 01:33:23.176272 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:33:23.178778 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:33:23.179842 systemd[1]: Started sshd@3-10.0.0.77:22-10.0.0.1:57542.service - OpenSSH per-connection server daemon (10.0.0.1:57542).
Dec 13 01:33:23.180438 systemd-logind[1423]: Removed session 3.
Dec 13 01:33:23.212198 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 57542 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:33:23.213731 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:23.217270 systemd-logind[1423]: New session 4 of user core.
Dec 13 01:33:23.222913 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:33:23.274062 sshd[1569]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:23.286123 systemd[1]: sshd@3-10.0.0.77:22-10.0.0.1:57542.service: Deactivated successfully.
Dec 13 01:33:23.287587 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:33:23.290037 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:33:23.308035 systemd[1]: Started sshd@4-10.0.0.77:22-10.0.0.1:57544.service - OpenSSH per-connection server daemon (10.0.0.1:57544).
Dec 13 01:33:23.308745 systemd-logind[1423]: Removed session 4. Dec 13 01:33:23.337099 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 57544 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:33:23.338308 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:23.342253 systemd-logind[1423]: New session 5 of user core. Dec 13 01:33:23.356964 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:33:23.425452 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:33:23.425742 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:33:23.735027 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:33:23.735134 (dockerd)[1596]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:33:24.000516 dockerd[1596]: time="2024-12-13T01:33:24.000387718Z" level=info msg="Starting up" Dec 13 01:33:24.138425 dockerd[1596]: time="2024-12-13T01:33:24.138367838Z" level=info msg="Loading containers: start." Dec 13 01:33:24.223799 kernel: Initializing XFRM netlink socket Dec 13 01:33:24.285892 systemd-networkd[1386]: docker0: Link UP Dec 13 01:33:24.308209 dockerd[1596]: time="2024-12-13T01:33:24.308104638Z" level=info msg="Loading containers: done." 
Dec 13 01:33:24.319331 dockerd[1596]: time="2024-12-13T01:33:24.318931998Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:33:24.319331 dockerd[1596]: time="2024-12-13T01:33:24.319016838Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:33:24.319331 dockerd[1596]: time="2024-12-13T01:33:24.319107918Z" level=info msg="Daemon has completed initialization" Dec 13 01:33:24.319161 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3579109099-merged.mount: Deactivated successfully. Dec 13 01:33:24.345605 dockerd[1596]: time="2024-12-13T01:33:24.345482718Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:33:24.345682 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:33:25.015933 containerd[1438]: time="2024-12-13T01:33:25.015881478Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:33:25.775759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616173643.mount: Deactivated successfully. 
Dec 13 01:33:27.521129 containerd[1438]: time="2024-12-13T01:33:27.521081238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:27.522110 containerd[1438]: time="2024-12-13T01:33:27.521878238Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864012" Dec 13 01:33:27.522722 containerd[1438]: time="2024-12-13T01:33:27.522689238Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:27.525695 containerd[1438]: time="2024-12-13T01:33:27.525639638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:27.526865 containerd[1438]: time="2024-12-13T01:33:27.526836358Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 2.51091064s" Dec 13 01:33:27.526917 containerd[1438]: time="2024-12-13T01:33:27.526874118Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Dec 13 01:33:27.544544 containerd[1438]: time="2024-12-13T01:33:27.544506718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:33:29.267114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Dec 13 01:33:29.272253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:33:29.362758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:29.366484 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:33:29.413785 kubelet[1824]: E1213 01:33:29.413614 1824 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:33:29.417317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:33:29.417458 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:33:29.877159 containerd[1438]: time="2024-12-13T01:33:29.877098078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:29.877741 containerd[1438]: time="2024-12-13T01:33:29.877695198Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900696" Dec 13 01:33:29.878959 containerd[1438]: time="2024-12-13T01:33:29.878921358Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:29.881731 containerd[1438]: time="2024-12-13T01:33:29.881692358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:29.882911 containerd[1438]: time="2024-12-13T01:33:29.882874478Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.33832632s" Dec 13 01:33:29.882945 containerd[1438]: time="2024-12-13T01:33:29.882909918Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Dec 13 01:33:29.901435 containerd[1438]: time="2024-12-13T01:33:29.901394038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:33:31.384584 containerd[1438]: time="2024-12-13T01:33:31.384535478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:31.386294 containerd[1438]: time="2024-12-13T01:33:31.386263478Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164334" Dec 13 01:33:31.387202 containerd[1438]: time="2024-12-13T01:33:31.387169758Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:31.390505 containerd[1438]: time="2024-12-13T01:33:31.390429998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:31.392508 containerd[1438]: time="2024-12-13T01:33:31.391941118Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", 
repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.49050692s" Dec 13 01:33:31.392508 containerd[1438]: time="2024-12-13T01:33:31.391977038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Dec 13 01:33:31.409941 containerd[1438]: time="2024-12-13T01:33:31.409906678Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:33:32.386256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3999693589.mount: Deactivated successfully. Dec 13 01:33:32.578991 containerd[1438]: time="2024-12-13T01:33:32.578942838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:32.579918 containerd[1438]: time="2024-12-13T01:33:32.579743478Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662013" Dec 13 01:33:32.580635 containerd[1438]: time="2024-12-13T01:33:32.580608278Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:32.583847 containerd[1438]: time="2024-12-13T01:33:32.583149118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:32.583922 containerd[1438]: time="2024-12-13T01:33:32.583846078Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.17377464s" Dec 13 01:33:32.583922 containerd[1438]: time="2024-12-13T01:33:32.583881198Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 01:33:32.601427 containerd[1438]: time="2024-12-13T01:33:32.601400078Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:33:33.095284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046782220.mount: Deactivated successfully. Dec 13 01:33:33.852642 containerd[1438]: time="2024-12-13T01:33:33.852467558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:33.853588 containerd[1438]: time="2024-12-13T01:33:33.853526558Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Dec 13 01:33:33.854205 containerd[1438]: time="2024-12-13T01:33:33.854168318Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:33.857110 containerd[1438]: time="2024-12-13T01:33:33.857070558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:33.858269 containerd[1438]: time="2024-12-13T01:33:33.858231078Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.25679748s" Dec 13 01:33:33.858269 containerd[1438]: time="2024-12-13T01:33:33.858266718Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:33:33.876833 containerd[1438]: time="2024-12-13T01:33:33.876790758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:33:34.300176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634337687.mount: Deactivated successfully. Dec 13 01:33:34.303806 containerd[1438]: time="2024-12-13T01:33:34.303679358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:34.304423 containerd[1438]: time="2024-12-13T01:33:34.304394078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Dec 13 01:33:34.305053 containerd[1438]: time="2024-12-13T01:33:34.304989918Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:34.307675 containerd[1438]: time="2024-12-13T01:33:34.307631198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:34.309439 containerd[1438]: time="2024-12-13T01:33:34.309242398Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 432.41588ms" Dec 13 
01:33:34.309439 containerd[1438]: time="2024-12-13T01:33:34.309272118Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:33:34.327009 containerd[1438]: time="2024-12-13T01:33:34.326820998Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:33:34.852824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1722457785.mount: Deactivated successfully. Dec 13 01:33:37.708564 containerd[1438]: time="2024-12-13T01:33:37.708382798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:37.709479 containerd[1438]: time="2024-12-13T01:33:37.709405438Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Dec 13 01:33:37.710078 containerd[1438]: time="2024-12-13T01:33:37.710020918Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:37.715866 containerd[1438]: time="2024-12-13T01:33:37.715810998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:37.717079 containerd[1438]: time="2024-12-13T01:33:37.717044678Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.39018724s" Dec 13 01:33:37.717079 containerd[1438]: time="2024-12-13T01:33:37.717078198Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image 
reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Dec 13 01:33:39.517129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:33:39.526972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:33:39.616568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:39.620345 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:33:39.679431 kubelet[2052]: E1213 01:33:39.679383 2052 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:33:39.682099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:33:39.682237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:33:42.879689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:42.888979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:33:42.907163 systemd[1]: Reloading requested from client PID 2069 ('systemctl') (unit session-5.scope)... Dec 13 01:33:42.907311 systemd[1]: Reloading... Dec 13 01:33:42.984811 zram_generator::config[2108]: No configuration found. Dec 13 01:33:43.199650 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:33:43.251922 systemd[1]: Reloading finished in 344 ms. 
Dec 13 01:33:43.296823 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:33:43.296885 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:33:43.297065 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:43.299055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:33:43.392511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:43.396608 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:33:43.432105 kubelet[2154]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:33:43.432105 kubelet[2154]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:33:43.432105 kubelet[2154]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:33:43.434444 kubelet[2154]: I1213 01:33:43.434391 2154 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:33:44.051360 kubelet[2154]: I1213 01:33:44.051315 2154 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:33:44.051360 kubelet[2154]: I1213 01:33:44.051349 2154 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:33:44.051580 kubelet[2154]: I1213 01:33:44.051553 2154 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:33:44.081019 kubelet[2154]: I1213 01:33:44.080916 2154 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:33:44.081019 kubelet[2154]: E1213 01:33:44.080931 2154 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.77:6443: connect: connection refused Dec 13 01:33:44.089878 kubelet[2154]: I1213 01:33:44.089851 2154 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:33:44.090973 kubelet[2154]: I1213 01:33:44.090926 2154 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:33:44.091134 kubelet[2154]: I1213 01:33:44.090969 2154 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:33:44.091214 kubelet[2154]: I1213 01:33:44.091200 2154 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 
01:33:44.091214 kubelet[2154]: I1213 01:33:44.091209 2154 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:33:44.091485 kubelet[2154]: I1213 01:33:44.091456 2154 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:33:44.092868 kubelet[2154]: I1213 01:33:44.092845 2154 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:33:44.092908 kubelet[2154]: I1213 01:33:44.092869 2154 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:33:44.093192 kubelet[2154]: I1213 01:33:44.093180 2154 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:33:44.093847 kubelet[2154]: I1213 01:33:44.093329 2154 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:33:44.093847 kubelet[2154]: W1213 01:33:44.093493 2154 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Dec 13 01:33:44.093847 kubelet[2154]: E1213 01:33:44.093543 2154 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Dec 13 01:33:44.093847 kubelet[2154]: W1213 01:33:44.093756 2154 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Dec 13 01:33:44.093847 kubelet[2154]: E1213 01:33:44.093815 2154 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection 
refused Dec 13 01:33:44.094183 kubelet[2154]: I1213 01:33:44.094162 2154 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:33:44.094540 kubelet[2154]: I1213 01:33:44.094527 2154 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:33:44.094667 kubelet[2154]: W1213 01:33:44.094628 2154 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:33:44.095754 kubelet[2154]: I1213 01:33:44.095456 2154 server.go:1264] "Started kubelet" Dec 13 01:33:44.096318 kubelet[2154]: I1213 01:33:44.096092 2154 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:33:44.096318 kubelet[2154]: I1213 01:33:44.096132 2154 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:33:44.096500 kubelet[2154]: I1213 01:33:44.096483 2154 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:33:44.097141 kubelet[2154]: I1213 01:33:44.097119 2154 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:33:44.098306 kubelet[2154]: E1213 01:33:44.098064 2154 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181098956dafd1ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:33:44.095429038 +0000 UTC m=+0.695780441,LastTimestamp:2024-12-13 01:33:44.095429038 +0000 UTC 
m=+0.695780441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:33:44.099389 kubelet[2154]: I1213 01:33:44.099355 2154 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:33:44.099968 kubelet[2154]: I1213 01:33:44.099793 2154 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:33:44.099968 kubelet[2154]: I1213 01:33:44.099869 2154 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:33:44.100052 kubelet[2154]: I1213 01:33:44.100041 2154 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:33:44.100902 kubelet[2154]: W1213 01:33:44.100855 2154 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Dec 13 01:33:44.100902 kubelet[2154]: E1213 01:33:44.100900 2154 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Dec 13 01:33:44.102240 kubelet[2154]: E1213 01:33:44.101933 2154 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:33:44.102240 kubelet[2154]: E1213 01:33:44.102055 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="200ms" Dec 13 01:33:44.102781 kubelet[2154]: I1213 01:33:44.102540 2154 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:33:44.102781 kubelet[2154]: I1213 01:33:44.102626 2154 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:33:44.103554 kubelet[2154]: I1213 01:33:44.103536 2154 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:33:44.113919 kubelet[2154]: I1213 01:33:44.113844 2154 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:33:44.115580 kubelet[2154]: I1213 01:33:44.114829 2154 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:33:44.115580 kubelet[2154]: I1213 01:33:44.114975 2154 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:33:44.115580 kubelet[2154]: I1213 01:33:44.114992 2154 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:33:44.115580 kubelet[2154]: E1213 01:33:44.115032 2154 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:33:44.115580 kubelet[2154]: W1213 01:33:44.115512 2154 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Dec 13 01:33:44.115580 kubelet[2154]: E1213 01:33:44.115551 2154 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Dec 13 01:33:44.118713 kubelet[2154]: I1213 01:33:44.118691 2154 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:33:44.118713 kubelet[2154]: I1213 01:33:44.118709 2154 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:33:44.118872 kubelet[2154]: I1213 01:33:44.118725 2154 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:33:44.184705 kubelet[2154]: I1213 01:33:44.184673 2154 policy_none.go:49] "None policy: Start" Dec 13 01:33:44.185505 kubelet[2154]: I1213 01:33:44.185440 2154 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:33:44.185505 kubelet[2154]: I1213 01:33:44.185475 2154 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:33:44.190814 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 01:33:44.201439 kubelet[2154]: I1213 01:33:44.201404 2154 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:33:44.201740 kubelet[2154]: E1213 01:33:44.201696 2154 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Dec 13 01:33:44.204394 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:33:44.206982 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:33:44.215630 kubelet[2154]: E1213 01:33:44.215596 2154 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 01:33:44.217513 kubelet[2154]: I1213 01:33:44.217403 2154 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:33:44.217784 kubelet[2154]: I1213 01:33:44.217587 2154 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:33:44.217784 kubelet[2154]: I1213 01:33:44.217688 2154 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:33:44.219214 kubelet[2154]: E1213 01:33:44.219190 2154 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 13 01:33:44.303164 kubelet[2154]: E1213 01:33:44.303059 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="400ms"
Dec 13 01:33:44.403318 kubelet[2154]: I1213 01:33:44.403297 2154 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:33:44.403655 kubelet[2154]: E1213 01:33:44.403617 2154 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Dec 13 01:33:44.415738 kubelet[2154]: I1213 01:33:44.415677 2154 topology_manager.go:215] "Topology Admit Handler" podUID="3ec28259f8993dc17fd0bcab338ee7b1" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 01:33:44.416651 kubelet[2154]: I1213 01:33:44.416625 2154 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 01:33:44.417449 kubelet[2154]: I1213 01:33:44.417427 2154 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 01:33:44.423081 systemd[1]: Created slice kubepods-burstable-pod3ec28259f8993dc17fd0bcab338ee7b1.slice - libcontainer container kubepods-burstable-pod3ec28259f8993dc17fd0bcab338ee7b1.slice.
Dec 13 01:33:44.449541 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice.
Dec 13 01:33:44.461950 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice.
Dec 13 01:33:44.503555 kubelet[2154]: I1213 01:33:44.503486 2154 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:33:44.503555 kubelet[2154]: I1213 01:33:44.503516 2154 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ec28259f8993dc17fd0bcab338ee7b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ec28259f8993dc17fd0bcab338ee7b1\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:33:44.503555 kubelet[2154]: I1213 01:33:44.503541 2154 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ec28259f8993dc17fd0bcab338ee7b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ec28259f8993dc17fd0bcab338ee7b1\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:33:44.503555 kubelet[2154]: I1213 01:33:44.503560 2154 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:33:44.504046 kubelet[2154]: I1213 01:33:44.503577 2154 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:33:44.504046 kubelet[2154]: I1213 01:33:44.503619 2154 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:33:44.504046 kubelet[2154]: I1213 01:33:44.503680 2154 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ec28259f8993dc17fd0bcab338ee7b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ec28259f8993dc17fd0bcab338ee7b1\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:33:44.504046 kubelet[2154]: I1213 01:33:44.503730 2154 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:33:44.504046 kubelet[2154]: I1213 01:33:44.503756 2154 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:33:44.704155 kubelet[2154]: E1213 01:33:44.704033 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="800ms"
Dec 13 01:33:44.746452 kubelet[2154]: E1213 01:33:44.746402 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:44.747075 containerd[1438]: time="2024-12-13T01:33:44.746908878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ec28259f8993dc17fd0bcab338ee7b1,Namespace:kube-system,Attempt:0,}"
Dec 13 01:33:44.760502 kubelet[2154]: E1213 01:33:44.760482 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:44.760864 containerd[1438]: time="2024-12-13T01:33:44.760826558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}"
Dec 13 01:33:44.764128 kubelet[2154]: E1213 01:33:44.764102 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:44.764447 containerd[1438]: time="2024-12-13T01:33:44.764391318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}"
Dec 13 01:33:44.804945 kubelet[2154]: I1213 01:33:44.804888 2154 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:33:44.805221 kubelet[2154]: E1213 01:33:44.805188 2154 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Dec 13 01:33:44.917200 kubelet[2154]: W1213 01:33:44.917113 2154 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 01:33:44.917200 kubelet[2154]: E1213 01:33:44.917179 2154 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 01:33:44.994560 kubelet[2154]: W1213 01:33:44.994444 2154 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 01:33:44.994560 kubelet[2154]: E1213 01:33:44.994498 2154 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 01:33:45.196854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522058154.mount: Deactivated successfully.
Dec 13 01:33:45.202497 containerd[1438]: time="2024-12-13T01:33:45.202460198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:33:45.203715 containerd[1438]: time="2024-12-13T01:33:45.203679798Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:33:45.204545 containerd[1438]: time="2024-12-13T01:33:45.204256998Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:33:45.205435 containerd[1438]: time="2024-12-13T01:33:45.205402278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:33:45.206308 containerd[1438]: time="2024-12-13T01:33:45.206016638Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:33:45.207317 containerd[1438]: time="2024-12-13T01:33:45.207287438Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:33:45.207708 containerd[1438]: time="2024-12-13T01:33:45.207685958Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Dec 13 01:33:45.210737 containerd[1438]: time="2024-12-13T01:33:45.210679318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:33:45.212194 containerd[1438]: time="2024-12-13T01:33:45.212168838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 447.70476ms"
Dec 13 01:33:45.212844 containerd[1438]: time="2024-12-13T01:33:45.212757398Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 451.87644ms"
Dec 13 01:33:45.215473 containerd[1438]: time="2024-12-13T01:33:45.215433318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 468.44728ms"
Dec 13 01:33:45.348621 containerd[1438]: time="2024-12-13T01:33:45.348424118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:33:45.348621 containerd[1438]: time="2024-12-13T01:33:45.348488678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:33:45.348621 containerd[1438]: time="2024-12-13T01:33:45.348511438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:45.348621 containerd[1438]: time="2024-12-13T01:33:45.348610078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:45.349858 containerd[1438]: time="2024-12-13T01:33:45.349600598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:33:45.349858 containerd[1438]: time="2024-12-13T01:33:45.349643398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:33:45.349858 containerd[1438]: time="2024-12-13T01:33:45.349654198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:45.349858 containerd[1438]: time="2024-12-13T01:33:45.349723958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:45.349858 containerd[1438]: time="2024-12-13T01:33:45.349648758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:33:45.350088 containerd[1438]: time="2024-12-13T01:33:45.350024238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:33:45.350088 containerd[1438]: time="2024-12-13T01:33:45.350048678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:45.350309 containerd[1438]: time="2024-12-13T01:33:45.350207878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:45.371935 systemd[1]: Started cri-containerd-1779a3d26aff2bcccd8843103f65b705afc8f5127192ae9fa6184bdaa2e58d2e.scope - libcontainer container 1779a3d26aff2bcccd8843103f65b705afc8f5127192ae9fa6184bdaa2e58d2e.
Dec 13 01:33:45.373479 systemd[1]: Started cri-containerd-50c7a8149b81f362f2a1bf3acb2b86ead17028f7e3bd0c4fa449c34fb9a9f7de.scope - libcontainer container 50c7a8149b81f362f2a1bf3acb2b86ead17028f7e3bd0c4fa449c34fb9a9f7de.
Dec 13 01:33:45.374711 systemd[1]: Started cri-containerd-55de22ae9a7fe44efd63683370f051ba11cfca35afb54c41640d22e855347461.scope - libcontainer container 55de22ae9a7fe44efd63683370f051ba11cfca35afb54c41640d22e855347461.
Dec 13 01:33:45.383141 kubelet[2154]: W1213 01:33:45.382996 2154 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 01:33:45.383352 kubelet[2154]: E1213 01:33:45.383316 2154 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 01:33:45.407117 containerd[1438]: time="2024-12-13T01:33:45.406294398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1779a3d26aff2bcccd8843103f65b705afc8f5127192ae9fa6184bdaa2e58d2e\""
Dec 13 01:33:45.407848 kubelet[2154]: E1213 01:33:45.407817 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:45.408686 containerd[1438]: time="2024-12-13T01:33:45.408472438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ec28259f8993dc17fd0bcab338ee7b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"50c7a8149b81f362f2a1bf3acb2b86ead17028f7e3bd0c4fa449c34fb9a9f7de\""
Dec 13 01:33:45.409203 kubelet[2154]: E1213 01:33:45.409186 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:45.410452 containerd[1438]: time="2024-12-13T01:33:45.410423038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"55de22ae9a7fe44efd63683370f051ba11cfca35afb54c41640d22e855347461\""
Dec 13 01:33:45.410958 containerd[1438]: time="2024-12-13T01:33:45.410927438Z" level=info msg="CreateContainer within sandbox \"1779a3d26aff2bcccd8843103f65b705afc8f5127192ae9fa6184bdaa2e58d2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:33:45.411201 kubelet[2154]: E1213 01:33:45.411181 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:45.412721 containerd[1438]: time="2024-12-13T01:33:45.412692998Z" level=info msg="CreateContainer within sandbox \"55de22ae9a7fe44efd63683370f051ba11cfca35afb54c41640d22e855347461\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:33:45.413259 containerd[1438]: time="2024-12-13T01:33:45.413159198Z" level=info msg="CreateContainer within sandbox \"50c7a8149b81f362f2a1bf3acb2b86ead17028f7e3bd0c4fa449c34fb9a9f7de\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:33:45.427562 containerd[1438]: time="2024-12-13T01:33:45.427521358Z" level=info msg="CreateContainer within sandbox \"55de22ae9a7fe44efd63683370f051ba11cfca35afb54c41640d22e855347461\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"770639ae217c8165fa97cfb91657cb1990868f09c2a17199b99a8606c79296a3\""
Dec 13 01:33:45.428610 containerd[1438]: time="2024-12-13T01:33:45.428585798Z" level=info msg="StartContainer for \"770639ae217c8165fa97cfb91657cb1990868f09c2a17199b99a8606c79296a3\""
Dec 13 01:33:45.430948 containerd[1438]: time="2024-12-13T01:33:45.430910078Z" level=info msg="CreateContainer within sandbox \"1779a3d26aff2bcccd8843103f65b705afc8f5127192ae9fa6184bdaa2e58d2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d7cb05be0092d0a00944608c3264f8c47f3c4457179c611663356ef5d914ec81\""
Dec 13 01:33:45.431320 containerd[1438]: time="2024-12-13T01:33:45.431281358Z" level=info msg="StartContainer for \"d7cb05be0092d0a00944608c3264f8c47f3c4457179c611663356ef5d914ec81\""
Dec 13 01:33:45.433994 containerd[1438]: time="2024-12-13T01:33:45.433420678Z" level=info msg="CreateContainer within sandbox \"50c7a8149b81f362f2a1bf3acb2b86ead17028f7e3bd0c4fa449c34fb9a9f7de\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"36461db829166c8dd2667f1a8409929bab153deeee773cd649179ceeeca9ab68\""
Dec 13 01:33:45.433994 containerd[1438]: time="2024-12-13T01:33:45.433895358Z" level=info msg="StartContainer for \"36461db829166c8dd2667f1a8409929bab153deeee773cd649179ceeeca9ab68\""
Dec 13 01:33:45.452922 systemd[1]: Started cri-containerd-770639ae217c8165fa97cfb91657cb1990868f09c2a17199b99a8606c79296a3.scope - libcontainer container 770639ae217c8165fa97cfb91657cb1990868f09c2a17199b99a8606c79296a3.
Dec 13 01:33:45.456323 systemd[1]: Started cri-containerd-36461db829166c8dd2667f1a8409929bab153deeee773cd649179ceeeca9ab68.scope - libcontainer container 36461db829166c8dd2667f1a8409929bab153deeee773cd649179ceeeca9ab68.
Dec 13 01:33:45.457549 systemd[1]: Started cri-containerd-d7cb05be0092d0a00944608c3264f8c47f3c4457179c611663356ef5d914ec81.scope - libcontainer container d7cb05be0092d0a00944608c3264f8c47f3c4457179c611663356ef5d914ec81.
Dec 13 01:33:45.506670 kubelet[2154]: E1213 01:33:45.505122 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="1.6s"
Dec 13 01:33:45.512099 containerd[1438]: time="2024-12-13T01:33:45.507131078Z" level=info msg="StartContainer for \"770639ae217c8165fa97cfb91657cb1990868f09c2a17199b99a8606c79296a3\" returns successfully"
Dec 13 01:33:45.512099 containerd[1438]: time="2024-12-13T01:33:45.507264158Z" level=info msg="StartContainer for \"36461db829166c8dd2667f1a8409929bab153deeee773cd649179ceeeca9ab68\" returns successfully"
Dec 13 01:33:45.512099 containerd[1438]: time="2024-12-13T01:33:45.507283958Z" level=info msg="StartContainer for \"d7cb05be0092d0a00944608c3264f8c47f3c4457179c611663356ef5d914ec81\" returns successfully"
Dec 13 01:33:45.610629 kubelet[2154]: I1213 01:33:45.607097 2154 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:33:45.610629 kubelet[2154]: E1213 01:33:45.607418 2154 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Dec 13 01:33:45.647744 kubelet[2154]: W1213 01:33:45.647222 2154 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 01:33:45.647744 kubelet[2154]: E1213 01:33:45.647296 2154 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Dec 13 01:33:46.125660 kubelet[2154]: E1213 01:33:46.125631 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:46.126678 kubelet[2154]: E1213 01:33:46.126649 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:46.128948 kubelet[2154]: E1213 01:33:46.128755 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:47.131104 kubelet[2154]: E1213 01:33:47.130365 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:47.131104 kubelet[2154]: E1213 01:33:47.130978 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:33:47.209055 kubelet[2154]: I1213 01:33:47.208761 2154 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:33:47.327737 kubelet[2154]: E1213 01:33:47.327706 2154 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 13 01:33:47.522033 kubelet[2154]: I1213 01:33:47.520722 2154 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:33:47.529387 kubelet[2154]: E1213 01:33:47.529306 2154 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:33:47.629444 kubelet[2154]: E1213 01:33:47.629407 2154 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:33:48.096070 kubelet[2154]: I1213 01:33:48.096026 2154 apiserver.go:52] "Watching apiserver"
Dec 13 01:33:48.100081 kubelet[2154]: I1213 01:33:48.100061 2154 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 01:33:49.365331 systemd[1]: Reloading requested from client PID 2433 ('systemctl') (unit session-5.scope)...
Dec 13 01:33:49.365349 systemd[1]: Reloading...
Dec 13 01:33:49.426892 zram_generator::config[2475]: No configuration found.
Dec 13 01:33:49.526868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:33:49.590311 systemd[1]: Reloading finished in 224 ms.
Dec 13 01:33:49.623063 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:33:49.628619 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:33:49.628862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:33:49.628912 systemd[1]: kubelet.service: Consumed 1.053s CPU time, 115.7M memory peak, 0B memory swap peak.
Dec 13 01:33:49.636034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:33:49.725867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:33:49.729679 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:33:49.767257 kubelet[2514]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:33:49.767257 kubelet[2514]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:33:49.767257 kubelet[2514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:33:49.767584 kubelet[2514]: I1213 01:33:49.767286 2514 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:33:49.771295 kubelet[2514]: I1213 01:33:49.771255 2514 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 01:33:49.771295 kubelet[2514]: I1213 01:33:49.771278 2514 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:33:49.772897 kubelet[2514]: I1213 01:33:49.771445 2514 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 01:33:49.774860 kubelet[2514]: I1213 01:33:49.774835 2514 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 01:33:49.775956 kubelet[2514]: I1213 01:33:49.775933 2514 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:33:49.780347 kubelet[2514]: I1213 01:33:49.780319 2514 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:33:49.780525 kubelet[2514]: I1213 01:33:49.780503 2514 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:33:49.780671 kubelet[2514]: I1213 01:33:49.780527 2514 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:33:49.780739 kubelet[2514]: I1213 01:33:49.780678 2514 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:33:49.780739 kubelet[2514]: I1213 01:33:49.780687 2514 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:33:49.780739 kubelet[2514]: I1213 01:33:49.780716 2514 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:33:49.780830 kubelet[2514]: I1213 01:33:49.780819 2514 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 01:33:49.780862 kubelet[2514]: I1213 01:33:49.780833 2514 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:33:49.780862 kubelet[2514]: I1213 01:33:49.780858 2514 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:33:49.780902 kubelet[2514]: I1213 01:33:49.780874 2514 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:33:49.786464 kubelet[2514]: I1213 01:33:49.782009 2514 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:33:49.786464 kubelet[2514]: I1213 01:33:49.782290 2514 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:33:49.786464 kubelet[2514]: I1213 01:33:49.782873 2514 server.go:1264] "Started kubelet"
Dec 13 01:33:49.786464 kubelet[2514]: I1213 01:33:49.785524 2514 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:33:49.786464 kubelet[2514]: I1213 01:33:49.785734 2514 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:33:49.786464 kubelet[2514]: E1213 01:33:49.786285 2514 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:33:49.786464 kubelet[2514]: I1213 01:33:49.786339 2514 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:33:49.788826 kubelet[2514]: I1213 01:33:49.787215 2514 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:33:49.788826 kubelet[2514]: I1213 01:33:49.787829 2514 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 01:33:49.793041 kubelet[2514]: I1213 01:33:49.793008 2514 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:33:49.793422 kubelet[2514]: I1213 01:33:49.793394 2514 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 01:33:49.793561 kubelet[2514]: I1213 01:33:49.793542 2514 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:33:49.799564 kubelet[2514]: I1213 01:33:49.799534 2514 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:33:49.799649 kubelet[2514]: I1213 01:33:49.799627 2514 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:33:49.806109 kubelet[2514]: I1213 01:33:49.806075 2514 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:33:49.806890 kubelet[2514]: I1213 01:33:49.806851 2514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:33:49.810582 kubelet[2514]: I1213 01:33:49.810546 2514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:33:49.810582 kubelet[2514]: I1213 01:33:49.810585 2514 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:33:49.810663 kubelet[2514]: I1213 01:33:49.810603 2514 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 01:33:49.810663 kubelet[2514]: E1213 01:33:49.810647 2514 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:33:49.833740 kubelet[2514]: I1213 01:33:49.833713 2514 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:33:49.833740 kubelet[2514]: I1213 01:33:49.833733 2514 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:33:49.833863 kubelet[2514]: I1213 01:33:49.833752 2514 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:33:49.833923 kubelet[2514]: I1213 01:33:49.833905 2514 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 01:33:49.833950 kubelet[2514]: I1213 01:33:49.833921 2514 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 01:33:49.833950 kubelet[2514]: I1213 01:33:49.833939 2514 policy_none.go:49] "None policy: Start"
Dec 13 01:33:49.834478 kubelet[2514]: I1213 01:33:49.834443 2514 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:33:49.834478 kubelet[2514]: I1213 01:33:49.834471 2514 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:33:49.834613 kubelet[2514]: I1213 01:33:49.834599 2514 state_mem.go:75] "Updated machine memory state"
Dec 13 01:33:49.840527 kubelet[2514]: I1213 01:33:49.840009 2514 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:33:49.840527 kubelet[2514]: I1213 01:33:49.840152 2514 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:33:49.840527 kubelet[2514]: I1213 01:33:49.840236 2514 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:33:49.896857 kubelet[2514]: I1213 01:33:49.896755 2514 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:33:49.902715 kubelet[2514]: I1213 01:33:49.902570 2514 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Dec 13 01:33:49.902715 kubelet[2514]: I1213 01:33:49.902642 2514 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:33:49.910815 kubelet[2514]: I1213 01:33:49.910783 2514 topology_manager.go:215] "Topology Admit Handler" podUID="3ec28259f8993dc17fd0bcab338ee7b1" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 01:33:49.910894 kubelet[2514]: I1213 01:33:49.910871 2514 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 01:33:49.910917 kubelet[2514]: I1213 01:33:49.910904 2514 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 01:33:50.094631 kubelet[2514]: I1213 01:33:50.094532 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:33:50.094631 kubelet[2514]: I1213 01:33:50.094568 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:33:50.094631 kubelet[2514]: I1213
01:33:50.094590 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:33:50.094631 kubelet[2514]: I1213 01:33:50.094610 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:33:50.094631 kubelet[2514]: I1213 01:33:50.094628 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ec28259f8993dc17fd0bcab338ee7b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ec28259f8993dc17fd0bcab338ee7b1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:33:50.094955 kubelet[2514]: I1213 01:33:50.094645 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ec28259f8993dc17fd0bcab338ee7b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ec28259f8993dc17fd0bcab338ee7b1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:33:50.094955 kubelet[2514]: I1213 01:33:50.094659 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:33:50.095149 kubelet[2514]: 
I1213 01:33:50.095067 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:33:50.095149 kubelet[2514]: I1213 01:33:50.095108 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ec28259f8993dc17fd0bcab338ee7b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ec28259f8993dc17fd0bcab338ee7b1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:33:50.230601 kubelet[2514]: E1213 01:33:50.230195 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:50.230601 kubelet[2514]: E1213 01:33:50.230368 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:50.230601 kubelet[2514]: E1213 01:33:50.230408 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:50.782132 kubelet[2514]: I1213 01:33:50.781756 2514 apiserver.go:52] "Watching apiserver" Dec 13 01:33:50.793727 kubelet[2514]: I1213 01:33:50.793677 2514 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:33:50.823017 kubelet[2514]: E1213 01:33:50.822667 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:50.823017 
kubelet[2514]: E1213 01:33:50.822951 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:50.827985 kubelet[2514]: E1213 01:33:50.827928 2514 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:33:50.828708 kubelet[2514]: E1213 01:33:50.828360 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:50.850464 kubelet[2514]: I1213 01:33:50.850406 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.850380028 podStartE2EDuration="1.850380028s" podCreationTimestamp="2024-12-13 01:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:50.843487719 +0000 UTC m=+1.110914724" watchObservedRunningTime="2024-12-13 01:33:50.850380028 +0000 UTC m=+1.117807073" Dec 13 01:33:50.850805 kubelet[2514]: I1213 01:33:50.850713 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.850705149 podStartE2EDuration="1.850705149s" podCreationTimestamp="2024-12-13 01:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:50.850209627 +0000 UTC m=+1.117636632" watchObservedRunningTime="2024-12-13 01:33:50.850705149 +0000 UTC m=+1.118132154" Dec 13 01:33:50.872411 kubelet[2514]: I1213 01:33:50.872344 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=1.872311799 podStartE2EDuration="1.872311799s" podCreationTimestamp="2024-12-13 01:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:50.858249421 +0000 UTC m=+1.125676426" watchObservedRunningTime="2024-12-13 01:33:50.872311799 +0000 UTC m=+1.139738764" Dec 13 01:33:51.245711 sudo[1579]: pam_unix(sudo:session): session closed for user root Dec 13 01:33:51.248098 sshd[1576]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:51.251604 systemd[1]: sshd@4-10.0.0.77:22-10.0.0.1:57544.service: Deactivated successfully. Dec 13 01:33:51.253221 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:33:51.254103 systemd[1]: session-5.scope: Consumed 6.642s CPU time, 190.9M memory peak, 0B memory swap peak. Dec 13 01:33:51.254531 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:33:51.255325 systemd-logind[1423]: Removed session 5. 
Dec 13 01:33:51.824306 kubelet[2514]: E1213 01:33:51.824275 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:51.825082 kubelet[2514]: E1213 01:33:51.824713 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:53.199427 kubelet[2514]: E1213 01:33:53.196458 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:33:53.464930 kubelet[2514]: E1213 01:33:53.464673 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:01.315260 kubelet[2514]: E1213 01:34:01.314747 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:01.659357 update_engine[1427]: I20241213 01:34:01.659219 1427 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:34:01.676821 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2593) Dec 13 01:34:01.706804 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2592) Dec 13 01:34:03.204189 kubelet[2514]: E1213 01:34:03.204119 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:03.475552 kubelet[2514]: E1213 01:34:03.475446 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:05.419286 kubelet[2514]: I1213 01:34:05.419111 2514 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:34:05.419625 containerd[1438]: time="2024-12-13T01:34:05.419522013Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:34:05.419819 kubelet[2514]: I1213 01:34:05.419680 2514 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:34:05.938465 kubelet[2514]: I1213 01:34:05.938378 2514 topology_manager.go:215] "Topology Admit Handler" podUID="0fe67492-40d3-443f-8f02-19be05eb5f2a" podNamespace="kube-system" podName="kube-proxy-pw492" Dec 13 01:34:05.941886 kubelet[2514]: I1213 01:34:05.941830 2514 topology_manager.go:215] "Topology Admit Handler" podUID="335a6109-27a0-4236-92c9-4c3177b88f96" podNamespace="kube-flannel" podName="kube-flannel-ds-7v4ks" Dec 13 01:34:05.954472 systemd[1]: Created slice kubepods-besteffort-pod0fe67492_40d3_443f_8f02_19be05eb5f2a.slice - libcontainer container kubepods-besteffort-pod0fe67492_40d3_443f_8f02_19be05eb5f2a.slice. 
Dec 13 01:34:05.964863 systemd[1]: Created slice kubepods-burstable-pod335a6109_27a0_4236_92c9_4c3177b88f96.slice - libcontainer container kubepods-burstable-pod335a6109_27a0_4236_92c9_4c3177b88f96.slice. Dec 13 01:34:05.994421 kubelet[2514]: I1213 01:34:05.994372 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/335a6109-27a0-4236-92c9-4c3177b88f96-run\") pod \"kube-flannel-ds-7v4ks\" (UID: \"335a6109-27a0-4236-92c9-4c3177b88f96\") " pod="kube-flannel/kube-flannel-ds-7v4ks" Dec 13 01:34:05.994589 kubelet[2514]: I1213 01:34:05.994444 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/335a6109-27a0-4236-92c9-4c3177b88f96-cni-plugin\") pod \"kube-flannel-ds-7v4ks\" (UID: \"335a6109-27a0-4236-92c9-4c3177b88f96\") " pod="kube-flannel/kube-flannel-ds-7v4ks" Dec 13 01:34:05.994589 kubelet[2514]: I1213 01:34:05.994469 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pltzd\" (UniqueName: \"kubernetes.io/projected/335a6109-27a0-4236-92c9-4c3177b88f96-kube-api-access-pltzd\") pod \"kube-flannel-ds-7v4ks\" (UID: \"335a6109-27a0-4236-92c9-4c3177b88f96\") " pod="kube-flannel/kube-flannel-ds-7v4ks" Dec 13 01:34:05.994589 kubelet[2514]: I1213 01:34:05.994485 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fe67492-40d3-443f-8f02-19be05eb5f2a-kube-proxy\") pod \"kube-proxy-pw492\" (UID: \"0fe67492-40d3-443f-8f02-19be05eb5f2a\") " pod="kube-system/kube-proxy-pw492" Dec 13 01:34:05.994589 kubelet[2514]: I1213 01:34:05.994521 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0fe67492-40d3-443f-8f02-19be05eb5f2a-xtables-lock\") pod \"kube-proxy-pw492\" (UID: \"0fe67492-40d3-443f-8f02-19be05eb5f2a\") " pod="kube-system/kube-proxy-pw492" Dec 13 01:34:05.994589 kubelet[2514]: I1213 01:34:05.994538 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fe67492-40d3-443f-8f02-19be05eb5f2a-lib-modules\") pod \"kube-proxy-pw492\" (UID: \"0fe67492-40d3-443f-8f02-19be05eb5f2a\") " pod="kube-system/kube-proxy-pw492" Dec 13 01:34:05.994740 kubelet[2514]: I1213 01:34:05.994603 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/335a6109-27a0-4236-92c9-4c3177b88f96-xtables-lock\") pod \"kube-flannel-ds-7v4ks\" (UID: \"335a6109-27a0-4236-92c9-4c3177b88f96\") " pod="kube-flannel/kube-flannel-ds-7v4ks" Dec 13 01:34:05.994740 kubelet[2514]: I1213 01:34:05.994628 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/335a6109-27a0-4236-92c9-4c3177b88f96-cni\") pod \"kube-flannel-ds-7v4ks\" (UID: \"335a6109-27a0-4236-92c9-4c3177b88f96\") " pod="kube-flannel/kube-flannel-ds-7v4ks" Dec 13 01:34:05.994740 kubelet[2514]: I1213 01:34:05.994644 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/335a6109-27a0-4236-92c9-4c3177b88f96-flannel-cfg\") pod \"kube-flannel-ds-7v4ks\" (UID: \"335a6109-27a0-4236-92c9-4c3177b88f96\") " pod="kube-flannel/kube-flannel-ds-7v4ks" Dec 13 01:34:06.095613 kubelet[2514]: I1213 01:34:06.095566 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sfw8\" (UniqueName: 
\"kubernetes.io/projected/0fe67492-40d3-443f-8f02-19be05eb5f2a-kube-api-access-6sfw8\") pod \"kube-proxy-pw492\" (UID: \"0fe67492-40d3-443f-8f02-19be05eb5f2a\") " pod="kube-system/kube-proxy-pw492" Dec 13 01:34:06.263230 kubelet[2514]: E1213 01:34:06.263102 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:06.263946 containerd[1438]: time="2024-12-13T01:34:06.263897558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pw492,Uid:0fe67492-40d3-443f-8f02-19be05eb5f2a,Namespace:kube-system,Attempt:0,}" Dec 13 01:34:06.266772 kubelet[2514]: E1213 01:34:06.266725 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:06.268212 containerd[1438]: time="2024-12-13T01:34:06.268020164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7v4ks,Uid:335a6109-27a0-4236-92c9-4c3177b88f96,Namespace:kube-flannel,Attempt:0,}" Dec 13 01:34:06.314210 containerd[1438]: time="2024-12-13T01:34:06.313949992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:06.314210 containerd[1438]: time="2024-12-13T01:34:06.314024312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:06.314210 containerd[1438]: time="2024-12-13T01:34:06.314039232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:06.314210 containerd[1438]: time="2024-12-13T01:34:06.314132232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:06.315159 containerd[1438]: time="2024-12-13T01:34:06.314809913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:06.315159 containerd[1438]: time="2024-12-13T01:34:06.314861433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:06.315159 containerd[1438]: time="2024-12-13T01:34:06.314877633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:06.315159 containerd[1438]: time="2024-12-13T01:34:06.314953433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:06.347012 systemd[1]: Started cri-containerd-21184b80ac8b72880be2762a2963941864ddf590589068c8524a5b474755a269.scope - libcontainer container 21184b80ac8b72880be2762a2963941864ddf590589068c8524a5b474755a269. Dec 13 01:34:06.349028 systemd[1]: Started cri-containerd-6251d22f4da9307fde07adf151f9a299336bcbcf4a33b6395520dbb74573246d.scope - libcontainer container 6251d22f4da9307fde07adf151f9a299336bcbcf4a33b6395520dbb74573246d. 
Dec 13 01:34:06.378925 containerd[1438]: time="2024-12-13T01:34:06.378831048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pw492,Uid:0fe67492-40d3-443f-8f02-19be05eb5f2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"21184b80ac8b72880be2762a2963941864ddf590589068c8524a5b474755a269\"" Dec 13 01:34:06.379515 kubelet[2514]: E1213 01:34:06.379495 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:06.381647 containerd[1438]: time="2024-12-13T01:34:06.381616892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7v4ks,Uid:335a6109-27a0-4236-92c9-4c3177b88f96,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"6251d22f4da9307fde07adf151f9a299336bcbcf4a33b6395520dbb74573246d\"" Dec 13 01:34:06.382500 kubelet[2514]: E1213 01:34:06.382481 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:06.384331 containerd[1438]: time="2024-12-13T01:34:06.384206696Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 01:34:06.384331 containerd[1438]: time="2024-12-13T01:34:06.384279656Z" level=info msg="CreateContainer within sandbox \"21184b80ac8b72880be2762a2963941864ddf590589068c8524a5b474755a269\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:34:06.396182 containerd[1438]: time="2024-12-13T01:34:06.396145553Z" level=info msg="CreateContainer within sandbox \"21184b80ac8b72880be2762a2963941864ddf590589068c8524a5b474755a269\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea92eb15ade34818d60a253ff0b2879a99b7f2bb3f8d09de1421c6b2e9207f22\"" Dec 13 01:34:06.397828 containerd[1438]: time="2024-12-13T01:34:06.397654796Z" level=info msg="StartContainer for 
\"ea92eb15ade34818d60a253ff0b2879a99b7f2bb3f8d09de1421c6b2e9207f22\"" Dec 13 01:34:06.418976 systemd[1]: Started cri-containerd-ea92eb15ade34818d60a253ff0b2879a99b7f2bb3f8d09de1421c6b2e9207f22.scope - libcontainer container ea92eb15ade34818d60a253ff0b2879a99b7f2bb3f8d09de1421c6b2e9207f22. Dec 13 01:34:06.443508 containerd[1438]: time="2024-12-13T01:34:06.443457703Z" level=info msg="StartContainer for \"ea92eb15ade34818d60a253ff0b2879a99b7f2bb3f8d09de1421c6b2e9207f22\" returns successfully" Dec 13 01:34:06.846780 kubelet[2514]: E1213 01:34:06.846722 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:06.856519 kubelet[2514]: I1213 01:34:06.856471 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pw492" podStartSLOduration=1.856453913 podStartE2EDuration="1.856453913s" podCreationTimestamp="2024-12-13 01:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:34:06.856230113 +0000 UTC m=+17.123657118" watchObservedRunningTime="2024-12-13 01:34:06.856453913 +0000 UTC m=+17.123880918" Dec 13 01:34:07.419713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058031257.mount: Deactivated successfully. 
Dec 13 01:34:07.447582 containerd[1438]: time="2024-12-13T01:34:07.447476265Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:07.449054 containerd[1438]: time="2024-12-13T01:34:07.449010427Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Dec 13 01:34:07.449629 containerd[1438]: time="2024-12-13T01:34:07.449591228Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:07.451980 containerd[1438]: time="2024-12-13T01:34:07.451942831Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:07.452884 containerd[1438]: time="2024-12-13T01:34:07.452855473Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.068613297s" Dec 13 01:34:07.452931 containerd[1438]: time="2024-12-13T01:34:07.452884993Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Dec 13 01:34:07.455280 containerd[1438]: time="2024-12-13T01:34:07.455185756Z" level=info msg="CreateContainer within sandbox \"6251d22f4da9307fde07adf151f9a299336bcbcf4a33b6395520dbb74573246d\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 01:34:07.465288 containerd[1438]: 
time="2024-12-13T01:34:07.465235850Z" level=info msg="CreateContainer within sandbox \"6251d22f4da9307fde07adf151f9a299336bcbcf4a33b6395520dbb74573246d\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"50793d2b543bbbf57bbb91a649da913798e80d108fce4f0e8759d6d16e68c213\"" Dec 13 01:34:07.466062 containerd[1438]: time="2024-12-13T01:34:07.465865571Z" level=info msg="StartContainer for \"50793d2b543bbbf57bbb91a649da913798e80d108fce4f0e8759d6d16e68c213\"" Dec 13 01:34:07.490222 systemd[1]: Started cri-containerd-50793d2b543bbbf57bbb91a649da913798e80d108fce4f0e8759d6d16e68c213.scope - libcontainer container 50793d2b543bbbf57bbb91a649da913798e80d108fce4f0e8759d6d16e68c213. Dec 13 01:34:07.518315 containerd[1438]: time="2024-12-13T01:34:07.518271363Z" level=info msg="StartContainer for \"50793d2b543bbbf57bbb91a649da913798e80d108fce4f0e8759d6d16e68c213\" returns successfully" Dec 13 01:34:07.519874 systemd[1]: cri-containerd-50793d2b543bbbf57bbb91a649da913798e80d108fce4f0e8759d6d16e68c213.scope: Deactivated successfully. 
Dec 13 01:34:07.563614 containerd[1438]: time="2024-12-13T01:34:07.563543426Z" level=info msg="shim disconnected" id=50793d2b543bbbf57bbb91a649da913798e80d108fce4f0e8759d6d16e68c213 namespace=k8s.io Dec 13 01:34:07.563614 containerd[1438]: time="2024-12-13T01:34:07.563605466Z" level=warning msg="cleaning up after shim disconnected" id=50793d2b543bbbf57bbb91a649da913798e80d108fce4f0e8759d6d16e68c213 namespace=k8s.io Dec 13 01:34:07.563614 containerd[1438]: time="2024-12-13T01:34:07.563615626Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:34:07.849361 kubelet[2514]: E1213 01:34:07.849231 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:07.851226 containerd[1438]: time="2024-12-13T01:34:07.851122504Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 01:34:08.901099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424410901.mount: Deactivated successfully. 
Dec 13 01:34:10.024821 containerd[1438]: time="2024-12-13T01:34:10.024747815Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:10.025381 containerd[1438]: time="2024-12-13T01:34:10.025338015Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Dec 13 01:34:10.026472 containerd[1438]: time="2024-12-13T01:34:10.025982056Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:10.029003 containerd[1438]: time="2024-12-13T01:34:10.028972180Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:10.030176 containerd[1438]: time="2024-12-13T01:34:10.030143501Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.178980277s" Dec 13 01:34:10.030219 containerd[1438]: time="2024-12-13T01:34:10.030180461Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Dec 13 01:34:10.032434 containerd[1438]: time="2024-12-13T01:34:10.032347983Z" level=info msg="CreateContainer within sandbox \"6251d22f4da9307fde07adf151f9a299336bcbcf4a33b6395520dbb74573246d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:34:10.044517 containerd[1438]: time="2024-12-13T01:34:10.044281597Z" level=info msg="CreateContainer within 
sandbox \"6251d22f4da9307fde07adf151f9a299336bcbcf4a33b6395520dbb74573246d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"512c5e86754a5232a41b8fc198449daaef8287625d16feb86cd44a4d13799075\"" Dec 13 01:34:10.046152 containerd[1438]: time="2024-12-13T01:34:10.046080319Z" level=info msg="StartContainer for \"512c5e86754a5232a41b8fc198449daaef8287625d16feb86cd44a4d13799075\"" Dec 13 01:34:10.078974 systemd[1]: Started cri-containerd-512c5e86754a5232a41b8fc198449daaef8287625d16feb86cd44a4d13799075.scope - libcontainer container 512c5e86754a5232a41b8fc198449daaef8287625d16feb86cd44a4d13799075. Dec 13 01:34:10.102965 systemd[1]: cri-containerd-512c5e86754a5232a41b8fc198449daaef8287625d16feb86cd44a4d13799075.scope: Deactivated successfully. Dec 13 01:34:10.126172 containerd[1438]: time="2024-12-13T01:34:10.126128810Z" level=info msg="StartContainer for \"512c5e86754a5232a41b8fc198449daaef8287625d16feb86cd44a4d13799075\" returns successfully" Dec 13 01:34:10.138663 kubelet[2514]: I1213 01:34:10.138635 2514 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:34:10.142221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-512c5e86754a5232a41b8fc198449daaef8287625d16feb86cd44a4d13799075-rootfs.mount: Deactivated successfully. 
Dec 13 01:34:10.145349 containerd[1438]: time="2024-12-13T01:34:10.145288232Z" level=info msg="shim disconnected" id=512c5e86754a5232a41b8fc198449daaef8287625d16feb86cd44a4d13799075 namespace=k8s.io Dec 13 01:34:10.145470 containerd[1438]: time="2024-12-13T01:34:10.145392272Z" level=warning msg="cleaning up after shim disconnected" id=512c5e86754a5232a41b8fc198449daaef8287625d16feb86cd44a4d13799075 namespace=k8s.io Dec 13 01:34:10.145470 containerd[1438]: time="2024-12-13T01:34:10.145404792Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:34:10.160820 kubelet[2514]: I1213 01:34:10.159667 2514 topology_manager.go:215] "Topology Admit Handler" podUID="2e6b87df-6246-450d-bc9b-cba1814d3bcd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ldgmm" Dec 13 01:34:10.160820 kubelet[2514]: I1213 01:34:10.160049 2514 topology_manager.go:215] "Topology Admit Handler" podUID="ff291dfd-cedc-4a1d-854c-a544c7263c0a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zctbw" Dec 13 01:34:10.185224 systemd[1]: Created slice kubepods-burstable-podff291dfd_cedc_4a1d_854c_a544c7263c0a.slice - libcontainer container kubepods-burstable-podff291dfd_cedc_4a1d_854c_a544c7263c0a.slice. Dec 13 01:34:10.189434 systemd[1]: Created slice kubepods-burstable-pod2e6b87df_6246_450d_bc9b_cba1814d3bcd.slice - libcontainer container kubepods-burstable-pod2e6b87df_6246_450d_bc9b_cba1814d3bcd.slice. 
Dec 13 01:34:10.331452 kubelet[2514]: I1213 01:34:10.331328 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e6b87df-6246-450d-bc9b-cba1814d3bcd-config-volume\") pod \"coredns-7db6d8ff4d-ldgmm\" (UID: \"2e6b87df-6246-450d-bc9b-cba1814d3bcd\") " pod="kube-system/coredns-7db6d8ff4d-ldgmm" Dec 13 01:34:10.331452 kubelet[2514]: I1213 01:34:10.331386 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq92j\" (UniqueName: \"kubernetes.io/projected/ff291dfd-cedc-4a1d-854c-a544c7263c0a-kube-api-access-gq92j\") pod \"coredns-7db6d8ff4d-zctbw\" (UID: \"ff291dfd-cedc-4a1d-854c-a544c7263c0a\") " pod="kube-system/coredns-7db6d8ff4d-zctbw" Dec 13 01:34:10.331452 kubelet[2514]: I1213 01:34:10.331408 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh25z\" (UniqueName: \"kubernetes.io/projected/2e6b87df-6246-450d-bc9b-cba1814d3bcd-kube-api-access-bh25z\") pod \"coredns-7db6d8ff4d-ldgmm\" (UID: \"2e6b87df-6246-450d-bc9b-cba1814d3bcd\") " pod="kube-system/coredns-7db6d8ff4d-ldgmm" Dec 13 01:34:10.331452 kubelet[2514]: I1213 01:34:10.331425 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff291dfd-cedc-4a1d-854c-a544c7263c0a-config-volume\") pod \"coredns-7db6d8ff4d-zctbw\" (UID: \"ff291dfd-cedc-4a1d-854c-a544c7263c0a\") " pod="kube-system/coredns-7db6d8ff4d-zctbw" Dec 13 01:34:10.488547 kubelet[2514]: E1213 01:34:10.488429 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:10.489299 containerd[1438]: time="2024-12-13T01:34:10.489260505Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zctbw,Uid:ff291dfd-cedc-4a1d-854c-a544c7263c0a,Namespace:kube-system,Attempt:0,}" Dec 13 01:34:10.492090 kubelet[2514]: E1213 01:34:10.491953 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:10.492461 containerd[1438]: time="2024-12-13T01:34:10.492320388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ldgmm,Uid:2e6b87df-6246-450d-bc9b-cba1814d3bcd,Namespace:kube-system,Attempt:0,}" Dec 13 01:34:10.579204 containerd[1438]: time="2024-12-13T01:34:10.579077607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ldgmm,Uid:2e6b87df-6246-450d-bc9b-cba1814d3bcd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60e78fc92a2a45a25770dc4f99d24dc234f148c51dc147d101316e3c2ee939f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:34:10.580466 kubelet[2514]: E1213 01:34:10.579324 2514 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60e78fc92a2a45a25770dc4f99d24dc234f148c51dc147d101316e3c2ee939f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:34:10.580466 kubelet[2514]: E1213 01:34:10.579393 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60e78fc92a2a45a25770dc4f99d24dc234f148c51dc147d101316e3c2ee939f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-ldgmm" Dec 13 01:34:10.580466 kubelet[2514]: E1213 01:34:10.579429 2514 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60e78fc92a2a45a25770dc4f99d24dc234f148c51dc147d101316e3c2ee939f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-ldgmm" Dec 13 01:34:10.580466 kubelet[2514]: E1213 01:34:10.579467 2514 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ldgmm_kube-system(2e6b87df-6246-450d-bc9b-cba1814d3bcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ldgmm_kube-system(2e6b87df-6246-450d-bc9b-cba1814d3bcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60e78fc92a2a45a25770dc4f99d24dc234f148c51dc147d101316e3c2ee939f7\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-ldgmm" podUID="2e6b87df-6246-450d-bc9b-cba1814d3bcd" Dec 13 01:34:10.584470 containerd[1438]: time="2024-12-13T01:34:10.584388173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zctbw,Uid:ff291dfd-cedc-4a1d-854c-a544c7263c0a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"614fff7b86e3d8bb2426d17d69e116f6fa42b42c63bdf906d3f26db37deec0b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:34:10.584606 kubelet[2514]: E1213 01:34:10.584578 2514 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"614fff7b86e3d8bb2426d17d69e116f6fa42b42c63bdf906d3f26db37deec0b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or 
directory" Dec 13 01:34:10.584783 kubelet[2514]: E1213 01:34:10.584622 2514 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"614fff7b86e3d8bb2426d17d69e116f6fa42b42c63bdf906d3f26db37deec0b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-zctbw" Dec 13 01:34:10.584783 kubelet[2514]: E1213 01:34:10.584639 2514 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"614fff7b86e3d8bb2426d17d69e116f6fa42b42c63bdf906d3f26db37deec0b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-zctbw" Dec 13 01:34:10.584783 kubelet[2514]: E1213 01:34:10.584672 2514 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zctbw_kube-system(ff291dfd-cedc-4a1d-854c-a544c7263c0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zctbw_kube-system(ff291dfd-cedc-4a1d-854c-a544c7263c0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"614fff7b86e3d8bb2426d17d69e116f6fa42b42c63bdf906d3f26db37deec0b2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-zctbw" podUID="ff291dfd-cedc-4a1d-854c-a544c7263c0a" Dec 13 01:34:10.855463 kubelet[2514]: E1213 01:34:10.855374 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:10.857933 containerd[1438]: time="2024-12-13T01:34:10.857881965Z" level=info msg="CreateContainer within sandbox 
\"6251d22f4da9307fde07adf151f9a299336bcbcf4a33b6395520dbb74573246d\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 01:34:10.871031 containerd[1438]: time="2024-12-13T01:34:10.870975020Z" level=info msg="CreateContainer within sandbox \"6251d22f4da9307fde07adf151f9a299336bcbcf4a33b6395520dbb74573246d\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"a5af59907e6d179be48e042125c4caa71b5c598c41fca840de3b69895f4b9742\"" Dec 13 01:34:10.871622 containerd[1438]: time="2024-12-13T01:34:10.871510021Z" level=info msg="StartContainer for \"a5af59907e6d179be48e042125c4caa71b5c598c41fca840de3b69895f4b9742\"" Dec 13 01:34:10.896950 systemd[1]: Started cri-containerd-a5af59907e6d179be48e042125c4caa71b5c598c41fca840de3b69895f4b9742.scope - libcontainer container a5af59907e6d179be48e042125c4caa71b5c598c41fca840de3b69895f4b9742. Dec 13 01:34:10.923390 containerd[1438]: time="2024-12-13T01:34:10.923214840Z" level=info msg="StartContainer for \"a5af59907e6d179be48e042125c4caa71b5c598c41fca840de3b69895f4b9742\" returns successfully" Dec 13 01:34:11.859861 kubelet[2514]: E1213 01:34:11.859321 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:11.869353 kubelet[2514]: I1213 01:34:11.869249 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-7v4ks" podStartSLOduration=3.22145573 podStartE2EDuration="6.869222338s" podCreationTimestamp="2024-12-13 01:34:05 +0000 UTC" firstStartedPulling="2024-12-13 01:34:06.383283814 +0000 UTC m=+16.650710819" lastFinishedPulling="2024-12-13 01:34:10.031050422 +0000 UTC m=+20.298477427" observedRunningTime="2024-12-13 01:34:11.868122057 +0000 UTC m=+22.135549062" watchObservedRunningTime="2024-12-13 01:34:11.869222338 +0000 UTC m=+22.136649343" Dec 13 01:34:12.019501 systemd-networkd[1386]: flannel.1: Link UP Dec 
13 01:34:12.019511 systemd-networkd[1386]: flannel.1: Gained carrier Dec 13 01:34:12.860829 kubelet[2514]: E1213 01:34:12.860761 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:13.554903 systemd-networkd[1386]: flannel.1: Gained IPv6LL Dec 13 01:34:18.164144 systemd[1]: Started sshd@5-10.0.0.77:22-10.0.0.1:60472.service - OpenSSH per-connection server daemon (10.0.0.1:60472). Dec 13 01:34:18.198141 sshd[3187]: Accepted publickey for core from 10.0.0.1 port 60472 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:34:18.199479 sshd[3187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:18.202875 systemd-logind[1423]: New session 6 of user core. Dec 13 01:34:18.208918 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:34:18.322581 sshd[3187]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:18.326050 systemd[1]: sshd@5-10.0.0.77:22-10.0.0.1:60472.service: Deactivated successfully. Dec 13 01:34:18.328147 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:34:18.328753 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:34:18.329521 systemd-logind[1423]: Removed session 6. 
Dec 13 01:34:22.811577 kubelet[2514]: E1213 01:34:22.811526 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:22.812964 containerd[1438]: time="2024-12-13T01:34:22.812915737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zctbw,Uid:ff291dfd-cedc-4a1d-854c-a544c7263c0a,Namespace:kube-system,Attempt:0,}" Dec 13 01:34:22.850889 systemd-networkd[1386]: cni0: Link UP Dec 13 01:34:22.850896 systemd-networkd[1386]: cni0: Gained carrier Dec 13 01:34:22.852644 systemd-networkd[1386]: cni0: Lost carrier Dec 13 01:34:22.858169 systemd-networkd[1386]: vetha2bb600b: Link UP Dec 13 01:34:22.860245 kernel: cni0: port 1(vetha2bb600b) entered blocking state Dec 13 01:34:22.860362 kernel: cni0: port 1(vetha2bb600b) entered disabled state Dec 13 01:34:22.860399 kernel: vetha2bb600b: entered allmulticast mode Dec 13 01:34:22.860416 kernel: vetha2bb600b: entered promiscuous mode Dec 13 01:34:22.861431 kernel: cni0: port 1(vetha2bb600b) entered blocking state Dec 13 01:34:22.861463 kernel: cni0: port 1(vetha2bb600b) entered forwarding state Dec 13 01:34:22.863072 kernel: cni0: port 1(vetha2bb600b) entered disabled state Dec 13 01:34:22.874599 kernel: cni0: port 1(vetha2bb600b) entered blocking state Dec 13 01:34:22.874674 kernel: cni0: port 1(vetha2bb600b) entered forwarding state Dec 13 01:34:22.874528 systemd-networkd[1386]: vetha2bb600b: Gained carrier Dec 13 01:34:22.874964 systemd-networkd[1386]: cni0: Gained carrier Dec 13 01:34:22.876623 containerd[1438]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, 
GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a938), "name":"cbr0", "type":"bridge"} Dec 13 01:34:22.876623 containerd[1438]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:34:22.893574 containerd[1438]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:34:22.893479060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:22.893574 containerd[1438]: time="2024-12-13T01:34:22.893550580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:22.893574 containerd[1438]: time="2024-12-13T01:34:22.893565980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:22.893734 containerd[1438]: time="2024-12-13T01:34:22.893649180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:22.913472 systemd[1]: Started cri-containerd-d74503cd631fc4e130d4a204949b8b26bfbb9a0aa47232ad2621203c3c5b43f1.scope - libcontainer container d74503cd631fc4e130d4a204949b8b26bfbb9a0aa47232ad2621203c3c5b43f1. 
Dec 13 01:34:22.925450 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:34:22.941571 containerd[1438]: time="2024-12-13T01:34:22.941526005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zctbw,Uid:ff291dfd-cedc-4a1d-854c-a544c7263c0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d74503cd631fc4e130d4a204949b8b26bfbb9a0aa47232ad2621203c3c5b43f1\"" Dec 13 01:34:22.942229 kubelet[2514]: E1213 01:34:22.942203 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:22.949142 containerd[1438]: time="2024-12-13T01:34:22.949007009Z" level=info msg="CreateContainer within sandbox \"d74503cd631fc4e130d4a204949b8b26bfbb9a0aa47232ad2621203c3c5b43f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:34:22.959460 containerd[1438]: time="2024-12-13T01:34:22.959342494Z" level=info msg="CreateContainer within sandbox \"d74503cd631fc4e130d4a204949b8b26bfbb9a0aa47232ad2621203c3c5b43f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35b08b41fe09a1b01935fed4ef8c83abb5d59ceed4577dcf8af05cba6233816f\"" Dec 13 01:34:22.959880 containerd[1438]: time="2024-12-13T01:34:22.959835455Z" level=info msg="StartContainer for \"35b08b41fe09a1b01935fed4ef8c83abb5d59ceed4577dcf8af05cba6233816f\"" Dec 13 01:34:22.982995 systemd[1]: Started cri-containerd-35b08b41fe09a1b01935fed4ef8c83abb5d59ceed4577dcf8af05cba6233816f.scope - libcontainer container 35b08b41fe09a1b01935fed4ef8c83abb5d59ceed4577dcf8af05cba6233816f. 
Dec 13 01:34:23.008836 containerd[1438]: time="2024-12-13T01:34:23.008652720Z" level=info msg="StartContainer for \"35b08b41fe09a1b01935fed4ef8c83abb5d59ceed4577dcf8af05cba6233816f\" returns successfully" Dec 13 01:34:23.337363 systemd[1]: Started sshd@6-10.0.0.77:22-10.0.0.1:58468.service - OpenSSH per-connection server daemon (10.0.0.1:58468). Dec 13 01:34:23.383058 sshd[3344]: Accepted publickey for core from 10.0.0.1 port 58468 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:34:23.385091 sshd[3344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:23.389311 systemd-logind[1423]: New session 7 of user core. Dec 13 01:34:23.398960 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:34:23.519462 sshd[3344]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:23.523195 systemd[1]: sshd@6-10.0.0.77:22-10.0.0.1:58468.service: Deactivated successfully. Dec 13 01:34:23.525245 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:34:23.526240 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:34:23.527182 systemd-logind[1423]: Removed session 7. 
Dec 13 01:34:23.811884 kubelet[2514]: E1213 01:34:23.811533 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:23.812881 containerd[1438]: time="2024-12-13T01:34:23.812649477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ldgmm,Uid:2e6b87df-6246-450d-bc9b-cba1814d3bcd,Namespace:kube-system,Attempt:0,}" Dec 13 01:34:23.843332 systemd-networkd[1386]: veth7837127b: Link UP Dec 13 01:34:23.844900 kernel: cni0: port 2(veth7837127b) entered blocking state Dec 13 01:34:23.844954 kernel: cni0: port 2(veth7837127b) entered disabled state Dec 13 01:34:23.844969 kernel: veth7837127b: entered allmulticast mode Dec 13 01:34:23.846176 kernel: veth7837127b: entered promiscuous mode Dec 13 01:34:23.847197 kernel: cni0: port 2(veth7837127b) entered blocking state Dec 13 01:34:23.847250 kernel: cni0: port 2(veth7837127b) entered forwarding state Dec 13 01:34:23.855920 kernel: cni0: port 2(veth7837127b) entered disabled state Dec 13 01:34:23.855996 kernel: cni0: port 2(veth7837127b) entered blocking state Dec 13 01:34:23.856013 kernel: cni0: port 2(veth7837127b) entered forwarding state Dec 13 01:34:23.856696 systemd-networkd[1386]: veth7837127b: Gained carrier Dec 13 01:34:23.858537 containerd[1438]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a938), "name":"cbr0", "type":"bridge"} Dec 13 01:34:23.858537 containerd[1438]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:34:23.881526 containerd[1438]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:34:23.880550990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:23.881526 containerd[1438]: time="2024-12-13T01:34:23.880605470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:23.881526 containerd[1438]: time="2024-12-13T01:34:23.880619790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:23.882710 kubelet[2514]: E1213 01:34:23.882325 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:23.882837 containerd[1438]: time="2024-12-13T01:34:23.882644311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:23.896075 kubelet[2514]: I1213 01:34:23.895166 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zctbw" podStartSLOduration=18.895153037 podStartE2EDuration="18.895153037s" podCreationTimestamp="2024-12-13 01:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:34:23.895041317 +0000 UTC m=+34.162468322" watchObservedRunningTime="2024-12-13 01:34:23.895153037 +0000 UTC m=+34.162580042" Dec 13 01:34:23.907062 systemd[1]: Started cri-containerd-1ad2bb2536caa9cbb9eb74301466ea2c392e0064200cf9b1c86ab5fe840d721f.scope - libcontainer container 1ad2bb2536caa9cbb9eb74301466ea2c392e0064200cf9b1c86ab5fe840d721f. Dec 13 01:34:23.930037 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:34:23.949508 containerd[1438]: time="2024-12-13T01:34:23.949344384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ldgmm,Uid:2e6b87df-6246-450d-bc9b-cba1814d3bcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ad2bb2536caa9cbb9eb74301466ea2c392e0064200cf9b1c86ab5fe840d721f\"" Dec 13 01:34:23.950683 kubelet[2514]: E1213 01:34:23.950413 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:23.954145 containerd[1438]: time="2024-12-13T01:34:23.954085866Z" level=info msg="CreateContainer within sandbox \"1ad2bb2536caa9cbb9eb74301466ea2c392e0064200cf9b1c86ab5fe840d721f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:34:23.969812 containerd[1438]: time="2024-12-13T01:34:23.969715954Z" level=info msg="CreateContainer within sandbox \"1ad2bb2536caa9cbb9eb74301466ea2c392e0064200cf9b1c86ab5fe840d721f\" 
for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca0027bdea8b36f3f9998964b5c0186930b3b8454a3e019bcaf5d0614ab9d351\"" Dec 13 01:34:23.970242 containerd[1438]: time="2024-12-13T01:34:23.970213834Z" level=info msg="StartContainer for \"ca0027bdea8b36f3f9998964b5c0186930b3b8454a3e019bcaf5d0614ab9d351\"" Dec 13 01:34:23.999098 systemd[1]: Started cri-containerd-ca0027bdea8b36f3f9998964b5c0186930b3b8454a3e019bcaf5d0614ab9d351.scope - libcontainer container ca0027bdea8b36f3f9998964b5c0186930b3b8454a3e019bcaf5d0614ab9d351. Dec 13 01:34:24.026866 containerd[1438]: time="2024-12-13T01:34:24.026677061Z" level=info msg="StartContainer for \"ca0027bdea8b36f3f9998964b5c0186930b3b8454a3e019bcaf5d0614ab9d351\" returns successfully" Dec 13 01:34:24.435152 systemd-networkd[1386]: vetha2bb600b: Gained IPv6LL Dec 13 01:34:24.754940 systemd-networkd[1386]: cni0: Gained IPv6LL Dec 13 01:34:24.889759 kubelet[2514]: E1213 01:34:24.889713 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:24.891169 kubelet[2514]: E1213 01:34:24.890483 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:24.903073 kubelet[2514]: I1213 01:34:24.902837 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ldgmm" podStartSLOduration=19.902820667 podStartE2EDuration="19.902820667s" podCreationTimestamp="2024-12-13 01:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:34:24.902798787 +0000 UTC m=+35.170225952" watchObservedRunningTime="2024-12-13 01:34:24.902820667 +0000 UTC m=+35.170247672" Dec 13 01:34:25.266948 systemd-networkd[1386]: veth7837127b: Gained 
IPv6LL Dec 13 01:34:25.891327 kubelet[2514]: E1213 01:34:25.891201 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:25.891327 kubelet[2514]: E1213 01:34:25.891232 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:26.892802 kubelet[2514]: E1213 01:34:26.892674 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:34:28.533091 systemd[1]: Started sshd@7-10.0.0.77:22-10.0.0.1:58478.service - OpenSSH per-connection server daemon (10.0.0.1:58478). Dec 13 01:34:28.569438 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 58478 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:34:28.570667 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:28.574162 systemd-logind[1423]: New session 8 of user core. Dec 13 01:34:28.582988 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:34:28.688341 sshd[3498]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:28.697133 systemd[1]: sshd@7-10.0.0.77:22-10.0.0.1:58478.service: Deactivated successfully. Dec 13 01:34:28.699082 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:34:28.700215 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:34:28.706124 systemd[1]: Started sshd@8-10.0.0.77:22-10.0.0.1:58488.service - OpenSSH per-connection server daemon (10.0.0.1:58488). Dec 13 01:34:28.707821 systemd-logind[1423]: Removed session 8. 
Dec 13 01:34:28.740332 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 58488 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:34:28.741491 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:28.744906 systemd-logind[1423]: New session 9 of user core.
Dec 13 01:34:28.751944 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:34:28.895668 sshd[3513]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:28.903246 systemd[1]: sshd@8-10.0.0.77:22-10.0.0.1:58488.service: Deactivated successfully.
Dec 13 01:34:28.904671 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:34:28.907263 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:34:28.916043 systemd[1]: Started sshd@9-10.0.0.77:22-10.0.0.1:58494.service - OpenSSH per-connection server daemon (10.0.0.1:58494).
Dec 13 01:34:28.917167 systemd-logind[1423]: Removed session 9.
Dec 13 01:34:28.944643 sshd[3526]: Accepted publickey for core from 10.0.0.1 port 58494 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:34:28.945756 sshd[3526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:28.949348 systemd-logind[1423]: New session 10 of user core.
Dec 13 01:34:28.957957 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:34:29.062578 sshd[3526]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:29.065012 systemd[1]: sshd@9-10.0.0.77:22-10.0.0.1:58494.service: Deactivated successfully.
Dec 13 01:34:29.066442 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:34:29.068888 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:34:29.069805 systemd-logind[1423]: Removed session 10.
Dec 13 01:34:34.073123 systemd[1]: Started sshd@10-10.0.0.77:22-10.0.0.1:54588.service - OpenSSH per-connection server daemon (10.0.0.1:54588).
Dec 13 01:34:34.107884 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 54588 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:34:34.109207 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:34.112903 systemd-logind[1423]: New session 11 of user core.
Dec 13 01:34:34.121913 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:34:34.226938 sshd[3563]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:34.237344 systemd[1]: sshd@10-10.0.0.77:22-10.0.0.1:54588.service: Deactivated successfully.
Dec 13 01:34:34.238829 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:34:34.240080 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:34:34.241218 systemd[1]: Started sshd@11-10.0.0.77:22-10.0.0.1:54596.service - OpenSSH per-connection server daemon (10.0.0.1:54596).
Dec 13 01:34:34.242148 systemd-logind[1423]: Removed session 11.
Dec 13 01:34:34.274922 sshd[3578]: Accepted publickey for core from 10.0.0.1 port 54596 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:34:34.275385 sshd[3578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:34.279642 systemd-logind[1423]: New session 12 of user core.
Dec 13 01:34:34.291932 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:34:34.493673 sshd[3578]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:34.505375 systemd[1]: sshd@11-10.0.0.77:22-10.0.0.1:54596.service: Deactivated successfully.
Dec 13 01:34:34.506914 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:34:34.508127 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:34:34.509420 systemd[1]: Started sshd@12-10.0.0.77:22-10.0.0.1:54600.service - OpenSSH per-connection server daemon (10.0.0.1:54600).
Dec 13 01:34:34.511078 systemd-logind[1423]: Removed session 12.
Dec 13 01:34:34.545718 sshd[3591]: Accepted publickey for core from 10.0.0.1 port 54600 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:34:34.546965 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:34.551088 systemd-logind[1423]: New session 13 of user core.
Dec 13 01:34:34.564958 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:34:35.735304 sshd[3591]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:35.742740 systemd[1]: sshd@12-10.0.0.77:22-10.0.0.1:54600.service: Deactivated successfully.
Dec 13 01:34:35.746738 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:34:35.748432 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:34:35.760215 systemd[1]: Started sshd@13-10.0.0.77:22-10.0.0.1:54604.service - OpenSSH per-connection server daemon (10.0.0.1:54604).
Dec 13 01:34:35.763156 systemd-logind[1423]: Removed session 13.
Dec 13 01:34:35.793008 sshd[3611]: Accepted publickey for core from 10.0.0.1 port 54604 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:34:35.794402 sshd[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:35.797959 systemd-logind[1423]: New session 14 of user core.
Dec 13 01:34:35.808925 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:34:36.013808 sshd[3611]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:36.021373 systemd[1]: sshd@13-10.0.0.77:22-10.0.0.1:54604.service: Deactivated successfully.
Dec 13 01:34:36.023329 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:34:36.025336 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:34:36.037083 systemd[1]: Started sshd@14-10.0.0.77:22-10.0.0.1:54618.service - OpenSSH per-connection server daemon (10.0.0.1:54618).
Dec 13 01:34:36.038212 systemd-logind[1423]: Removed session 14.
Dec 13 01:34:36.067135 sshd[3623]: Accepted publickey for core from 10.0.0.1 port 54618 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:34:36.067632 sshd[3623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:36.071271 systemd-logind[1423]: New session 15 of user core.
Dec 13 01:34:36.077959 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:34:36.186190 sshd[3623]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:36.189413 systemd[1]: sshd@14-10.0.0.77:22-10.0.0.1:54618.service: Deactivated successfully.
Dec 13 01:34:36.191072 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:34:36.191641 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:34:36.192498 systemd-logind[1423]: Removed session 15.
Dec 13 01:34:41.197420 systemd[1]: Started sshd@15-10.0.0.77:22-10.0.0.1:54626.service - OpenSSH per-connection server daemon (10.0.0.1:54626).
Dec 13 01:34:41.234542 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 54626 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:34:41.235028 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:41.238951 systemd-logind[1423]: New session 16 of user core.
Dec 13 01:34:41.247984 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:34:41.363609 sshd[3663]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:41.367639 systemd[1]: sshd@15-10.0.0.77:22-10.0.0.1:54626.service: Deactivated successfully.
Dec 13 01:34:41.375407 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:34:41.376113 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:34:41.377033 systemd-logind[1423]: Removed session 16.
Dec 13 01:34:46.375107 systemd[1]: Started sshd@16-10.0.0.77:22-10.0.0.1:50896.service - OpenSSH per-connection server daemon (10.0.0.1:50896).
Dec 13 01:34:46.409428 sshd[3698]: Accepted publickey for core from 10.0.0.1 port 50896 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:34:46.410615 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:46.414018 systemd-logind[1423]: New session 17 of user core.
Dec 13 01:34:46.422939 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:34:46.527682 sshd[3698]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:46.530857 systemd[1]: sshd@16-10.0.0.77:22-10.0.0.1:50896.service: Deactivated successfully.
Dec 13 01:34:46.533287 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:34:46.534613 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:34:46.535913 systemd-logind[1423]: Removed session 17.
Dec 13 01:34:51.538575 systemd[1]: Started sshd@17-10.0.0.77:22-10.0.0.1:50898.service - OpenSSH per-connection server daemon (10.0.0.1:50898).
Dec 13 01:34:51.570794 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 50898 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:34:51.571929 sshd[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:51.575639 systemd-logind[1423]: New session 18 of user core.
Dec 13 01:34:51.584902 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:34:51.689950 sshd[3736]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:51.692804 systemd[1]: sshd@17-10.0.0.77:22-10.0.0.1:50898.service: Deactivated successfully.
Dec 13 01:34:51.694485 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:34:51.695230 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:34:51.696058 systemd-logind[1423]: Removed session 18.