Dec 13 13:18:24.875195 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 13:18:24.875214 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024
Dec 13 13:18:24.875224 kernel: KASLR enabled
Dec 13 13:18:24.875230 kernel: efi: EFI v2.7 by EDK II
Dec 13 13:18:24.875235 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Dec 13 13:18:24.875240 kernel: random: crng init done
Dec 13 13:18:24.875247 kernel: secureboot: Secure boot disabled
Dec 13 13:18:24.875253 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:18:24.875259 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Dec 13 13:18:24.875265 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 13:18:24.875271 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:18:24.875277 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:18:24.875282 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:18:24.875288 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:18:24.875295 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:18:24.875302 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:18:24.875308 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:18:24.875314 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:18:24.875320 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:18:24.875326 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 13:18:24.875332 kernel: NUMA: Failed to initialise from firmware
Dec 13 13:18:24.875338 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:18:24.875344 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Dec 13 13:18:24.875350 kernel: Zone ranges:
Dec 13 13:18:24.875356 kernel:   DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:18:24.875363 kernel:   DMA32 empty
Dec 13 13:18:24.875369 kernel:   Normal empty
Dec 13 13:18:24.875375 kernel: Movable zone start for each node
Dec 13 13:18:24.875381 kernel: Early memory node ranges
Dec 13 13:18:24.875387 kernel:   node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Dec 13 13:18:24.875393 kernel:   node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Dec 13 13:18:24.875399 kernel:   node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Dec 13 13:18:24.875405 kernel:   node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Dec 13 13:18:24.875411 kernel:   node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Dec 13 13:18:24.875417 kernel:   node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 13:18:24.875423 kernel:   node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 13:18:24.875429 kernel:   node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 13:18:24.875436 kernel:   node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 13:18:24.875442 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:18:24.875448 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 13:18:24.875457 kernel: psci: probing for conduit method from ACPI.
Dec 13 13:18:24.875463 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 13:18:24.875469 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 13:18:24.875477 kernel: psci: Trusted OS migration not required
Dec 13 13:18:24.875483 kernel: psci: SMC Calling Convention v1.1
Dec 13 13:18:24.875489 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 13:18:24.875496 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 13:18:24.875502 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 13:18:24.875513 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 13:18:24.875520 kernel: Detected PIPT I-cache on CPU0
Dec 13 13:18:24.875526 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 13:18:24.875533 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 13:18:24.875539 kernel: CPU features: detected: Spectre-v4
Dec 13 13:18:24.875546 kernel: CPU features: detected: Spectre-BHB
Dec 13 13:18:24.875553 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 13:18:24.875559 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 13:18:24.875565 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 13:18:24.875572 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 13:18:24.875578 kernel: alternatives: applying boot alternatives
Dec 13 13:18:24.875585 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:18:24.875592 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:18:24.875599 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:18:24.875605 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:18:24.875612 kernel: Fallback order for Node 0: 0
Dec 13 13:18:24.875620 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 13:18:24.875626 kernel: Policy zone: DMA
Dec 13 13:18:24.875632 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:18:24.875639 kernel: software IO TLB: area num 4.
Dec 13 13:18:24.875645 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Dec 13 13:18:24.875652 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Dec 13 13:18:24.875658 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:18:24.875665 kernel: trace event string verifier disabled
Dec 13 13:18:24.875671 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:18:24.875678 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:18:24.875685 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:18:24.875691 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:18:24.875698 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:18:24.875705 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:18:24.875712 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:18:24.875718 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 13:18:24.875724 kernel: GICv3: 256 SPIs implemented
Dec 13 13:18:24.875730 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 13:18:24.875737 kernel: Root IRQ handler: gic_handle_irq
Dec 13 13:18:24.875743 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 13:18:24.875749 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 13:18:24.875756 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 13:18:24.875762 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 13:18:24.875770 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 13:18:24.875783 kernel: GICv3: using LPI property table @0x00000000400f0000
Dec 13 13:18:24.875790 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Dec 13 13:18:24.875797 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:18:24.875803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:18:24.875809 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 13:18:24.875816 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 13:18:24.875822 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 13:18:24.875829 kernel: arm-pv: using stolen time PV
Dec 13 13:18:24.875836 kernel: Console: colour dummy device 80x25
Dec 13 13:18:24.875842 kernel: ACPI: Core revision 20230628
Dec 13 13:18:24.875850 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 13:18:24.875857 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:18:24.875863 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:18:24.875870 kernel: landlock: Up and running.
Dec 13 13:18:24.875876 kernel: SELinux: Initializing.
Dec 13 13:18:24.875883 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:18:24.875889 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:18:24.875896 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:18:24.875918 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:18:24.875927 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:18:24.875933 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:18:24.875940 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 13:18:24.875946 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 13:18:24.875953 kernel: Remapping and enabling EFI services.
Dec 13 13:18:24.875959 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:18:24.875966 kernel: Detected PIPT I-cache on CPU1
Dec 13 13:18:24.875973 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 13:18:24.875979 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Dec 13 13:18:24.875987 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:18:24.875994 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 13:18:24.876019 kernel: Detected PIPT I-cache on CPU2
Dec 13 13:18:24.876027 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 13:18:24.876034 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Dec 13 13:18:24.876041 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:18:24.876048 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 13:18:24.876055 kernel: Detected PIPT I-cache on CPU3
Dec 13 13:18:24.876062 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 13:18:24.876070 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Dec 13 13:18:24.876077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:18:24.876084 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 13:18:24.876091 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:18:24.876098 kernel: SMP: Total of 4 processors activated.
Dec 13 13:18:24.876105 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 13:18:24.876112 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 13:18:24.876119 kernel: CPU features: detected: Common not Private translations
Dec 13 13:18:24.876125 kernel: CPU features: detected: CRC32 instructions
Dec 13 13:18:24.876134 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 13:18:24.876141 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 13:18:24.876147 kernel: CPU features: detected: LSE atomic instructions
Dec 13 13:18:24.876154 kernel: CPU features: detected: Privileged Access Never
Dec 13 13:18:24.876161 kernel: CPU features: detected: RAS Extension Support
Dec 13 13:18:24.876168 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 13:18:24.876175 kernel: CPU: All CPU(s) started at EL1
Dec 13 13:18:24.876182 kernel: alternatives: applying system-wide alternatives
Dec 13 13:18:24.876189 kernel: devtmpfs: initialized
Dec 13 13:18:24.876197 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:18:24.876204 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:18:24.876211 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:18:24.876218 kernel: SMBIOS 3.0.0 present.
Dec 13 13:18:24.876224 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 13 13:18:24.876231 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:18:24.876238 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 13:18:24.876245 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 13:18:24.876252 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 13:18:24.876260 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:18:24.876267 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Dec 13 13:18:24.876274 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:18:24.876281 kernel: cpuidle: using governor menu
Dec 13 13:18:24.876288 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 13:18:24.876295 kernel: ASID allocator initialised with 32768 entries
Dec 13 13:18:24.876302 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:18:24.876309 kernel: Serial: AMBA PL011 UART driver
Dec 13 13:18:24.876316 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 13:18:24.876324 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 13:18:24.876330 kernel: Modules: 508880 pages in range for PLT usage
Dec 13 13:18:24.876337 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:18:24.876344 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:18:24.876351 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 13:18:24.876358 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 13:18:24.876365 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:18:24.876372 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:18:24.876379 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 13:18:24.876387 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 13:18:24.876394 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:18:24.876401 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:18:24.876407 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:18:24.876414 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:18:24.876421 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:18:24.876428 kernel: ACPI: Interpreter enabled
Dec 13 13:18:24.876435 kernel: ACPI: Using GIC for interrupt routing
Dec 13 13:18:24.876442 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 13:18:24.876450 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 13:18:24.876457 kernel: printk: console [ttyAMA0] enabled
Dec 13 13:18:24.876464 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:18:24.876587 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:18:24.876655 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 13:18:24.876719 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 13:18:24.876789 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 13:18:24.876853 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 13:18:24.876862 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 13:18:24.876869 kernel: PCI host bridge to bus 0000:00
Dec 13 13:18:24.876996 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 13:18:24.877059 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 13:18:24.877116 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 13:18:24.877170 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:18:24.877249 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 13:18:24.877320 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:18:24.877383 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 13:18:24.877445 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 13:18:24.877506 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:18:24.877567 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:18:24.877628 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 13:18:24.877691 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 13:18:24.877745 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 13:18:24.877810 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 13:18:24.877865 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 13:18:24.877874 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 13:18:24.877881 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 13:18:24.877889 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 13:18:24.877897 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 13:18:24.877915 kernel: iommu: Default domain type: Translated
Dec 13 13:18:24.877922 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 13:18:24.877929 kernel: efivars: Registered efivars operations
Dec 13 13:18:24.877936 kernel: vgaarb: loaded
Dec 13 13:18:24.877943 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 13:18:24.877949 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:18:24.877957 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:18:24.877964 kernel: pnp: PnP ACPI init
Dec 13 13:18:24.878038 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 13:18:24.878051 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 13:18:24.878058 kernel: NET: Registered PF_INET protocol family
Dec 13 13:18:24.878065 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:18:24.878072 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:18:24.878079 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:18:24.878086 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:18:24.878093 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:18:24.878101 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:18:24.878109 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:18:24.878116 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:18:24.878123 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:18:24.878130 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:18:24.878137 kernel: kvm [1]: HYP mode not available
Dec 13 13:18:24.878144 kernel: Initialise system trusted keyrings
Dec 13 13:18:24.878151 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:18:24.878158 kernel: Key type asymmetric registered
Dec 13 13:18:24.878164 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:18:24.878172 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 13:18:24.878179 kernel: io scheduler mq-deadline registered
Dec 13 13:18:24.878186 kernel: io scheduler kyber registered
Dec 13 13:18:24.878193 kernel: io scheduler bfq registered
Dec 13 13:18:24.878200 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 13:18:24.878207 kernel: ACPI: button: Power Button [PWRB]
Dec 13 13:18:24.878214 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 13:18:24.878277 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 13:18:24.878286 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:18:24.878295 kernel: thunder_xcv, ver 1.0
Dec 13 13:18:24.878301 kernel: thunder_bgx, ver 1.0
Dec 13 13:18:24.878308 kernel: nicpf, ver 1.0
Dec 13 13:18:24.878315 kernel: nicvf, ver 1.0
Dec 13 13:18:24.878383 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 13:18:24.878441 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:18:24 UTC (1734095904)
Dec 13 13:18:24.878450 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 13:18:24.878457 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 13:18:24.878466 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 13:18:24.878472 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 13:18:24.878479 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:18:24.878486 kernel: Segment Routing with IPv6
Dec 13 13:18:24.878493 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:18:24.878500 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:18:24.878507 kernel: Key type dns_resolver registered
Dec 13 13:18:24.878514 kernel: registered taskstats version 1
Dec 13 13:18:24.878521 kernel: Loading compiled-in X.509 certificates
Dec 13 13:18:24.878529 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78'
Dec 13 13:18:24.878536 kernel: Key type .fscrypt registered
Dec 13 13:18:24.878542 kernel: Key type fscrypt-provisioning registered
Dec 13 13:18:24.878549 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:18:24.878556 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:18:24.878563 kernel: ima: No architecture policies found
Dec 13 13:18:24.878570 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 13:18:24.878577 kernel: clk: Disabling unused clocks
Dec 13 13:18:24.878584 kernel: Freeing unused kernel memory: 39936K
Dec 13 13:18:24.878592 kernel: Run /init as init process
Dec 13 13:18:24.878599 kernel: with arguments:
Dec 13 13:18:24.878605 kernel: /init
Dec 13 13:18:24.878612 kernel: with environment:
Dec 13 13:18:24.878619 kernel: HOME=/
Dec 13 13:18:24.878626 kernel: TERM=linux
Dec 13 13:18:24.878632 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:18:24.878641 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:18:24.878651 systemd[1]: Detected virtualization kvm.
Dec 13 13:18:24.878659 systemd[1]: Detected architecture arm64.
Dec 13 13:18:24.878666 systemd[1]: Running in initrd.
Dec 13 13:18:24.878673 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:18:24.878680 systemd[1]: Hostname set to .
Dec 13 13:18:24.878688 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:18:24.878695 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:18:24.878703 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:18:24.878711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:18:24.878719 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:18:24.878727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:18:24.878734 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:18:24.878742 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:18:24.878751 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:18:24.878759 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:18:24.878767 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:18:24.878783 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:18:24.878791 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:18:24.878798 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:18:24.878805 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:18:24.878813 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:18:24.878820 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:18:24.878828 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:18:24.878836 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:18:24.878844 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:18:24.878851 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:18:24.878859 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:18:24.878866 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:18:24.878873 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:18:24.878881 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:18:24.878889 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:18:24.878897 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:18:24.878913 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:18:24.878920 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:18:24.878928 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:18:24.878935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:18:24.878943 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:18:24.878950 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:18:24.878957 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:18:24.878967 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:18:24.878975 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:18:24.878999 systemd-journald[239]: Collecting audit messages is disabled.
Dec 13 13:18:24.879018 systemd-journald[239]: Journal started
Dec 13 13:18:24.879040 systemd-journald[239]: Runtime Journal (/run/log/journal/b058935bee6440d39224dc51c45b7bd7) is 5.9M, max 47.3M, 41.4M free.
Dec 13 13:18:24.884031 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:18:24.884053 kernel: Bridge firewalling registered
Dec 13 13:18:24.866453 systemd-modules-load[240]: Inserted module 'overlay'
Dec 13 13:18:24.882927 systemd-modules-load[240]: Inserted module 'br_netfilter'
Dec 13 13:18:24.887676 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:18:24.887693 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:18:24.889935 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:18:24.890874 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:18:24.892305 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:18:24.914126 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:18:24.915531 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:18:24.917048 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:18:24.925172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:18:24.927809 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:18:24.930076 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:18:24.932207 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:18:24.933758 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:18:24.946032 dracut-cmdline[277]: dracut-dracut-053
Dec 13 13:18:24.948350 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:18:24.960673 systemd-resolved[275]: Positive Trust Anchors:
Dec 13 13:18:24.960693 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:18:24.960724 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:18:24.965429 systemd-resolved[275]: Defaulting to hostname 'linux'.
Dec 13 13:18:24.966982 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:18:24.967802 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:18:25.016936 kernel: SCSI subsystem initialized
Dec 13 13:18:25.021916 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:18:25.028942 kernel: iscsi: registered transport (tcp)
Dec 13 13:18:25.041926 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:18:25.041970 kernel: QLogic iSCSI HBA Driver
Dec 13 13:18:25.083263 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:18:25.094058 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:18:25.109934 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:18:25.109981 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:18:25.110965 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:18:25.155943 kernel: raid6: neonx8 gen() 15804 MB/s
Dec 13 13:18:25.172921 kernel: raid6: neonx4 gen() 15811 MB/s
Dec 13 13:18:25.189917 kernel: raid6: neonx2 gen() 13166 MB/s
Dec 13 13:18:25.206916 kernel: raid6: neonx1 gen() 10520 MB/s
Dec 13 13:18:25.223917 kernel: raid6: int64x8 gen() 6771 MB/s
Dec 13 13:18:25.240917 kernel: raid6: int64x4 gen() 7318 MB/s
Dec 13 13:18:25.257917 kernel: raid6: int64x2 gen() 6109 MB/s
Dec 13 13:18:25.274917 kernel: raid6: int64x1 gen() 5059 MB/s
Dec 13 13:18:25.274930 kernel: raid6: using algorithm neonx4 gen() 15811 MB/s
Dec 13 13:18:25.291922 kernel: raid6: .... xor() 12457 MB/s, rmw enabled
Dec 13 13:18:25.291935 kernel: raid6: using neon recovery algorithm
Dec 13 13:18:25.296919 kernel: xor: measuring software checksum speed
Dec 13 13:18:25.296936 kernel: 8regs : 21653 MB/sec
Dec 13 13:18:25.296946 kernel: 32regs : 19816 MB/sec
Dec 13 13:18:25.298212 kernel: arm64_neon : 27889 MB/sec
Dec 13 13:18:25.298226 kernel: xor: using function: arm64_neon (27889 MB/sec)
Dec 13 13:18:25.348129 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:18:25.359147 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:18:25.371090 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:18:25.383491 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Dec 13 13:18:25.386589 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:18:25.397083 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:18:25.408160 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Dec 13 13:18:25.434980 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:18:25.444100 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:18:25.480861 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:18:25.490078 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:18:25.502264 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:18:25.504298 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:18:25.506012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:18:25.507758 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:18:25.517405 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:18:25.519948 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 13:18:25.527168 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:18:25.527272 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:18:25.527284 kernel: GPT:9289727 != 19775487
Dec 13 13:18:25.527293 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:18:25.527309 kernel: GPT:9289727 != 19775487
Dec 13 13:18:25.527318 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:18:25.527328 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:18:25.530429 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:18:25.535670 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:18:25.535794 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:18:25.540778 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:18:25.541941 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:18:25.542074 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:18:25.543972 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:18:25.552929 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513)
Dec 13 13:18:25.553187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:18:25.556944 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (523)
Dec 13 13:18:25.561788 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:18:25.565897 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:18:25.566973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:18:25.577371 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:18:25.580937 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:18:25.581864 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:18:25.599077 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:18:25.601030 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:18:25.604522 disk-uuid[551]: Primary Header is updated.
Dec 13 13:18:25.604522 disk-uuid[551]: Secondary Entries is updated.
Dec 13 13:18:25.604522 disk-uuid[551]: Secondary Header is updated.
Dec 13 13:18:25.607925 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:18:25.625702 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:18:26.617937 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:18:26.618254 disk-uuid[552]: The operation has completed successfully.
Dec 13 13:18:26.644531 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:18:26.644623 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:18:26.662078 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:18:26.664611 sh[574]: Success
Dec 13 13:18:26.677919 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 13:18:26.716293 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:18:26.717824 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:18:26.719953 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:18:26.729915 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614
Dec 13 13:18:26.729950 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:18:26.729964 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:18:26.729974 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:18:26.730145 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:18:26.733576 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:18:26.734663 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:18:26.735391 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:18:26.737296 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:18:26.747941 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:18:26.747977 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:18:26.748924 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:18:26.750937 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:18:26.757589 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:18:26.758926 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:18:26.763937 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:18:26.769041 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:18:26.825612 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:18:26.834045 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:18:26.857983 ignition[671]: Ignition 2.20.0
Dec 13 13:18:26.857994 ignition[671]: Stage: fetch-offline
Dec 13 13:18:26.859884 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:18:26.859898 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:18:26.860058 ignition[671]: parsed url from cmdline: ""
Dec 13 13:18:26.860061 ignition[671]: no config URL provided
Dec 13 13:18:26.860066 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:18:26.860073 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:18:26.862983 systemd-networkd[763]: lo: Link UP
Dec 13 13:18:26.860099 ignition[671]: op(1): [started] loading QEMU firmware config module
Dec 13 13:18:26.862986 systemd-networkd[763]: lo: Gained carrier
Dec 13 13:18:26.860104 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 13:18:26.863808 systemd-networkd[763]: Enumeration completed
Dec 13 13:18:26.864415 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:18:26.865857 systemd[1]: Reached target network.target - Network.
Dec 13 13:18:26.867459 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:18:26.867462 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:18:26.872243 ignition[671]: op(1): [finished] loading QEMU firmware config module
Dec 13 13:18:26.868380 systemd-networkd[763]: eth0: Link UP
Dec 13 13:18:26.868383 systemd-networkd[763]: eth0: Gained carrier
Dec 13 13:18:26.868389 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:18:26.892954 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:18:26.897669 ignition[671]: parsing config with SHA512: 237864647a5c26ca316b0fbc9e5299ec13bcd4862d2bf251887e02d0d0eacc9e86f89b59936799a0fec02e32b81aac8401d33fd3c9ea2037e962a8a953aa4b2d
Dec 13 13:18:26.902259 unknown[671]: fetched base config from "system"
Dec 13 13:18:26.902268 unknown[671]: fetched user config from "qemu"
Dec 13 13:18:26.902718 ignition[671]: fetch-offline: fetch-offline passed
Dec 13 13:18:26.903027 ignition[671]: Ignition finished successfully
Dec 13 13:18:26.904290 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:18:26.905793 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:18:26.919035 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:18:26.929188 ignition[769]: Ignition 2.20.0
Dec 13 13:18:26.929197 ignition[769]: Stage: kargs
Dec 13 13:18:26.929349 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:18:26.929358 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:18:26.930189 ignition[769]: kargs: kargs passed
Dec 13 13:18:26.932337 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:18:26.930231 ignition[769]: Ignition finished successfully
Dec 13 13:18:26.945038 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:18:26.954063 ignition[779]: Ignition 2.20.0
Dec 13 13:18:26.954072 ignition[779]: Stage: disks
Dec 13 13:18:26.954213 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:18:26.954222 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:18:26.955049 ignition[779]: disks: disks passed
Dec 13 13:18:26.956312 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:18:26.955090 ignition[779]: Ignition finished successfully
Dec 13 13:18:26.957304 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:18:26.958263 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:18:26.959708 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:18:26.960830 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:18:26.962410 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:18:26.964603 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:18:26.978205 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 13:18:26.981824 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:18:26.991019 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:18:27.036614 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:18:27.037937 kernel: EXT4-fs (vda9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none.
Dec 13 13:18:27.037830 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:18:27.047977 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:18:27.049838 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:18:27.050848 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:18:27.050882 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:18:27.050915 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:18:27.055891 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:18:27.057446 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:18:27.060926 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
Dec 13 13:18:27.063201 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:18:27.063229 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:18:27.063239 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:18:27.064915 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:18:27.066384 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:18:27.097553 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:18:27.101499 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:18:27.105185 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:18:27.107998 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:18:27.176968 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:18:27.185019 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:18:27.186320 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:18:27.190928 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:18:27.204049 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:18:27.207163 ignition[912]: INFO : Ignition 2.20.0
Dec 13 13:18:27.207163 ignition[912]: INFO : Stage: mount
Dec 13 13:18:27.208342 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:18:27.208342 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:18:27.208342 ignition[912]: INFO : mount: mount passed
Dec 13 13:18:27.208342 ignition[912]: INFO : Ignition finished successfully
Dec 13 13:18:27.209333 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:18:27.219008 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:18:27.728157 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:18:27.737076 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:18:27.743434 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926)
Dec 13 13:18:27.743470 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:18:27.743481 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:18:27.744171 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:18:27.746930 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:18:27.747568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:18:27.763152 ignition[943]: INFO : Ignition 2.20.0
Dec 13 13:18:27.763152 ignition[943]: INFO : Stage: files
Dec 13 13:18:27.764664 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:18:27.764664 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:18:27.764664 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:18:27.768018 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:18:27.768018 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:18:27.768018 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:18:27.768018 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:18:27.768018 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:18:27.767474 unknown[943]: wrote ssh authorized keys file for user: core
Dec 13 13:18:27.774868 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:18:27.774868 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 13:18:27.815671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 13:18:27.993263 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:18:27.993263 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 13:18:27.996820 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Dec 13 13:18:28.123036 systemd-networkd[763]: eth0: Gained IPv6LL
Dec 13 13:18:28.235427 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 13:18:28.435560 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 13:18:28.435560 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 13:18:28.438741 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:18:28.438741 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:18:28.438741 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 13:18:28.438741 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 13:18:28.438741 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:18:28.438741 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:18:28.438741 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 13:18:28.438741 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:18:28.460238 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:18:28.463448 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:18:28.464773 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:18:28.464773 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:18:28.464773 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:18:28.464773 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:18:28.464773 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:18:28.464773 ignition[943]: INFO : files: files passed
Dec 13 13:18:28.464773 ignition[943]: INFO : Ignition finished successfully
Dec 13 13:18:28.466971 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:18:28.478043 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:18:28.480187 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:18:28.482227 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:18:28.482315 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:18:28.486266 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 13:18:28.489533 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:18:28.489533 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:18:28.492325 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:18:28.494316 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:18:28.495589 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:18:28.514097 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:18:28.531064 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:18:28.531174 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:18:28.534211 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:18:28.535694 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:18:28.537152 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:18:28.545040 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:18:28.555680 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:18:28.557750 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:18:28.568142 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:18:28.569164 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:18:28.570683 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:18:28.571970 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:18:28.572081 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:18:28.573948 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:18:28.575466 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:18:28.576619 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:18:28.577923 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:18:28.579320 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:18:28.580705 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:18:28.582240 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:18:28.583804 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:18:28.585412 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:18:28.586840 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:18:28.587972 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:18:28.588084 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:18:28.589792 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:18:28.591196 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:18:28.592592 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:18:28.596001 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:18:28.597803 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:18:28.597928 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:18:28.599965 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:18:28.600087 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:18:28.601692 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:18:28.602967 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:18:28.609990 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:18:28.611007 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:18:28.612564 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:18:28.613664 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:18:28.613751 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:18:28.614839 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:18:28.614925 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:18:28.616111 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:18:28.616212 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:18:28.617464 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:18:28.617558 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:18:28.631127 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:18:28.631775 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:18:28.631888 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:18:28.634611 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:18:28.635312 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:18:28.635425 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:18:28.636775 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:18:28.636868 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:18:28.642723 ignition[998]: INFO : Ignition 2.20.0
Dec 13 13:18:28.642723 ignition[998]: INFO : Stage: umount
Dec 13 13:18:28.644163 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:18:28.644163 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:18:28.644163 ignition[998]: INFO : umount: umount passed
Dec 13 13:18:28.644163 ignition[998]: INFO : Ignition finished successfully
Dec 13 13:18:28.643561 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:18:28.643697 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:18:28.645824 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:18:28.646273 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:18:28.646354 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:18:28.648825 systemd[1]: Stopped target network.target - Network.
Dec 13 13:18:28.649743 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:18:28.649804 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:18:28.651707 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:18:28.651749 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:18:28.652954 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:18:28.652991 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:18:28.654580 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:18:28.654617 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:18:28.655530 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:18:28.656841 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:18:28.664974 systemd-networkd[763]: eth0: DHCPv6 lease lost
Dec 13 13:18:28.666441 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:18:28.666555 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:18:28.668198 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:18:28.668250 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:18:28.682031 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:18:28.682669 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:18:28.682721 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:18:28.684295 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:18:28.686756 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:18:28.686861 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:18:28.691077 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:18:28.691125 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:18:28.692215 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:18:28.692264 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:18:28.693828 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:18:28.693876 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:18:28.695482 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:18:28.695580 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:18:28.699563 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:18:28.699694 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:18:28.702157 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:18:28.702218 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:18:28.703352 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:18:28.703383 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:18:28.704856 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:18:28.704958 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:18:28.707130 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:18:28.707172 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:18:28.709243 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:18:28.709291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:18:28.726097 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:18:28.727119 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:18:28.727183 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:18:28.728997 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:18:28.729047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:18:28.730976 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:18:28.731063 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:18:28.733667 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:18:28.733774 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:18:28.734849 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:18:28.737721 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:18:28.737783 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:18:28.740099 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:18:28.749172 systemd[1]: Switching root.
Dec 13 13:18:28.778292 systemd-journald[239]: Journal stopped
Dec 13 13:18:29.414226 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:18:29.414280 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:18:29.414295 kernel: SELinux: policy capability open_perms=1
Dec 13 13:18:29.414304 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:18:29.414313 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:18:29.414322 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:18:29.414331 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:18:29.414343 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:18:29.414352 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:18:29.414361 kernel: audit: type=1403 audit(1734095908.906:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:18:29.414371 systemd[1]: Successfully loaded SELinux policy in 28.704ms.
Dec 13 13:18:29.414389 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.584ms.
Dec 13 13:18:29.414403 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:18:29.414413 systemd[1]: Detected virtualization kvm.
Dec 13 13:18:29.414423 systemd[1]: Detected architecture arm64.
Dec 13 13:18:29.414433 systemd[1]: Detected first boot.
Dec 13 13:18:29.414442 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:18:29.414452 zram_generator::config[1042]: No configuration found.
Dec 13 13:18:29.414462 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:18:29.414473 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:18:29.414483 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:18:29.414493 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:18:29.414504 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:18:29.414515 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:18:29.414524 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:18:29.414534 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:18:29.414544 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:18:29.414555 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:18:29.414566 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:18:29.414576 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:18:29.414586 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:18:29.414596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:18:29.414606 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:18:29.414616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:18:29.414626 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:18:29.414637 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:18:29.414647 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 13:18:29.414658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:18:29.414668 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:18:29.414678 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:18:29.414688 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:18:29.414697 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:18:29.414707 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:18:29.414718 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:18:29.414729 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:18:29.414739 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:18:29.414749 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:18:29.414765 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:18:29.414780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:18:29.414790 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:18:29.414800 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:18:29.414815 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:18:29.414825 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:18:29.414836 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:18:29.414849 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:18:29.414860 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:18:29.414869 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:18:29.414881 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:18:29.414892 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:18:29.414918 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:18:29.414929 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:18:29.414939 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:18:29.414951 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:18:29.414961 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:18:29.414972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:18:29.414982 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:18:29.414992 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:18:29.415001 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:18:29.415011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:18:29.415021 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:18:29.415032 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:18:29.415048 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:18:29.415058 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:18:29.415068 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:18:29.415077 kernel: fuse: init (API version 7.39)
Dec 13 13:18:29.415086 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:18:29.415097 kernel: loop: module loaded
Dec 13 13:18:29.415106 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:18:29.415116 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:18:29.415127 kernel: ACPI: bus type drm_connector registered
Dec 13 13:18:29.415137 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:18:29.415147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:18:29.415156 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:18:29.415166 systemd[1]: Stopped verity-setup.service.
Dec 13 13:18:29.415176 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:18:29.415205 systemd-journald[1110]: Collecting audit messages is disabled.
Dec 13 13:18:29.415226 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:18:29.415238 systemd-journald[1110]: Journal started
Dec 13 13:18:29.415262 systemd-journald[1110]: Runtime Journal (/run/log/journal/b058935bee6440d39224dc51c45b7bd7) is 5.9M, max 47.3M, 41.4M free.
Dec 13 13:18:29.252252 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:18:29.266801 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 13:18:29.267155 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:18:29.416994 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:18:29.417382 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:18:29.418199 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:18:29.419066 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:18:29.420012 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:18:29.420947 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:18:29.422380 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:18:29.423827 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:18:29.423969 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:18:29.426244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:18:29.426380 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:18:29.427414 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:18:29.427553 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:18:29.428568 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:18:29.428710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:18:29.429841 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:18:29.429987 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:18:29.431270 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:18:29.431400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:18:29.432427 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:18:29.433564 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:18:29.434934 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:18:29.446808 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:18:29.455002 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:18:29.456657 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:18:29.457487 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:18:29.457522 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:18:29.459197 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:18:29.460962 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:18:29.462631 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:18:29.463542 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:18:29.464849 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:18:29.466444 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:18:29.467364 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:18:29.471049 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:18:29.471949 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:18:29.473974 systemd-journald[1110]: Time spent on flushing to /var/log/journal/b058935bee6440d39224dc51c45b7bd7 is 10.987ms for 855 entries.
Dec 13 13:18:29.473974 systemd-journald[1110]: System Journal (/var/log/journal/b058935bee6440d39224dc51c45b7bd7) is 8.0M, max 195.6M, 187.6M free.
Dec 13 13:18:29.490804 systemd-journald[1110]: Received client request to flush runtime journal.
Dec 13 13:18:29.475125 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:18:29.477807 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:18:29.487003 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:18:29.492281 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:18:29.493502 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:18:29.494521 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:18:29.495654 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:18:29.496988 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:18:29.498195 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:18:29.502439 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:18:29.511912 kernel: loop0: detected capacity change from 0 to 189592
Dec 13 13:18:29.515335 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:18:29.519919 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:18:29.523115 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:18:29.527315 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:18:29.529094 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:18:29.537135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:18:29.538797 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:18:29.539950 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 13:18:29.541296 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:18:29.560155 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Dec 13 13:18:29.560461 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Dec 13 13:18:29.565923 kernel: loop1: detected capacity change from 0 to 113552
Dec 13 13:18:29.566874 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:18:29.613061 kernel: loop2: detected capacity change from 0 to 116784
Dec 13 13:18:29.642932 kernel: loop3: detected capacity change from 0 to 189592
Dec 13 13:18:29.648923 kernel: loop4: detected capacity change from 0 to 113552
Dec 13 13:18:29.653930 kernel: loop5: detected capacity change from 0 to 116784
Dec 13 13:18:29.657225 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 13:18:29.657579 (sd-merge)[1180]: Merged extensions into '/usr'.
Dec 13 13:18:29.661478 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:18:29.661650 systemd[1]: Reloading...
Dec 13 13:18:29.716938 zram_generator::config[1210]: No configuration found.
Dec 13 13:18:29.750705 ldconfig[1148]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:18:29.807773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:18:29.847423 systemd[1]: Reloading finished in 185 ms.
Dec 13 13:18:29.875221 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:18:29.876573 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:18:29.887065 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:18:29.888791 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:18:29.900095 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:18:29.900111 systemd[1]: Reloading...
Dec 13 13:18:29.910114 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:18:29.910337 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:18:29.910997 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:18:29.911217 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Dec 13 13:18:29.911269 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Dec 13 13:18:29.913742 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:18:29.913756 systemd-tmpfiles[1241]: Skipping /boot
Dec 13 13:18:29.921718 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:18:29.921737 systemd-tmpfiles[1241]: Skipping /boot
Dec 13 13:18:29.950991 zram_generator::config[1271]: No configuration found.
Dec 13 13:18:30.030573 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:18:30.069429 systemd[1]: Reloading finished in 169 ms.
Dec 13 13:18:30.083615 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:18:30.092437 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:18:30.098023 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:18:30.100047 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:18:30.102080 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:18:30.107148 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:18:30.112975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:18:30.115629 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:18:30.126359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:18:30.127704 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:18:30.130993 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:18:30.134703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:18:30.136481 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:18:30.141540 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:18:30.143513 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:18:30.145221 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:18:30.146202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:18:30.148082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:18:30.148195 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:18:30.150242 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:18:30.150364 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:18:30.154263 systemd-udevd[1312]: Using default interface naming scheme 'v255'.
Dec 13 13:18:30.161606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:18:30.168171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:18:30.172543 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:18:30.177591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:18:30.181159 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:18:30.182253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:18:30.183510 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:18:30.186626 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:18:30.188205 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:18:30.195095 augenrules[1353]: No rules
Dec 13 13:18:30.193661 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:18:30.194983 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:18:30.195097 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:18:30.196275 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:18:30.196400 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:18:30.199693 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:18:30.203206 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:18:30.203367 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:18:30.208497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:18:30.208630 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:18:30.209833 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:18:30.212598 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:18:30.213106 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:18:30.219919 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:18:30.223879 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 13 13:18:30.223999 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1339)
Dec 13 13:18:30.224026 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1352)
Dec 13 13:18:30.231923 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1339)
Dec 13 13:18:30.238352 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:18:30.239273 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:18:30.239339 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:18:30.241836 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 13:18:30.242971 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:18:30.285971 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:18:30.300085 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:18:30.306772 systemd-resolved[1307]: Positive Trust Anchors:
Dec 13 13:18:30.306788 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:18:30.306820 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:18:30.319447 systemd-networkd[1380]: lo: Link UP
Dec 13 13:18:30.319454 systemd-networkd[1380]: lo: Gained carrier
Dec 13 13:18:30.320362 systemd-networkd[1380]: Enumeration completed
Dec 13 13:18:30.322647 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:18:30.322696 systemd-resolved[1307]: Defaulting to hostname 'linux'.
Dec 13 13:18:30.328232 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:18:30.328240 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:18:30.331164 systemd-networkd[1380]: eth0: Link UP
Dec 13 13:18:30.331171 systemd-networkd[1380]: eth0: Gained carrier
Dec 13 13:18:30.331184 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:18:30.336057 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 13:18:30.337194 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:18:30.338092 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 13:18:30.340941 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:18:30.342406 systemd[1]: Reached target network.target - Network.
Dec 13 13:18:30.343997 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:18:30.345465 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:18:30.350980 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:18:30.351505 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection.
Dec 13 13:18:30.352212 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 13:18:30.352254 systemd-timesyncd[1383]: Initial clock synchronization to Fri 2024-12-13 13:18:30.743873 UTC.
Dec 13 13:18:30.358176 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:18:30.371968 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:18:30.384055 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:18:30.394282 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:18:30.398416 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:18:30.424990 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:18:30.426385 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:18:30.428063 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:18:30.428892 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 13:18:30.429772 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 13:18:30.431167 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 13:18:30.432008 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 13:18:30.432870 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 13:18:30.433785 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 13:18:30.433814 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:18:30.434458 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:18:30.435989 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 13:18:30.437937 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 13:18:30.445674 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 13:18:30.447577 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:18:30.448846 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 13:18:30.449732 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:18:30.450460 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:18:30.451153 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:18:30.451179 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:18:30.452004 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 13:18:30.453600 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 13:18:30.456015 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:18:30.456561 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 13:18:30.462024 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 13:18:30.466037 jq[1410]: false
Dec 13 13:18:30.463338 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 13:18:30.465117 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 13:18:30.467008 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 13:18:30.472336 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:18:30.477074 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 13:18:30.479977 dbus-daemon[1409]: [system] SELinux support is enabled
Dec 13 13:18:30.481146 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 13:18:30.484855 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 13:18:30.485255 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 13:18:30.485399 extend-filesystems[1411]: Found loop3
Dec 13 13:18:30.488298 extend-filesystems[1411]: Found loop4
Dec 13 13:18:30.488298 extend-filesystems[1411]: Found loop5
Dec 13 13:18:30.488298 extend-filesystems[1411]: Found vda
Dec 13 13:18:30.488298 extend-filesystems[1411]: Found vda1
Dec 13 13:18:30.488298 extend-filesystems[1411]: Found vda2
Dec 13 13:18:30.488298 extend-filesystems[1411]: Found vda3
Dec 13 13:18:30.488298 extend-filesystems[1411]: Found usr
Dec 13 13:18:30.488298 extend-filesystems[1411]: Found vda4
Dec 13 13:18:30.488298 extend-filesystems[1411]: Found vda6
Dec 13 13:18:30.488298 extend-filesystems[1411]: Found vda7
Dec 13 13:18:30.488298 extend-filesystems[1411]: Found vda9
Dec 13 13:18:30.488298 extend-filesystems[1411]: Checking size of /dev/vda9
Dec 13 13:18:30.488076 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 13:18:30.492743 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 13:18:30.505727 extend-filesystems[1411]: Resized partition /dev/vda9
Dec 13 13:18:30.504868 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 13:18:30.508827 extend-filesystems[1432]: resize2fs 1.47.1 (20-May-2024)
Dec 13 13:18:30.514845 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 13:18:30.510651 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:18:30.515039 jq[1429]: true
Dec 13 13:18:30.521855 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1339)
Dec 13 13:18:30.521385 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 13:18:30.523943 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 13:18:30.524218 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 13:18:30.524360 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 13:18:30.526855 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 13:18:30.527342 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 13:18:30.543935 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 13:18:30.564003 extend-filesystems[1432]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 13:18:30.564003 extend-filesystems[1432]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 13:18:30.564003 extend-filesystems[1432]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 13:18:30.544155 (ntainerd)[1436]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 13:18:30.574291 update_engine[1423]: I20241213 13:18:30.558192 1423 main.cc:92] Flatcar Update Engine starting
Dec 13 13:18:30.574291 update_engine[1423]: I20241213 13:18:30.562038 1423 update_check_scheduler.cc:74] Next update check in 4m5s
Dec 13 13:18:30.574512 extend-filesystems[1411]: Resized filesystem in /dev/vda9
Dec 13 13:18:30.549241 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 13:18:30.581059 jq[1435]: true
Dec 13 13:18:30.581224 tar[1434]: linux-arm64/helm
Dec 13 13:18:30.549282 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 13:18:30.550378 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 13:18:30.550398 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 13:18:30.561992 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 13:18:30.563203 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 13:18:30.564813 systemd-logind[1419]: New seat seat0.
Dec 13 13:18:30.570058 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 13:18:30.573117 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 13:18:30.574941 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 13:18:30.575131 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 13:18:30.625468 locksmithd[1449]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 13:18:30.628852 bash[1465]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:18:30.629704 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:18:30.633377 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 13:18:30.747889 containerd[1436]: time="2024-12-13T13:18:30.747805000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 13:18:30.772803 containerd[1436]: time="2024-12-13T13:18:30.772611120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:18:30.774933 containerd[1436]: time="2024-12-13T13:18:30.774549040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:18:30.774933 containerd[1436]: time="2024-12-13T13:18:30.774585600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 13:18:30.774933 containerd[1436]: time="2024-12-13T13:18:30.774602560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 13:18:30.774933 containerd[1436]: time="2024-12-13T13:18:30.774808920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 13:18:30.774933 containerd[1436]: time="2024-12-13T13:18:30.774876440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 13:18:30.775068 containerd[1436]: time="2024-12-13T13:18:30.774955440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:18:30.775068 containerd[1436]: time="2024-12-13T13:18:30.774969920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:18:30.775263 containerd[1436]: time="2024-12-13T13:18:30.775227680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:18:30.775263 containerd[1436]: time="2024-12-13T13:18:30.775254040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 13:18:30.775300 containerd[1436]: time="2024-12-13T13:18:30.775268240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:18:30.775300 containerd[1436]: time="2024-12-13T13:18:30.775277520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 13:18:30.775379 containerd[1436]: time="2024-12-13T13:18:30.775357000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:18:30.775567 containerd[1436]: time="2024-12-13T13:18:30.775545400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:18:30.775666 containerd[1436]: time="2024-12-13T13:18:30.775645480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:18:30.775666 containerd[1436]: time="2024-12-13T13:18:30.775663160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 13:18:30.775752 containerd[1436]: time="2024-12-13T13:18:30.775738720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 13:18:30.775810 containerd[1436]: time="2024-12-13T13:18:30.775798160Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 13:18:30.779195 containerd[1436]: time="2024-12-13T13:18:30.779167080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 13:18:30.779243 containerd[1436]: time="2024-12-13T13:18:30.779214640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 13:18:30.779243 containerd[1436]: time="2024-12-13T13:18:30.779228960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 13:18:30.779276 containerd[1436]: time="2024-12-13T13:18:30.779242080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 13:18:30.779276 containerd[1436]: time="2024-12-13T13:18:30.779255240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 13:18:30.779404 containerd[1436]: time="2024-12-13T13:18:30.779385640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 13:18:30.779617 containerd[1436]: time="2024-12-13T13:18:30.779601680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 13:18:30.779710 containerd[1436]: time="2024-12-13T13:18:30.779695960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 13:18:30.779729 containerd[1436]: time="2024-12-13T13:18:30.779716040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 13:18:30.779746 containerd[1436]: time="2024-12-13T13:18:30.779731240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 13:18:30.779770 containerd[1436]: time="2024-12-13T13:18:30.779745120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 13:18:30.779789 containerd[1436]: time="2024-12-13T13:18:30.779767200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 13:18:30.779789 containerd[1436]: time="2024-12-13T13:18:30.779781920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 13:18:30.779825 containerd[1436]: time="2024-12-13T13:18:30.779799640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 13:18:30.779825 containerd[1436]: time="2024-12-13T13:18:30.779814040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 13:18:30.779870 containerd[1436]: time="2024-12-13T13:18:30.779825240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 13:18:30.779870 containerd[1436]: time="2024-12-13T13:18:30.779836560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 13:18:30.779870 containerd[1436]: time="2024-12-13T13:18:30.779847720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 13:18:30.779870 containerd[1436]: time="2024-12-13T13:18:30.779865480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.779951 containerd[1436]: time="2024-12-13T13:18:30.779878960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.779951 containerd[1436]: time="2024-12-13T13:18:30.779890840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.779951 containerd[1436]: time="2024-12-13T13:18:30.779919520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.779951 containerd[1436]: time="2024-12-13T13:18:30.779932400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.779951 containerd[1436]: time="2024-12-13T13:18:30.779944000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780035 containerd[1436]: time="2024-12-13T13:18:30.779954600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780035 containerd[1436]: time="2024-12-13T13:18:30.779966840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780035 containerd[1436]: time="2024-12-13T13:18:30.779979240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780035 containerd[1436]: time="2024-12-13T13:18:30.779992840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780035 containerd[1436]: time="2024-12-13T13:18:30.780004000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780035 containerd[1436]: time="2024-12-13T13:18:30.780016040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780035 containerd[1436]: time="2024-12-13T13:18:30.780027440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780144 containerd[1436]: time="2024-12-13T13:18:30.780041600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 13:18:30.780144 containerd[1436]: time="2024-12-13T13:18:30.780060920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780144 containerd[1436]: time="2024-12-13T13:18:30.780073000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780144 containerd[1436]: time="2024-12-13T13:18:30.780083920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 13:18:30.780260 containerd[1436]: time="2024-12-13T13:18:30.780244640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 13:18:30.780280 containerd[1436]: time="2024-12-13T13:18:30.780267760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 13:18:30.780300 containerd[1436]: time="2024-12-13T13:18:30.780277600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 13:18:30.780300 containerd[1436]: time="2024-12-13T13:18:30.780288960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 13:18:30.780300 containerd[1436]: time="2024-12-13T13:18:30.780297840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780347 containerd[1436]: time="2024-12-13T13:18:30.780309280Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 13:18:30.780347 containerd[1436]: time="2024-12-13T13:18:30.780318440Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 13:18:30.780347 containerd[1436]: time="2024-12-13T13:18:30.780330000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 13:18:30.780694 containerd[1436]: time="2024-12-13T13:18:30.780655520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 13:18:30.780798 containerd[1436]: time="2024-12-13T13:18:30.780702400Z" level=info msg="Connect containerd service"
Dec 13 13:18:30.780798 containerd[1436]: time="2024-12-13T13:18:30.780733560Z" level=info msg="using legacy CRI server"
Dec 13 13:18:30.780798 containerd[1436]: time="2024-12-13T13:18:30.780739880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 13:18:30.780992 containerd[1436]: time="2024-12-13T13:18:30.780977200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 13:18:30.781570 containerd[1436]: time="2024-12-13T13:18:30.781545800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:18:30.782208 containerd[1436]: time="2024-12-13T13:18:30.782175720Z" level=info msg="Start subscribing containerd event"
Dec 13 13:18:30.782362 containerd[1436]: time="2024-12-13T13:18:30.782312480Z" level=info msg="Start recovering state"
Dec 13 13:18:30.782420 containerd[1436]: time="2024-12-13T13:18:30.782392560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 13:18:30.782535 containerd[1436]: time="2024-12-13T13:18:30.782490080Z" level=info msg="Start event monitor"
Dec 13 13:18:30.782535 containerd[1436]: time="2024-12-13T13:18:30.782509040Z" level=info msg="Start snapshots syncer"
Dec 13 13:18:30.782535 containerd[1436]: time="2024-12-13T13:18:30.782515040Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 13:18:30.782612 containerd[1436]: time="2024-12-13T13:18:30.782518760Z" level=info msg="Start cni network conf syncer for default"
Dec 13 13:18:30.782612 containerd[1436]: time="2024-12-13T13:18:30.782559400Z" level=info msg="Start streaming server"
Dec 13 13:18:30.785984 containerd[1436]: time="2024-12-13T13:18:30.782912920Z" level=info msg="containerd successfully booted in 0.037139s"
Dec 13 13:18:30.784940 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 13:18:30.924103 tar[1434]: linux-arm64/LICENSE
Dec 13 13:18:30.924103 tar[1434]: linux-arm64/README.md
Dec 13 13:18:30.942936 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 13:18:31.713271 systemd-networkd[1380]: eth0: Gained IPv6LL
Dec 13 13:18:31.717048 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 13:18:31.718872 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 13:18:31.727547 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 13:18:31.729885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:18:31.731896 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 13:18:31.752985 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 13:18:31.754188 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 13:18:31.754342 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 13:18:31.756336 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 13:18:32.113705 sshd_keygen[1428]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 13:18:32.133043 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 13:18:32.146275 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 13:18:32.151939 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 13:18:32.152838 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 13:18:32.159014 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 13:18:32.174713 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 13:18:32.187293 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 13:18:32.189734 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 13:18:32.190977 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 13:18:32.247841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:18:32.249414 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 13:18:32.251913 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:18:32.254025 systemd[1]: Startup finished in 540ms (kernel) + 4.213s (initrd) + 3.376s (userspace) = 8.130s.
Dec 13 13:18:32.263735 agetty[1514]: failed to open credentials directory
Dec 13 13:18:32.263812 agetty[1515]: failed to open credentials directory
Dec 13 13:18:32.684465 kubelet[1521]: E1213 13:18:32.684408 1521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:18:32.686702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:18:32.686848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:18:37.495637 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 13:18:37.496734 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:52676.service - OpenSSH per-connection server daemon (10.0.0.1:52676).
Dec 13 13:18:37.557699 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 52676 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:18:37.559498 sshd-session[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:18:37.567845 systemd-logind[1419]: New session 1 of user core.
Dec 13 13:18:37.568863 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 13:18:37.582181 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 13:18:37.591383 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 13:18:37.593638 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 13:18:37.602169 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 13:18:37.688416 systemd[1538]: Queued start job for default target default.target.
Dec 13 13:18:37.699809 systemd[1538]: Created slice app.slice - User Application Slice.
Dec 13 13:18:37.699853 systemd[1538]: Reached target paths.target - Paths.
Dec 13 13:18:37.699865 systemd[1538]: Reached target timers.target - Timers.
Dec 13 13:18:37.701070 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 13:18:37.710580 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 13:18:37.710639 systemd[1538]: Reached target sockets.target - Sockets.
Dec 13 13:18:37.710650 systemd[1538]: Reached target basic.target - Basic System.
Dec 13 13:18:37.710684 systemd[1538]: Reached target default.target - Main User Target.
Dec 13 13:18:37.710713 systemd[1538]: Startup finished in 103ms.
Dec 13 13:18:37.710971 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 13:18:37.712224 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 13:18:37.778677 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:52686.service - OpenSSH per-connection server daemon (10.0.0.1:52686).
Dec 13 13:18:37.817972 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 52686 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:18:37.819257 sshd-session[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:18:37.824331 systemd-logind[1419]: New session 2 of user core.
Dec 13 13:18:37.831081 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 13:18:37.883701 sshd[1551]: Connection closed by 10.0.0.1 port 52686
Dec 13 13:18:37.884295 sshd-session[1549]: pam_unix(sshd:session): session closed for user core
Dec 13 13:18:37.899343 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:52686.service: Deactivated successfully.
Dec 13 13:18:37.900837 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 13:18:37.902266 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit.
Dec 13 13:18:37.904210 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:52696.service - OpenSSH per-connection server daemon (10.0.0.1:52696).
Dec 13 13:18:37.904889 systemd-logind[1419]: Removed session 2.
Dec 13 13:18:37.945256 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 52696 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:18:37.946439 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:18:37.949967 systemd-logind[1419]: New session 3 of user core.
Dec 13 13:18:37.957072 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 13:18:38.005593 sshd[1558]: Connection closed by 10.0.0.1 port 52696
Dec 13 13:18:38.006127 sshd-session[1556]: pam_unix(sshd:session): session closed for user core
Dec 13 13:18:38.019473 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:52696.service: Deactivated successfully.
Dec 13 13:18:38.020850 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 13:18:38.022076 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit.
Dec 13 13:18:38.036208 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:52706.service - OpenSSH per-connection server daemon (10.0.0.1:52706).
Dec 13 13:18:38.037052 systemd-logind[1419]: Removed session 3.
Dec 13 13:18:38.073401 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 52706 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:18:38.074834 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:18:38.078289 systemd-logind[1419]: New session 4 of user core.
Dec 13 13:18:38.086055 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 13:18:38.137082 sshd[1565]: Connection closed by 10.0.0.1 port 52706
Dec 13 13:18:38.137379 sshd-session[1563]: pam_unix(sshd:session): session closed for user core
Dec 13 13:18:38.152078 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:52706.service: Deactivated successfully.
Dec 13 13:18:38.153386 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 13:18:38.154605 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit.
Dec 13 13:18:38.155687 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:52722.service - OpenSSH per-connection server daemon (10.0.0.1:52722).
Dec 13 13:18:38.156496 systemd-logind[1419]: Removed session 4.
Dec 13 13:18:38.194142 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 52722 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:18:38.195203 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:18:38.198823 systemd-logind[1419]: New session 5 of user core.
Dec 13 13:18:38.205046 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 13:18:38.273768 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 13:18:38.274357 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:18:38.610136 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 13:18:38.610267 (dockerd)[1593]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 13:18:38.861039 dockerd[1593]: time="2024-12-13T13:18:38.860832497Z" level=info msg="Starting up"
Dec 13 13:18:39.031948 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3845894413-merged.mount: Deactivated successfully.
Dec 13 13:18:39.046065 dockerd[1593]: time="2024-12-13T13:18:39.046030440Z" level=info msg="Loading containers: start."
Dec 13 13:18:39.188932 kernel: Initializing XFRM netlink socket
Dec 13 13:18:39.246405 systemd-networkd[1380]: docker0: Link UP
Dec 13 13:18:39.276064 dockerd[1593]: time="2024-12-13T13:18:39.276034275Z" level=info msg="Loading containers: done."
Dec 13 13:18:39.295784 dockerd[1593]: time="2024-12-13T13:18:39.295741815Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:18:39.295906 dockerd[1593]: time="2024-12-13T13:18:39.295825480Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:18:39.296043 dockerd[1593]: time="2024-12-13T13:18:39.296013104Z" level=info msg="Daemon has completed initialization" Dec 13 13:18:39.321247 dockerd[1593]: time="2024-12-13T13:18:39.321201782Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:18:39.321360 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:18:39.935445 containerd[1436]: time="2024-12-13T13:18:39.935393700Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 13:18:40.589419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133359794.mount: Deactivated successfully. 
Dec 13 13:18:41.664722 containerd[1436]: time="2024-12-13T13:18:41.664675118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:41.665144 containerd[1436]: time="2024-12-13T13:18:41.665098178Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615587" Dec 13 13:18:41.665934 containerd[1436]: time="2024-12-13T13:18:41.665893858Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:41.669556 containerd[1436]: time="2024-12-13T13:18:41.669496598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:41.670565 containerd[1436]: time="2024-12-13T13:18:41.670528076Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 1.735088976s" Dec 13 13:18:41.670565 containerd[1436]: time="2024-12-13T13:18:41.670565918Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Dec 13 13:18:41.671220 containerd[1436]: time="2024-12-13T13:18:41.671197915Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 13:18:42.793415 containerd[1436]: time="2024-12-13T13:18:42.793364381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:42.793908 containerd[1436]: time="2024-12-13T13:18:42.793860308Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470098" Dec 13 13:18:42.794638 containerd[1436]: time="2024-12-13T13:18:42.794615890Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:42.797378 containerd[1436]: time="2024-12-13T13:18:42.797327393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:42.800442 containerd[1436]: time="2024-12-13T13:18:42.799416559Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.128185559s" Dec 13 13:18:42.800442 containerd[1436]: time="2024-12-13T13:18:42.799457333Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Dec 13 13:18:42.801087 containerd[1436]: time="2024-12-13T13:18:42.800927237Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 13:18:42.937156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:18:42.947098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:18:43.043038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 13:18:43.046747 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:18:43.083926 kubelet[1858]: E1213 13:18:43.083826 1858 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:18:43.087054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:18:43.087186 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:18:44.021171 containerd[1436]: time="2024-12-13T13:18:44.021100747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:44.021833 containerd[1436]: time="2024-12-13T13:18:44.021769912Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024204" Dec 13 13:18:44.022421 containerd[1436]: time="2024-12-13T13:18:44.022389090Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:44.025174 containerd[1436]: time="2024-12-13T13:18:44.025137889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:44.026412 containerd[1436]: time="2024-12-13T13:18:44.026385927Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.225428051s" Dec 13 13:18:44.026446 containerd[1436]: time="2024-12-13T13:18:44.026414734Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Dec 13 13:18:44.026875 containerd[1436]: time="2024-12-13T13:18:44.026844208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 13:18:44.968220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4261782740.mount: Deactivated successfully. Dec 13 13:18:45.184509 containerd[1436]: time="2024-12-13T13:18:45.184459680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:45.185376 containerd[1436]: time="2024-12-13T13:18:45.185210749Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771428" Dec 13 13:18:45.185995 containerd[1436]: time="2024-12-13T13:18:45.185959642Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:45.187873 containerd[1436]: time="2024-12-13T13:18:45.187844246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:45.188672 containerd[1436]: time="2024-12-13T13:18:45.188638277Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.161753292s" Dec 13 13:18:45.188725 containerd[1436]: time="2024-12-13T13:18:45.188680231Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Dec 13 13:18:45.189265 containerd[1436]: time="2024-12-13T13:18:45.189243614Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:18:45.856033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506300939.mount: Deactivated successfully. Dec 13 13:18:46.461349 containerd[1436]: time="2024-12-13T13:18:46.461186562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:46.462311 containerd[1436]: time="2024-12-13T13:18:46.462229206Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Dec 13 13:18:46.462975 containerd[1436]: time="2024-12-13T13:18:46.462940273Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:46.466955 containerd[1436]: time="2024-12-13T13:18:46.466629126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:46.468536 containerd[1436]: time="2024-12-13T13:18:46.468469365Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.279188321s" Dec 13 13:18:46.468536 containerd[1436]: time="2024-12-13T13:18:46.468518850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 13:18:46.469139 containerd[1436]: time="2024-12-13T13:18:46.469107716Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 13:18:46.867810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873216978.mount: Deactivated successfully. Dec 13 13:18:46.871093 containerd[1436]: time="2024-12-13T13:18:46.871056549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:46.871747 containerd[1436]: time="2024-12-13T13:18:46.871708589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 13 13:18:46.872429 containerd[1436]: time="2024-12-13T13:18:46.872359541Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:46.874550 containerd[1436]: time="2024-12-13T13:18:46.874486462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:46.875331 containerd[1436]: time="2024-12-13T13:18:46.875181102Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 406.040933ms" Dec 13 
13:18:46.875331 containerd[1436]: time="2024-12-13T13:18:46.875214038Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 13 13:18:46.875891 containerd[1436]: time="2024-12-13T13:18:46.875736548Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 13:18:47.381297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206367260.mount: Deactivated successfully. Dec 13 13:18:48.895357 containerd[1436]: time="2024-12-13T13:18:48.895300569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:48.896040 containerd[1436]: time="2024-12-13T13:18:48.895998803Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Dec 13 13:18:48.896953 containerd[1436]: time="2024-12-13T13:18:48.896902673Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:48.899967 containerd[1436]: time="2024-12-13T13:18:48.899934332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:18:48.901750 containerd[1436]: time="2024-12-13T13:18:48.901710994Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.025944904s" Dec 13 13:18:48.901787 containerd[1436]: time="2024-12-13T13:18:48.901752081Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Dec 13 13:18:53.092021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 13:18:53.105072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:18:53.193695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:18:53.197123 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:18:53.229834 kubelet[2011]: E1213 13:18:53.229782 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:18:53.232253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:18:53.232392 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:18:54.207959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:18:54.215191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:18:54.233333 systemd[1]: Reloading requested from client PID 2027 ('systemctl') (unit session-5.scope)... Dec 13 13:18:54.233349 systemd[1]: Reloading... Dec 13 13:18:54.296024 zram_generator::config[2069]: No configuration found. Dec 13 13:18:54.410553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:18:54.465659 systemd[1]: Reloading finished in 232 ms. Dec 13 13:18:54.507075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 13:18:54.508510 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:18:54.511063 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:18:54.511248 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:18:54.512574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:18:54.606955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:18:54.610280 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:18:54.644685 kubelet[2113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:18:54.644685 kubelet[2113]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:18:54.644685 kubelet[2113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 13:18:54.644685 kubelet[2113]: I1213 13:18:54.644441 2113 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:18:55.063939 kubelet[2113]: I1213 13:18:55.063191 2113 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 13:18:55.063939 kubelet[2113]: I1213 13:18:55.063224 2113 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:18:55.063939 kubelet[2113]: I1213 13:18:55.063458 2113 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 13:18:55.089961 kubelet[2113]: E1213 13:18:55.089892 2113 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:18:55.090745 kubelet[2113]: I1213 13:18:55.090717 2113 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:18:55.097124 kubelet[2113]: E1213 13:18:55.097085 2113 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 13:18:55.097124 kubelet[2113]: I1213 13:18:55.097113 2113 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 13:18:55.100057 kubelet[2113]: I1213 13:18:55.100025 2113 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:18:55.100858 kubelet[2113]: I1213 13:18:55.100830 2113 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 13:18:55.101012 kubelet[2113]: I1213 13:18:55.100983 2113 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:18:55.101190 kubelet[2113]: I1213 13:18:55.101009 2113 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Dec 13 13:18:55.101334 kubelet[2113]: I1213 13:18:55.101317 2113 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:18:55.101334 kubelet[2113]: I1213 13:18:55.101328 2113 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 13:18:55.101520 kubelet[2113]: I1213 13:18:55.101501 2113 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:18:55.103218 kubelet[2113]: I1213 13:18:55.103189 2113 kubelet.go:408] "Attempting to sync node with API server" Dec 13 13:18:55.103247 kubelet[2113]: I1213 13:18:55.103218 2113 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:18:55.103247 kubelet[2113]: I1213 13:18:55.103239 2113 kubelet.go:314] "Adding apiserver pod source" Dec 13 13:18:55.103290 kubelet[2113]: I1213 13:18:55.103249 2113 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:18:55.106802 kubelet[2113]: W1213 13:18:55.106505 2113 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Dec 13 13:18:55.106802 kubelet[2113]: E1213 13:18:55.106560 2113 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:18:55.107056 kubelet[2113]: I1213 13:18:55.107034 2113 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:18:55.107776 kubelet[2113]: W1213 13:18:55.107700 2113 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Dec 13 13:18:55.107776 kubelet[2113]: E1213 13:18:55.107746 2113 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:18:55.108716 kubelet[2113]: I1213 13:18:55.108695 2113 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:18:55.109334 kubelet[2113]: W1213 13:18:55.109310 2113 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:18:55.110111 kubelet[2113]: I1213 13:18:55.110008 2113 server.go:1269] "Started kubelet" Dec 13 13:18:55.110835 kubelet[2113]: I1213 13:18:55.110757 2113 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:18:55.111279 kubelet[2113]: I1213 13:18:55.110929 2113 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:18:55.111279 kubelet[2113]: I1213 13:18:55.111050 2113 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:18:55.111731 kubelet[2113]: I1213 13:18:55.111526 2113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:18:55.112365 kubelet[2113]: I1213 13:18:55.112282 2113 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 13:18:55.112365 kubelet[2113]: I1213 13:18:55.112304 2113 server.go:460] "Adding debug handlers to kubelet server" Dec 13 13:18:55.113594 kubelet[2113]: I1213 
13:18:55.113565 2113 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 13:18:55.113676 kubelet[2113]: I1213 13:18:55.113661 2113 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 13:18:55.113963 kubelet[2113]: I1213 13:18:55.113730 2113 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:18:55.114096 kubelet[2113]: W1213 13:18:55.114052 2113 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Dec 13 13:18:55.114141 kubelet[2113]: E1213 13:18:55.114102 2113 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:18:55.114300 kubelet[2113]: I1213 13:18:55.114278 2113 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:18:55.114366 kubelet[2113]: I1213 13:18:55.114349 2113 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:18:55.114649 kubelet[2113]: E1213 13:18:55.114619 2113 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:18:55.114728 kubelet[2113]: E1213 13:18:55.114704 2113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms" Dec 13 13:18:55.116813 kubelet[2113]: E1213 13:18:55.115742 2113 kubelet.go:1478] "Image 
garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:18:55.116813 kubelet[2113]: I1213 13:18:55.115821 2113 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:18:55.119529 kubelet[2113]: E1213 13:18:55.113304 2113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bf10baadeb34 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:18:55.109983028 +0000 UTC m=+0.496788008,LastTimestamp:2024-12-13 13:18:55.109983028 +0000 UTC m=+0.496788008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:18:55.127433 kubelet[2113]: I1213 13:18:55.127401 2113 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:18:55.129004 kubelet[2113]: I1213 13:18:55.128972 2113 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:18:55.129004 kubelet[2113]: I1213 13:18:55.129000 2113 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:18:55.129087 kubelet[2113]: I1213 13:18:55.129015 2113 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 13:18:55.129087 kubelet[2113]: E1213 13:18:55.129057 2113 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:18:55.130655 kubelet[2113]: W1213 13:18:55.130575 2113 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Dec 13 13:18:55.130655 kubelet[2113]: E1213 13:18:55.130624 2113 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:18:55.132321 kubelet[2113]: I1213 13:18:55.132280 2113 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:18:55.132321 kubelet[2113]: I1213 13:18:55.132293 2113 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:18:55.132321 kubelet[2113]: I1213 13:18:55.132309 2113 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:18:55.194168 kubelet[2113]: I1213 13:18:55.194122 2113 policy_none.go:49] "None policy: Start" Dec 13 13:18:55.194950 kubelet[2113]: I1213 13:18:55.194936 2113 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:18:55.194986 kubelet[2113]: I1213 13:18:55.194960 2113 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:18:55.200084 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Dec 13 13:18:55.212218 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:18:55.214718 kubelet[2113]: E1213 13:18:55.214680 2113 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:18:55.214891 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:18:55.227647 kubelet[2113]: I1213 13:18:55.227598 2113 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:18:55.227921 kubelet[2113]: I1213 13:18:55.227800 2113 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 13:18:55.227921 kubelet[2113]: I1213 13:18:55.227819 2113 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:18:55.228320 kubelet[2113]: I1213 13:18:55.228288 2113 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:18:55.230566 kubelet[2113]: E1213 13:18:55.230480 2113 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:18:55.236362 systemd[1]: Created slice kubepods-burstable-pod833858ca13ba0084a9f41df4c8fb8061.slice - libcontainer container kubepods-burstable-pod833858ca13ba0084a9f41df4c8fb8061.slice. Dec 13 13:18:55.258128 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Dec 13 13:18:55.272258 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. 
Dec 13 13:18:55.315271 kubelet[2113]: E1213 13:18:55.315156 2113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" Dec 13 13:18:55.329206 kubelet[2113]: I1213 13:18:55.329176 2113 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 13:18:55.329557 kubelet[2113]: E1213 13:18:55.329535 2113 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Dec 13 13:18:55.415059 kubelet[2113]: I1213 13:18:55.414981 2113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:18:55.415059 kubelet[2113]: I1213 13:18:55.415051 2113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/833858ca13ba0084a9f41df4c8fb8061-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"833858ca13ba0084a9f41df4c8fb8061\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:18:55.415193 kubelet[2113]: I1213 13:18:55.415087 2113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:18:55.415193 kubelet[2113]: I1213 13:18:55.415106 2113 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:18:55.415193 kubelet[2113]: I1213 13:18:55.415121 2113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:18:55.415193 kubelet[2113]: I1213 13:18:55.415137 2113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:18:55.415193 kubelet[2113]: I1213 13:18:55.415151 2113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/833858ca13ba0084a9f41df4c8fb8061-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"833858ca13ba0084a9f41df4c8fb8061\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:18:55.415290 kubelet[2113]: I1213 13:18:55.415165 2113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/833858ca13ba0084a9f41df4c8fb8061-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"833858ca13ba0084a9f41df4c8fb8061\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:18:55.415290 kubelet[2113]: I1213 13:18:55.415178 2113 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:18:55.531795 kubelet[2113]: I1213 13:18:55.531433 2113 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 13:18:55.531795 kubelet[2113]: E1213 13:18:55.531755 2113 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Dec 13 13:18:55.556241 kubelet[2113]: E1213 13:18:55.556210 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:18:55.556983 containerd[1436]: time="2024-12-13T13:18:55.556939399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:833858ca13ba0084a9f41df4c8fb8061,Namespace:kube-system,Attempt:0,}" Dec 13 13:18:55.560011 kubelet[2113]: E1213 13:18:55.559979 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:18:55.560601 containerd[1436]: time="2024-12-13T13:18:55.560556971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Dec 13 13:18:55.575001 kubelet[2113]: E1213 13:18:55.574859 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:18:55.575414 containerd[1436]: time="2024-12-13T13:18:55.575370180Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Dec 13 13:18:55.716221 kubelet[2113]: E1213 13:18:55.716163 2113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Dec 13 13:18:55.933589 kubelet[2113]: I1213 13:18:55.933554 2113 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 13:18:55.933907 kubelet[2113]: E1213 13:18:55.933870 2113 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Dec 13 13:18:55.956387 kubelet[2113]: W1213 13:18:55.956351 2113 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Dec 13 13:18:55.956442 kubelet[2113]: E1213 13:18:55.956395 2113 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:18:56.029532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3991184175.mount: Deactivated successfully. 
Dec 13 13:18:56.034205 containerd[1436]: time="2024-12-13T13:18:56.034147331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:18:56.036096 containerd[1436]: time="2024-12-13T13:18:56.036062173Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:18:56.037890 containerd[1436]: time="2024-12-13T13:18:56.037741487Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Dec 13 13:18:56.038501 containerd[1436]: time="2024-12-13T13:18:56.038471714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:18:56.040132 containerd[1436]: time="2024-12-13T13:18:56.040072532Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:18:56.042269 containerd[1436]: time="2024-12-13T13:18:56.042221821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:18:56.042530 containerd[1436]: time="2024-12-13T13:18:56.042489766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:18:56.043873 containerd[1436]: time="2024-12-13T13:18:56.043837665Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.209114ms" Dec 13 13:18:56.044608 containerd[1436]: time="2024-12-13T13:18:56.044571619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:18:56.045617 containerd[1436]: time="2024-12-13T13:18:56.045589906Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 470.133357ms" Dec 13 13:18:56.054181 containerd[1436]: time="2024-12-13T13:18:56.054146433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.124272ms" Dec 13 13:18:56.201713 containerd[1436]: time="2024-12-13T13:18:56.201423588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:18:56.201713 containerd[1436]: time="2024-12-13T13:18:56.201607106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:18:56.201713 containerd[1436]: time="2024-12-13T13:18:56.201625177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:18:56.201869 containerd[1436]: time="2024-12-13T13:18:56.201776159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:18:56.202383 containerd[1436]: time="2024-12-13T13:18:56.202250462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:18:56.202383 containerd[1436]: time="2024-12-13T13:18:56.202307842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:18:56.202383 containerd[1436]: time="2024-12-13T13:18:56.202324831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:18:56.202553 containerd[1436]: time="2024-12-13T13:18:56.202483146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:18:56.203157 containerd[1436]: time="2024-12-13T13:18:56.203047125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:18:56.203157 containerd[1436]: time="2024-12-13T13:18:56.203107750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:18:56.203270 containerd[1436]: time="2024-12-13T13:18:56.203122415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:18:56.203405 containerd[1436]: time="2024-12-13T13:18:56.203253403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:18:56.231112 systemd[1]: Started cri-containerd-337112741032db5db7b1c2f0abf34f62df176332f892ccf1339252feb4b2a3c6.scope - libcontainer container 337112741032db5db7b1c2f0abf34f62df176332f892ccf1339252feb4b2a3c6. Dec 13 13:18:56.232505 systemd[1]: Started cri-containerd-3a9f41526c3a2c09f2ffd7f226c77efb3fa909c7ec27f382f51229e9a42b1693.scope - libcontainer container 3a9f41526c3a2c09f2ffd7f226c77efb3fa909c7ec27f382f51229e9a42b1693. Dec 13 13:18:56.234659 systemd[1]: Started cri-containerd-b79906a7d589a9dff3b64eb44af4f275a639ce108ab73d926c37302878d97aa8.scope - libcontainer container b79906a7d589a9dff3b64eb44af4f275a639ce108ab73d926c37302878d97aa8. Dec 13 13:18:56.265036 containerd[1436]: time="2024-12-13T13:18:56.264998102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"337112741032db5db7b1c2f0abf34f62df176332f892ccf1339252feb4b2a3c6\"" Dec 13 13:18:56.269728 kubelet[2113]: E1213 13:18:56.267451 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:18:56.271978 containerd[1436]: time="2024-12-13T13:18:56.271857564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"b79906a7d589a9dff3b64eb44af4f275a639ce108ab73d926c37302878d97aa8\"" Dec 13 13:18:56.273150 kubelet[2113]: E1213 13:18:56.273120 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:18:56.273639 containerd[1436]: time="2024-12-13T13:18:56.273601350Z" level=info msg="CreateContainer within sandbox 
\"337112741032db5db7b1c2f0abf34f62df176332f892ccf1339252feb4b2a3c6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:18:56.275066 containerd[1436]: time="2024-12-13T13:18:56.275035559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:833858ca13ba0084a9f41df4c8fb8061,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a9f41526c3a2c09f2ffd7f226c77efb3fa909c7ec27f382f51229e9a42b1693\"" Dec 13 13:18:56.275728 containerd[1436]: time="2024-12-13T13:18:56.275699351Z" level=info msg="CreateContainer within sandbox \"b79906a7d589a9dff3b64eb44af4f275a639ce108ab73d926c37302878d97aa8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:18:56.276031 kubelet[2113]: E1213 13:18:56.276008 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:18:56.278241 containerd[1436]: time="2024-12-13T13:18:56.278205619Z" level=info msg="CreateContainer within sandbox \"3a9f41526c3a2c09f2ffd7f226c77efb3fa909c7ec27f382f51229e9a42b1693\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:18:56.291223 containerd[1436]: time="2024-12-13T13:18:56.291161460Z" level=info msg="CreateContainer within sandbox \"337112741032db5db7b1c2f0abf34f62df176332f892ccf1339252feb4b2a3c6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3a4fde0ae4b8979f40c407d4b15068e7761ffff5be065f8532482c3e7fcd1f19\"" Dec 13 13:18:56.291953 containerd[1436]: time="2024-12-13T13:18:56.291920938Z" level=info msg="StartContainer for \"3a4fde0ae4b8979f40c407d4b15068e7761ffff5be065f8532482c3e7fcd1f19\"" Dec 13 13:18:56.295503 containerd[1436]: time="2024-12-13T13:18:56.295471098Z" level=info msg="CreateContainer within sandbox \"b79906a7d589a9dff3b64eb44af4f275a639ce108ab73d926c37302878d97aa8\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"78909bc408710b9ef0258e954fd20ac042542b073cf5f65c79dbe77692ad33eb\"" Dec 13 13:18:56.295981 containerd[1436]: time="2024-12-13T13:18:56.295847191Z" level=info msg="StartContainer for \"78909bc408710b9ef0258e954fd20ac042542b073cf5f65c79dbe77692ad33eb\"" Dec 13 13:18:56.298853 containerd[1436]: time="2024-12-13T13:18:56.298757802Z" level=info msg="CreateContainer within sandbox \"3a9f41526c3a2c09f2ffd7f226c77efb3fa909c7ec27f382f51229e9a42b1693\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf9ceacb00035920fbc0c6ce582c0abb06373f49ffdd8ccc4da9cbcb1d95d33e\"" Dec 13 13:18:56.299741 containerd[1436]: time="2024-12-13T13:18:56.299717186Z" level=info msg="StartContainer for \"cf9ceacb00035920fbc0c6ce582c0abb06373f49ffdd8ccc4da9cbcb1d95d33e\"" Dec 13 13:18:56.320142 systemd[1]: Started cri-containerd-3a4fde0ae4b8979f40c407d4b15068e7761ffff5be065f8532482c3e7fcd1f19.scope - libcontainer container 3a4fde0ae4b8979f40c407d4b15068e7761ffff5be065f8532482c3e7fcd1f19. Dec 13 13:18:56.332140 systemd[1]: Started cri-containerd-78909bc408710b9ef0258e954fd20ac042542b073cf5f65c79dbe77692ad33eb.scope - libcontainer container 78909bc408710b9ef0258e954fd20ac042542b073cf5f65c79dbe77692ad33eb. Dec 13 13:18:56.333471 systemd[1]: Started cri-containerd-cf9ceacb00035920fbc0c6ce582c0abb06373f49ffdd8ccc4da9cbcb1d95d33e.scope - libcontainer container cf9ceacb00035920fbc0c6ce582c0abb06373f49ffdd8ccc4da9cbcb1d95d33e. 
Dec 13 13:18:56.365471 containerd[1436]: time="2024-12-13T13:18:56.365411419Z" level=info msg="StartContainer for \"3a4fde0ae4b8979f40c407d4b15068e7761ffff5be065f8532482c3e7fcd1f19\" returns successfully" Dec 13 13:18:56.374694 containerd[1436]: time="2024-12-13T13:18:56.374642757Z" level=info msg="StartContainer for \"78909bc408710b9ef0258e954fd20ac042542b073cf5f65c79dbe77692ad33eb\" returns successfully" Dec 13 13:18:56.393139 containerd[1436]: time="2024-12-13T13:18:56.390978623Z" level=info msg="StartContainer for \"cf9ceacb00035920fbc0c6ce582c0abb06373f49ffdd8ccc4da9cbcb1d95d33e\" returns successfully" Dec 13 13:18:56.425265 kubelet[2113]: W1213 13:18:56.425149 2113 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Dec 13 13:18:56.425265 kubelet[2113]: E1213 13:18:56.425225 2113 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:18:56.433946 kubelet[2113]: W1213 13:18:56.433821 2113 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Dec 13 13:18:56.433946 kubelet[2113]: E1213 13:18:56.433888 2113 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" 
logger="UnhandledError" Dec 13 13:18:56.518022 kubelet[2113]: E1213 13:18:56.517218 2113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s" Dec 13 13:18:56.735216 kubelet[2113]: I1213 13:18:56.735182 2113 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 13:18:57.136328 kubelet[2113]: E1213 13:18:57.136284 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:18:57.138094 kubelet[2113]: E1213 13:18:57.137996 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:18:57.139196 kubelet[2113]: E1213 13:18:57.139172 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:18:58.141496 kubelet[2113]: E1213 13:18:58.141451 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:18:58.142386 kubelet[2113]: E1213 13:18:58.142176 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:18:58.647213 kubelet[2113]: E1213 13:18:58.647176 2113 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 13:18:58.737621 kubelet[2113]: I1213 13:18:58.737568 2113 kubelet_node_status.go:75] "Successfully registered node" 
node="localhost" Dec 13 13:18:59.105727 kubelet[2113]: I1213 13:18:59.105649 2113 apiserver.go:52] "Watching apiserver" Dec 13 13:18:59.114381 kubelet[2113]: I1213 13:18:59.114326 2113 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 13:18:59.942634 kubelet[2113]: E1213 13:18:59.940259 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:00.143136 kubelet[2113]: E1213 13:19:00.143107 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:00.589313 systemd[1]: Reloading requested from client PID 2389 ('systemctl') (unit session-5.scope)... Dec 13 13:19:00.589332 systemd[1]: Reloading... Dec 13 13:19:00.666098 zram_generator::config[2428]: No configuration found. Dec 13 13:19:00.802377 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:19:00.859099 kubelet[2113]: E1213 13:19:00.859058 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:00.873204 systemd[1]: Reloading finished in 283 ms. Dec 13 13:19:00.903699 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:19:00.919097 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:19:00.919326 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:19:00.938187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 13:19:01.036844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:19:01.041615 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:19:01.084022 kubelet[2470]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:19:01.084022 kubelet[2470]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:19:01.084022 kubelet[2470]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:19:01.084379 kubelet[2470]: I1213 13:19:01.084074 2470 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:19:01.089365 kubelet[2470]: I1213 13:19:01.089320 2470 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 13:19:01.089365 kubelet[2470]: I1213 13:19:01.089350 2470 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:19:01.089642 kubelet[2470]: I1213 13:19:01.089583 2470 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 13:19:01.091002 kubelet[2470]: I1213 13:19:01.090982 2470 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 13:19:01.093273 kubelet[2470]: I1213 13:19:01.093154 2470 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:19:01.096179 kubelet[2470]: E1213 13:19:01.096149 2470 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 13:19:01.096179 kubelet[2470]: I1213 13:19:01.096180 2470 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 13:19:01.100260 kubelet[2470]: I1213 13:19:01.100182 2470 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:19:01.100337 kubelet[2470]: I1213 13:19:01.100299 2470 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 13:19:01.100407 kubelet[2470]: I1213 13:19:01.100379 2470 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:19:01.100566 kubelet[2470]: I1213 13:19:01.100405 2470 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 13:19:01.100635 kubelet[2470]: I1213 13:19:01.100576 2470 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:19:01.100635 kubelet[2470]: I1213 13:19:01.100585 2470 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 13:19:01.100635 kubelet[2470]: I1213 13:19:01.100614 2470 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:19:01.100726 kubelet[2470]: I1213 13:19:01.100716 2470 kubelet.go:408] "Attempting 
to sync node with API server" Dec 13 13:19:01.100754 kubelet[2470]: I1213 13:19:01.100729 2470 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:19:01.100754 kubelet[2470]: I1213 13:19:01.100750 2470 kubelet.go:314] "Adding apiserver pod source" Dec 13 13:19:01.101133 kubelet[2470]: I1213 13:19:01.100760 2470 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:19:01.101703 kubelet[2470]: I1213 13:19:01.101678 2470 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:19:01.102175 kubelet[2470]: I1213 13:19:01.102154 2470 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:19:01.102560 kubelet[2470]: I1213 13:19:01.102542 2470 server.go:1269] "Started kubelet" Dec 13 13:19:01.103487 kubelet[2470]: I1213 13:19:01.103435 2470 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:19:01.103705 kubelet[2470]: I1213 13:19:01.103685 2470 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:19:01.103782 kubelet[2470]: I1213 13:19:01.103742 2470 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:19:01.103854 kubelet[2470]: I1213 13:19:01.103831 2470 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:19:01.105436 kubelet[2470]: I1213 13:19:01.105392 2470 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 13:19:01.105707 kubelet[2470]: I1213 13:19:01.105675 2470 server.go:460] "Adding debug handlers to kubelet server" Dec 13 13:19:01.106398 kubelet[2470]: I1213 13:19:01.106362 2470 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 13:19:01.106469 kubelet[2470]: I1213 13:19:01.106453 2470 
desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 13:19:01.107601 kubelet[2470]: I1213 13:19:01.106598 2470 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:19:01.107601 kubelet[2470]: E1213 13:19:01.107016 2470 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:19:01.108545 kubelet[2470]: I1213 13:19:01.108508 2470 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:19:01.108643 kubelet[2470]: I1213 13:19:01.108620 2470 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:19:01.110265 kubelet[2470]: I1213 13:19:01.110172 2470 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:19:01.126677 kubelet[2470]: E1213 13:19:01.126591 2470 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:19:01.135708 kubelet[2470]: I1213 13:19:01.135652 2470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:19:01.136913 kubelet[2470]: I1213 13:19:01.136851 2470 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:19:01.136913 kubelet[2470]: I1213 13:19:01.136877 2470 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:19:01.136913 kubelet[2470]: I1213 13:19:01.136893 2470 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 13:19:01.137636 kubelet[2470]: E1213 13:19:01.137600 2470 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:19:01.158786 kubelet[2470]: I1213 13:19:01.158758 2470 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:19:01.158786 kubelet[2470]: I1213 13:19:01.158778 2470 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:19:01.158959 kubelet[2470]: I1213 13:19:01.158801 2470 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:19:01.158995 kubelet[2470]: I1213 13:19:01.158977 2470 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:19:01.159018 kubelet[2470]: I1213 13:19:01.158994 2470 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:19:01.159018 kubelet[2470]: I1213 13:19:01.159013 2470 policy_none.go:49] "None policy: Start" Dec 13 13:19:01.159575 kubelet[2470]: I1213 13:19:01.159560 2470 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:19:01.159641 kubelet[2470]: I1213 13:19:01.159583 2470 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:19:01.159732 kubelet[2470]: I1213 13:19:01.159718 2470 state_mem.go:75] "Updated machine memory state" Dec 13 13:19:01.163468 kubelet[2470]: I1213 13:19:01.163441 2470 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:19:01.163924 kubelet[2470]: I1213 13:19:01.163627 2470 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 13:19:01.163924 kubelet[2470]: I1213 13:19:01.163644 2470 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:19:01.163924 kubelet[2470]: I1213 13:19:01.163832 2470 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:19:01.250096 kubelet[2470]: E1213 13:19:01.250048 2470 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 13:19:01.250635 kubelet[2470]: E1213 13:19:01.250573 2470 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 13:19:01.268214 kubelet[2470]: I1213 13:19:01.268187 2470 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 13:19:01.275549 kubelet[2470]: I1213 13:19:01.275501 2470 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Dec 13 13:19:01.275698 kubelet[2470]: I1213 13:19:01.275596 2470 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 13:19:01.307855 kubelet[2470]: I1213 13:19:01.307819 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/833858ca13ba0084a9f41df4c8fb8061-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"833858ca13ba0084a9f41df4c8fb8061\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:19:01.308259 kubelet[2470]: I1213 13:19:01.308056 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:19:01.308259 kubelet[2470]: I1213 13:19:01.308087 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:19:01.308259 kubelet[2470]: I1213 13:19:01.308122 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:19:01.308259 kubelet[2470]: I1213 13:19:01.308137 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/833858ca13ba0084a9f41df4c8fb8061-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"833858ca13ba0084a9f41df4c8fb8061\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:19:01.308259 kubelet[2470]: I1213 13:19:01.308152 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/833858ca13ba0084a9f41df4c8fb8061-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"833858ca13ba0084a9f41df4c8fb8061\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:19:01.308388 kubelet[2470]: I1213 13:19:01.308167 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:19:01.308388 kubelet[2470]: I1213 13:19:01.308194 2470 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:19:01.308388 kubelet[2470]: I1213 13:19:01.308220 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:19:01.551290 kubelet[2470]: E1213 13:19:01.551087 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:01.551290 kubelet[2470]: E1213 13:19:01.551116 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:01.551290 kubelet[2470]: E1213 13:19:01.551087 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:02.101732 kubelet[2470]: I1213 13:19:02.101682 2470 apiserver.go:52] "Watching apiserver" Dec 13 13:19:02.106684 kubelet[2470]: I1213 13:19:02.106662 2470 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 13:19:02.148828 kubelet[2470]: E1213 13:19:02.148797 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:02.149799 kubelet[2470]: E1213 13:19:02.149104 2470 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:02.158053 kubelet[2470]: E1213 13:19:02.157820 2470 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 13:19:02.158053 kubelet[2470]: E1213 13:19:02.158016 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:02.175712 kubelet[2470]: I1213 13:19:02.175636 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.175612267 podStartE2EDuration="3.175612267s" podCreationTimestamp="2024-12-13 13:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:19:02.168928017 +0000 UTC m=+1.124167740" watchObservedRunningTime="2024-12-13 13:19:02.175612267 +0000 UTC m=+1.130851950" Dec 13 13:19:02.184129 kubelet[2470]: I1213 13:19:02.184066 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.184049804 podStartE2EDuration="2.184049804s" podCreationTimestamp="2024-12-13 13:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:19:02.183966259 +0000 UTC m=+1.139205982" watchObservedRunningTime="2024-12-13 13:19:02.184049804 +0000 UTC m=+1.139289527" Dec 13 13:19:02.184266 kubelet[2470]: I1213 13:19:02.184150 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.184145078 podStartE2EDuration="1.184145078s" podCreationTimestamp="2024-12-13 
13:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:19:02.176139438 +0000 UTC m=+1.131379161" watchObservedRunningTime="2024-12-13 13:19:02.184145078 +0000 UTC m=+1.139384801" Dec 13 13:19:02.335613 sudo[1573]: pam_unix(sudo:session): session closed for user root Dec 13 13:19:02.336832 sshd[1572]: Connection closed by 10.0.0.1 port 52722 Dec 13 13:19:02.337245 sshd-session[1570]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:02.340456 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:52722.service: Deactivated successfully. Dec 13 13:19:02.342718 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:19:02.343135 systemd[1]: session-5.scope: Consumed 6.579s CPU time, 158.3M memory peak, 0B memory swap peak. Dec 13 13:19:02.343632 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:19:02.344354 systemd-logind[1419]: Removed session 5. 
Dec 13 13:19:03.150920 kubelet[2470]: E1213 13:19:03.150826 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:04.534546 kubelet[2470]: E1213 13:19:04.534508 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:05.431415 kubelet[2470]: E1213 13:19:05.431378 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:06.100403 kubelet[2470]: I1213 13:19:06.100367 2470 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:19:06.100760 containerd[1436]: time="2024-12-13T13:19:06.100659736Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 13:19:06.100958 kubelet[2470]: I1213 13:19:06.100889 2470 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:19:07.037424 systemd[1]: Created slice kubepods-besteffort-pode50142ee_c3b8_416d_b85a_d05d3f27af25.slice - libcontainer container kubepods-besteffort-pode50142ee_c3b8_416d_b85a_d05d3f27af25.slice. 
Dec 13 13:19:07.050690 kubelet[2470]: I1213 13:19:07.050061 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e50142ee-c3b8-416d-b85a-d05d3f27af25-lib-modules\") pod \"kube-proxy-mdmxp\" (UID: \"e50142ee-c3b8-416d-b85a-d05d3f27af25\") " pod="kube-system/kube-proxy-mdmxp" Dec 13 13:19:07.050690 kubelet[2470]: I1213 13:19:07.050123 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/488f46c7-2090-45e7-bc34-ac3b358d8eb4-cni\") pod \"kube-flannel-ds-vr62f\" (UID: \"488f46c7-2090-45e7-bc34-ac3b358d8eb4\") " pod="kube-flannel/kube-flannel-ds-vr62f" Dec 13 13:19:07.050690 kubelet[2470]: I1213 13:19:07.050144 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e50142ee-c3b8-416d-b85a-d05d3f27af25-kube-proxy\") pod \"kube-proxy-mdmxp\" (UID: \"e50142ee-c3b8-416d-b85a-d05d3f27af25\") " pod="kube-system/kube-proxy-mdmxp" Dec 13 13:19:07.050690 kubelet[2470]: I1213 13:19:07.050160 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/488f46c7-2090-45e7-bc34-ac3b358d8eb4-xtables-lock\") pod \"kube-flannel-ds-vr62f\" (UID: \"488f46c7-2090-45e7-bc34-ac3b358d8eb4\") " pod="kube-flannel/kube-flannel-ds-vr62f" Dec 13 13:19:07.050690 kubelet[2470]: I1213 13:19:07.050174 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/488f46c7-2090-45e7-bc34-ac3b358d8eb4-run\") pod \"kube-flannel-ds-vr62f\" (UID: \"488f46c7-2090-45e7-bc34-ac3b358d8eb4\") " pod="kube-flannel/kube-flannel-ds-vr62f" Dec 13 13:19:07.050894 kubelet[2470]: I1213 13:19:07.050188 2470 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/488f46c7-2090-45e7-bc34-ac3b358d8eb4-flannel-cfg\") pod \"kube-flannel-ds-vr62f\" (UID: \"488f46c7-2090-45e7-bc34-ac3b358d8eb4\") " pod="kube-flannel/kube-flannel-ds-vr62f" Dec 13 13:19:07.050894 kubelet[2470]: I1213 13:19:07.050203 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shtfj\" (UniqueName: \"kubernetes.io/projected/488f46c7-2090-45e7-bc34-ac3b358d8eb4-kube-api-access-shtfj\") pod \"kube-flannel-ds-vr62f\" (UID: \"488f46c7-2090-45e7-bc34-ac3b358d8eb4\") " pod="kube-flannel/kube-flannel-ds-vr62f" Dec 13 13:19:07.050894 kubelet[2470]: I1213 13:19:07.050221 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e50142ee-c3b8-416d-b85a-d05d3f27af25-xtables-lock\") pod \"kube-proxy-mdmxp\" (UID: \"e50142ee-c3b8-416d-b85a-d05d3f27af25\") " pod="kube-system/kube-proxy-mdmxp" Dec 13 13:19:07.050894 kubelet[2470]: I1213 13:19:07.050235 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bzmv\" (UniqueName: \"kubernetes.io/projected/e50142ee-c3b8-416d-b85a-d05d3f27af25-kube-api-access-2bzmv\") pod \"kube-proxy-mdmxp\" (UID: \"e50142ee-c3b8-416d-b85a-d05d3f27af25\") " pod="kube-system/kube-proxy-mdmxp" Dec 13 13:19:07.050894 kubelet[2470]: I1213 13:19:07.050253 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/488f46c7-2090-45e7-bc34-ac3b358d8eb4-cni-plugin\") pod \"kube-flannel-ds-vr62f\" (UID: \"488f46c7-2090-45e7-bc34-ac3b358d8eb4\") " pod="kube-flannel/kube-flannel-ds-vr62f" Dec 13 13:19:07.052233 systemd[1]: Created slice kubepods-burstable-pod488f46c7_2090_45e7_bc34_ac3b358d8eb4.slice - libcontainer 
container kubepods-burstable-pod488f46c7_2090_45e7_bc34_ac3b358d8eb4.slice. Dec 13 13:19:07.348311 kubelet[2470]: E1213 13:19:07.348270 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:07.349513 containerd[1436]: time="2024-12-13T13:19:07.349456794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdmxp,Uid:e50142ee-c3b8-416d-b85a-d05d3f27af25,Namespace:kube-system,Attempt:0,}" Dec 13 13:19:07.355539 kubelet[2470]: E1213 13:19:07.355280 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:07.356070 containerd[1436]: time="2024-12-13T13:19:07.355825858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vr62f,Uid:488f46c7-2090-45e7-bc34-ac3b358d8eb4,Namespace:kube-flannel,Attempt:0,}" Dec 13 13:19:07.371164 containerd[1436]: time="2024-12-13T13:19:07.370854498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:19:07.371164 containerd[1436]: time="2024-12-13T13:19:07.370965152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:19:07.371164 containerd[1436]: time="2024-12-13T13:19:07.370981166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:19:07.371164 containerd[1436]: time="2024-12-13T13:19:07.371071163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:19:07.378075 containerd[1436]: time="2024-12-13T13:19:07.377943936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:19:07.378075 containerd[1436]: time="2024-12-13T13:19:07.378047344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:19:07.378075 containerd[1436]: time="2024-12-13T13:19:07.378066640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:19:07.378553 containerd[1436]: time="2024-12-13T13:19:07.378511179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:19:07.392228 systemd[1]: Started cri-containerd-adb1aefb6f8285dc32aecf352066f05716837ab489783b471dd6362f2d02a3a0.scope - libcontainer container adb1aefb6f8285dc32aecf352066f05716837ab489783b471dd6362f2d02a3a0. Dec 13 13:19:07.395175 systemd[1]: Started cri-containerd-875162e8c55a77557b4e9796f100447fdd02038582fc4095938cf7380e428924.scope - libcontainer container 875162e8c55a77557b4e9796f100447fdd02038582fc4095938cf7380e428924. 
Dec 13 13:19:07.415185 containerd[1436]: time="2024-12-13T13:19:07.415084008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdmxp,Uid:e50142ee-c3b8-416d-b85a-d05d3f27af25,Namespace:kube-system,Attempt:0,} returns sandbox id \"adb1aefb6f8285dc32aecf352066f05716837ab489783b471dd6362f2d02a3a0\"" Dec 13 13:19:07.416503 kubelet[2470]: E1213 13:19:07.416479 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:07.419284 containerd[1436]: time="2024-12-13T13:19:07.419184980Z" level=info msg="CreateContainer within sandbox \"adb1aefb6f8285dc32aecf352066f05716837ab489783b471dd6362f2d02a3a0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:19:07.429503 containerd[1436]: time="2024-12-13T13:19:07.429463134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vr62f,Uid:488f46c7-2090-45e7-bc34-ac3b358d8eb4,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"875162e8c55a77557b4e9796f100447fdd02038582fc4095938cf7380e428924\"" Dec 13 13:19:07.430252 kubelet[2470]: E1213 13:19:07.430028 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:07.432145 containerd[1436]: time="2024-12-13T13:19:07.432107466Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 13:19:07.436521 containerd[1436]: time="2024-12-13T13:19:07.436477628Z" level=info msg="CreateContainer within sandbox \"adb1aefb6f8285dc32aecf352066f05716837ab489783b471dd6362f2d02a3a0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e21972445b52e7c8917ad10436862d6a6fc4ba9eff44ea91fe93bea208c595d4\"" Dec 13 13:19:07.437078 containerd[1436]: time="2024-12-13T13:19:07.437020691Z" level=info msg="StartContainer for 
\"e21972445b52e7c8917ad10436862d6a6fc4ba9eff44ea91fe93bea208c595d4\"" Dec 13 13:19:07.465108 systemd[1]: Started cri-containerd-e21972445b52e7c8917ad10436862d6a6fc4ba9eff44ea91fe93bea208c595d4.scope - libcontainer container e21972445b52e7c8917ad10436862d6a6fc4ba9eff44ea91fe93bea208c595d4. Dec 13 13:19:07.495896 containerd[1436]: time="2024-12-13T13:19:07.495773529Z" level=info msg="StartContainer for \"e21972445b52e7c8917ad10436862d6a6fc4ba9eff44ea91fe93bea208c595d4\" returns successfully" Dec 13 13:19:08.169871 kubelet[2470]: E1213 13:19:08.169833 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:08.180501 kubelet[2470]: I1213 13:19:08.179535 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mdmxp" podStartSLOduration=1.179516406 podStartE2EDuration="1.179516406s" podCreationTimestamp="2024-12-13 13:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:19:08.179302514 +0000 UTC m=+7.134542237" watchObservedRunningTime="2024-12-13 13:19:08.179516406 +0000 UTC m=+7.134756129" Dec 13 13:19:08.417571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123161481.mount: Deactivated successfully. 
Dec 13 13:19:08.443508 containerd[1436]: time="2024-12-13T13:19:08.442639498Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:08.443508 containerd[1436]: time="2024-12-13T13:19:08.443134817Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Dec 13 13:19:08.444216 containerd[1436]: time="2024-12-13T13:19:08.444179019Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:08.446731 containerd[1436]: time="2024-12-13T13:19:08.446694766Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:08.447742 containerd[1436]: time="2024-12-13T13:19:08.447705501Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.015558001s" Dec 13 13:19:08.448273 containerd[1436]: time="2024-12-13T13:19:08.448238570Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Dec 13 13:19:08.450347 containerd[1436]: time="2024-12-13T13:19:08.450317886Z" level=info msg="CreateContainer within sandbox \"875162e8c55a77557b4e9796f100447fdd02038582fc4095938cf7380e428924\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 13:19:08.459980 containerd[1436]: 
time="2024-12-13T13:19:08.459941041Z" level=info msg="CreateContainer within sandbox \"875162e8c55a77557b4e9796f100447fdd02038582fc4095938cf7380e428924\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"e0ac57a8bb511d6cc79d5e40e0c197238249e9b15fe32b06ee78bfb9909586f2\"" Dec 13 13:19:08.460983 containerd[1436]: time="2024-12-13T13:19:08.460892768Z" level=info msg="StartContainer for \"e0ac57a8bb511d6cc79d5e40e0c197238249e9b15fe32b06ee78bfb9909586f2\"" Dec 13 13:19:08.462509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482223441.mount: Deactivated successfully. Dec 13 13:19:08.486165 systemd[1]: Started cri-containerd-e0ac57a8bb511d6cc79d5e40e0c197238249e9b15fe32b06ee78bfb9909586f2.scope - libcontainer container e0ac57a8bb511d6cc79d5e40e0c197238249e9b15fe32b06ee78bfb9909586f2. Dec 13 13:19:08.508704 containerd[1436]: time="2024-12-13T13:19:08.508645413Z" level=info msg="StartContainer for \"e0ac57a8bb511d6cc79d5e40e0c197238249e9b15fe32b06ee78bfb9909586f2\" returns successfully" Dec 13 13:19:08.514393 systemd[1]: cri-containerd-e0ac57a8bb511d6cc79d5e40e0c197238249e9b15fe32b06ee78bfb9909586f2.scope: Deactivated successfully. 
Dec 13 13:19:08.550500 containerd[1436]: time="2024-12-13T13:19:08.550286691Z" level=info msg="shim disconnected" id=e0ac57a8bb511d6cc79d5e40e0c197238249e9b15fe32b06ee78bfb9909586f2 namespace=k8s.io Dec 13 13:19:08.550500 containerd[1436]: time="2024-12-13T13:19:08.550341496Z" level=warning msg="cleaning up after shim disconnected" id=e0ac57a8bb511d6cc79d5e40e0c197238249e9b15fe32b06ee78bfb9909586f2 namespace=k8s.io Dec 13 13:19:08.550500 containerd[1436]: time="2024-12-13T13:19:08.550349742Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:19:09.172351 kubelet[2470]: E1213 13:19:09.172199 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:09.173088 containerd[1436]: time="2024-12-13T13:19:09.173033675Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 13:19:09.669013 kubelet[2470]: E1213 13:19:09.668976 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:10.173999 kubelet[2470]: E1213 13:19:10.173969 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:10.209860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275535496.mount: Deactivated successfully. 
Dec 13 13:19:10.688299 containerd[1436]: time="2024-12-13T13:19:10.688253260Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:10.688920 containerd[1436]: time="2024-12-13T13:19:10.688855014Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260" Dec 13 13:19:10.689731 containerd[1436]: time="2024-12-13T13:19:10.689702227Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:10.692447 containerd[1436]: time="2024-12-13T13:19:10.692414987Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:10.694116 containerd[1436]: time="2024-12-13T13:19:10.693628184Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.52055596s" Dec 13 13:19:10.694116 containerd[1436]: time="2024-12-13T13:19:10.693660648Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Dec 13 13:19:10.696656 containerd[1436]: time="2024-12-13T13:19:10.696622028Z" level=info msg="CreateContainer within sandbox \"875162e8c55a77557b4e9796f100447fdd02038582fc4095938cf7380e428924\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:19:10.714449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685960357.mount: Deactivated 
successfully.
Dec 13 13:19:10.716727 containerd[1436]: time="2024-12-13T13:19:10.716564801Z" level=info msg="CreateContainer within sandbox \"875162e8c55a77557b4e9796f100447fdd02038582fc4095938cf7380e428924\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dbc73a16eec35d5f2e4f6b57f33075bdf206ca872a31ae89908e46f698a5adab\""
Dec 13 13:19:10.717918 containerd[1436]: time="2024-12-13T13:19:10.717111477Z" level=info msg="StartContainer for \"dbc73a16eec35d5f2e4f6b57f33075bdf206ca872a31ae89908e46f698a5adab\""
Dec 13 13:19:10.742119 systemd[1]: Started cri-containerd-dbc73a16eec35d5f2e4f6b57f33075bdf206ca872a31ae89908e46f698a5adab.scope - libcontainer container dbc73a16eec35d5f2e4f6b57f33075bdf206ca872a31ae89908e46f698a5adab.
Dec 13 13:19:10.773070 systemd[1]: cri-containerd-dbc73a16eec35d5f2e4f6b57f33075bdf206ca872a31ae89908e46f698a5adab.scope: Deactivated successfully.
Dec 13 13:19:10.888460 containerd[1436]: time="2024-12-13T13:19:10.888409640Z" level=info msg="StartContainer for \"dbc73a16eec35d5f2e4f6b57f33075bdf206ca872a31ae89908e46f698a5adab\" returns successfully"
Dec 13 13:19:10.895698 kubelet[2470]: I1213 13:19:10.895459 2470 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 13:19:10.916846 containerd[1436]: time="2024-12-13T13:19:10.916773300Z" level=info msg="shim disconnected" id=dbc73a16eec35d5f2e4f6b57f33075bdf206ca872a31ae89908e46f698a5adab namespace=k8s.io
Dec 13 13:19:10.916846 containerd[1436]: time="2024-12-13T13:19:10.916827939Z" level=warning msg="cleaning up after shim disconnected" id=dbc73a16eec35d5f2e4f6b57f33075bdf206ca872a31ae89908e46f698a5adab namespace=k8s.io
Dec 13 13:19:10.916846 containerd[1436]: time="2024-12-13T13:19:10.916843391Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:19:10.929535 systemd[1]: Created slice kubepods-burstable-podbdfa528e_c9b4_4afb_9fec_ecbc7ae9d2d1.slice - libcontainer container kubepods-burstable-podbdfa528e_c9b4_4afb_9fec_ecbc7ae9d2d1.slice.
Dec 13 13:19:10.936871 systemd[1]: Created slice kubepods-burstable-podc6b98a3f_6f9f_4ff9_8ee2_1f64cb5db9e1.slice - libcontainer container kubepods-burstable-podc6b98a3f_6f9f_4ff9_8ee2_1f64cb5db9e1.slice.
Dec 13 13:19:10.976744 kubelet[2470]: I1213 13:19:10.976695 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9cw4\" (UniqueName: \"kubernetes.io/projected/bdfa528e-c9b4-4afb-9fec-ecbc7ae9d2d1-kube-api-access-m9cw4\") pod \"coredns-6f6b679f8f-h9rv2\" (UID: \"bdfa528e-c9b4-4afb-9fec-ecbc7ae9d2d1\") " pod="kube-system/coredns-6f6b679f8f-h9rv2"
Dec 13 13:19:10.976744 kubelet[2470]: I1213 13:19:10.976741 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdfa528e-c9b4-4afb-9fec-ecbc7ae9d2d1-config-volume\") pod \"coredns-6f6b679f8f-h9rv2\" (UID: \"bdfa528e-c9b4-4afb-9fec-ecbc7ae9d2d1\") " pod="kube-system/coredns-6f6b679f8f-h9rv2"
Dec 13 13:19:10.976948 kubelet[2470]: I1213 13:19:10.976761 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnltc\" (UniqueName: \"kubernetes.io/projected/c6b98a3f-6f9f-4ff9-8ee2-1f64cb5db9e1-kube-api-access-dnltc\") pod \"coredns-6f6b679f8f-xvpcf\" (UID: \"c6b98a3f-6f9f-4ff9-8ee2-1f64cb5db9e1\") " pod="kube-system/coredns-6f6b679f8f-xvpcf"
Dec 13 13:19:10.976948 kubelet[2470]: I1213 13:19:10.976779 2470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6b98a3f-6f9f-4ff9-8ee2-1f64cb5db9e1-config-volume\") pod \"coredns-6f6b679f8f-xvpcf\" (UID: \"c6b98a3f-6f9f-4ff9-8ee2-1f64cb5db9e1\") " pod="kube-system/coredns-6f6b679f8f-xvpcf"
Dec 13 13:19:11.176258 kubelet[2470]: E1213 13:19:11.176222 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:11.179342 containerd[1436]: time="2024-12-13T13:19:11.179080541Z" level=info msg="CreateContainer within sandbox \"875162e8c55a77557b4e9796f100447fdd02038582fc4095938cf7380e428924\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Dec 13 13:19:11.209525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbc73a16eec35d5f2e4f6b57f33075bdf206ca872a31ae89908e46f698a5adab-rootfs.mount: Deactivated successfully.
Dec 13 13:19:11.210390 containerd[1436]: time="2024-12-13T13:19:11.210343757Z" level=info msg="CreateContainer within sandbox \"875162e8c55a77557b4e9796f100447fdd02038582fc4095938cf7380e428924\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"005896cba0af837ac436de7ab3cbfab90fc060c020ec2cbcd48e2443bc73b990\""
Dec 13 13:19:11.211170 containerd[1436]: time="2024-12-13T13:19:11.211144145Z" level=info msg="StartContainer for \"005896cba0af837ac436de7ab3cbfab90fc060c020ec2cbcd48e2443bc73b990\""
Dec 13 13:19:11.233373 kubelet[2470]: E1213 13:19:11.233334 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:11.234326 containerd[1436]: time="2024-12-13T13:19:11.234285997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h9rv2,Uid:bdfa528e-c9b4-4afb-9fec-ecbc7ae9d2d1,Namespace:kube-system,Attempt:0,}"
Dec 13 13:19:11.235066 systemd[1]: Started cri-containerd-005896cba0af837ac436de7ab3cbfab90fc060c020ec2cbcd48e2443bc73b990.scope - libcontainer container 005896cba0af837ac436de7ab3cbfab90fc060c020ec2cbcd48e2443bc73b990.
Dec 13 13:19:11.244572 kubelet[2470]: E1213 13:19:11.244198 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:11.246881 containerd[1436]: time="2024-12-13T13:19:11.246488996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xvpcf,Uid:c6b98a3f-6f9f-4ff9-8ee2-1f64cb5db9e1,Namespace:kube-system,Attempt:0,}"
Dec 13 13:19:11.296923 containerd[1436]: time="2024-12-13T13:19:11.296863423Z" level=info msg="StartContainer for \"005896cba0af837ac436de7ab3cbfab90fc060c020ec2cbcd48e2443bc73b990\" returns successfully"
Dec 13 13:19:11.339157 containerd[1436]: time="2024-12-13T13:19:11.339117248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h9rv2,Uid:bdfa528e-c9b4-4afb-9fec-ecbc7ae9d2d1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b6f6f4bcb5729bca9596a17b7c623a44790a27ff2f01e452666450b42f9877c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 13:19:11.339876 kubelet[2470]: E1213 13:19:11.339838 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6f6f4bcb5729bca9596a17b7c623a44790a27ff2f01e452666450b42f9877c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 13:19:11.340017 kubelet[2470]: E1213 13:19:11.339940 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6f6f4bcb5729bca9596a17b7c623a44790a27ff2f01e452666450b42f9877c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-h9rv2"
Dec 13 13:19:11.341059 containerd[1436]: time="2024-12-13T13:19:11.340960870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xvpcf,Uid:c6b98a3f-6f9f-4ff9-8ee2-1f64cb5db9e1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9406d9b6dda8f5d2182e92a83d3096365209a816a77a31ba5b69a85269f53aa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 13:19:11.341264 kubelet[2470]: E1213 13:19:11.341232 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9406d9b6dda8f5d2182e92a83d3096365209a816a77a31ba5b69a85269f53aa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 13:19:11.341315 kubelet[2470]: E1213 13:19:11.341276 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9406d9b6dda8f5d2182e92a83d3096365209a816a77a31ba5b69a85269f53aa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-xvpcf"
Dec 13 13:19:11.342758 kubelet[2470]: E1213 13:19:11.342721 2470 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9406d9b6dda8f5d2182e92a83d3096365209a816a77a31ba5b69a85269f53aa\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-xvpcf"
Dec 13 13:19:11.342828 kubelet[2470]: E1213 13:19:11.342796 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xvpcf_kube-system(c6b98a3f-6f9f-4ff9-8ee2-1f64cb5db9e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xvpcf_kube-system(c6b98a3f-6f9f-4ff9-8ee2-1f64cb5db9e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9406d9b6dda8f5d2182e92a83d3096365209a816a77a31ba5b69a85269f53aa\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-xvpcf" podUID="c6b98a3f-6f9f-4ff9-8ee2-1f64cb5db9e1"
Dec 13 13:19:11.343500 kubelet[2470]: E1213 13:19:11.343471 2470 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6f6f4bcb5729bca9596a17b7c623a44790a27ff2f01e452666450b42f9877c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-h9rv2"
Dec 13 13:19:11.343544 kubelet[2470]: E1213 13:19:11.343523 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-h9rv2_kube-system(bdfa528e-c9b4-4afb-9fec-ecbc7ae9d2d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-h9rv2_kube-system(bdfa528e-c9b4-4afb-9fec-ecbc7ae9d2d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b6f6f4bcb5729bca9596a17b7c623a44790a27ff2f01e452666450b42f9877c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-h9rv2" podUID="bdfa528e-c9b4-4afb-9fec-ecbc7ae9d2d1"
Dec 13 13:19:12.179628 kubelet[2470]: E1213 13:19:12.179585 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:12.191031 kubelet[2470]: I1213 13:19:12.190777 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-vr62f" podStartSLOduration=1.927619808 podStartE2EDuration="5.190758434s" podCreationTimestamp="2024-12-13 13:19:07 +0000 UTC" firstStartedPulling="2024-12-13 13:19:07.431341894 +0000 UTC m=+6.386581617" lastFinishedPulling="2024-12-13 13:19:10.69448052 +0000 UTC m=+9.649720243" observedRunningTime="2024-12-13 13:19:12.190616742 +0000 UTC m=+11.145856465" watchObservedRunningTime="2024-12-13 13:19:12.190758434 +0000 UTC m=+11.145998157"
Dec 13 13:19:12.207517 systemd[1]: run-netns-cni\x2df0b03097\x2dbd44\x2d2312\x2d62f4\x2d3434528bf9cc.mount: Deactivated successfully.
Dec 13 13:19:12.207607 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9406d9b6dda8f5d2182e92a83d3096365209a816a77a31ba5b69a85269f53aa-shm.mount: Deactivated successfully.
Dec 13 13:19:12.207660 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b6f6f4bcb5729bca9596a17b7c623a44790a27ff2f01e452666450b42f9877c-shm.mount: Deactivated successfully.
Dec 13 13:19:12.406115 systemd-networkd[1380]: flannel.1: Link UP
Dec 13 13:19:12.406128 systemd-networkd[1380]: flannel.1: Gained carrier
Dec 13 13:19:13.181486 kubelet[2470]: E1213 13:19:13.181444 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:13.627138 systemd-networkd[1380]: flannel.1: Gained IPv6LL
Dec 13 13:19:14.541807 kubelet[2470]: E1213 13:19:14.541779 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:15.438349 kubelet[2470]: E1213 13:19:15.438310 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:16.129057 update_engine[1423]: I20241213 13:19:16.128960 1423 update_attempter.cc:509] Updating boot flags...
Dec 13 13:19:16.148948 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3127)
Dec 13 13:19:16.178960 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3127)
Dec 13 13:19:25.142159 kubelet[2470]: E1213 13:19:25.137899 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:25.142577 containerd[1436]: time="2024-12-13T13:19:25.142413985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h9rv2,Uid:bdfa528e-c9b4-4afb-9fec-ecbc7ae9d2d1,Namespace:kube-system,Attempt:0,}"
Dec 13 13:19:25.151193 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:56802.service - OpenSSH per-connection server daemon (10.0.0.1:56802).
Dec 13 13:19:25.172212 systemd-networkd[1380]: cni0: Link UP
Dec 13 13:19:25.172219 systemd-networkd[1380]: cni0: Gained carrier
Dec 13 13:19:25.172487 systemd-networkd[1380]: cni0: Lost carrier
Dec 13 13:19:25.182079 systemd-networkd[1380]: veth8b827abb: Link UP
Dec 13 13:19:25.183952 kernel: cni0: port 1(veth8b827abb) entered blocking state
Dec 13 13:19:25.184019 kernel: cni0: port 1(veth8b827abb) entered disabled state
Dec 13 13:19:25.184038 kernel: veth8b827abb: entered allmulticast mode
Dec 13 13:19:25.184057 kernel: veth8b827abb: entered promiscuous mode
Dec 13 13:19:25.185014 kernel: cni0: port 1(veth8b827abb) entered blocking state
Dec 13 13:19:25.185059 kernel: cni0: port 1(veth8b827abb) entered forwarding state
Dec 13 13:19:25.186271 kernel: cni0: port 1(veth8b827abb) entered disabled state
Dec 13 13:19:25.195824 sshd[3177]: Accepted publickey for core from 10.0.0.1 port 56802 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:25.197503 kernel: cni0: port 1(veth8b827abb) entered blocking state
Dec 13 13:19:25.197554 kernel: cni0: port 1(veth8b827abb) entered forwarding state
Dec 13 13:19:25.197484 systemd-networkd[1380]: veth8b827abb: Gained carrier
Dec 13 13:19:25.198405 sshd-session[3177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:25.198430 systemd-networkd[1380]: cni0: Gained carrier
Dec 13 13:19:25.201253 containerd[1436]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"}
Dec 13 13:19:25.201253 containerd[1436]: delegateAdd: netconf sent to delegate plugin:
Dec 13 13:19:25.203000 systemd-logind[1419]: New session 6 of user core.
Dec 13 13:19:25.206043 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 13:19:25.217625 containerd[1436]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T13:19:25.217394815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:19:25.217625 containerd[1436]: time="2024-12-13T13:19:25.217454195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:19:25.217625 containerd[1436]: time="2024-12-13T13:19:25.217468120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:19:25.217625 containerd[1436]: time="2024-12-13T13:19:25.217539105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:19:25.236097 systemd[1]: Started cri-containerd-6d58c1f07ee55916b32a4b87c3708164159eca9133332de16b8772f17a4546ad.scope - libcontainer container 6d58c1f07ee55916b32a4b87c3708164159eca9133332de16b8772f17a4546ad.
Dec 13 13:19:25.244886 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:19:25.263006 containerd[1436]: time="2024-12-13T13:19:25.262230907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h9rv2,Uid:bdfa528e-c9b4-4afb-9fec-ecbc7ae9d2d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d58c1f07ee55916b32a4b87c3708164159eca9133332de16b8772f17a4546ad\""
Dec 13 13:19:25.264974 kubelet[2470]: E1213 13:19:25.264947 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:25.268152 containerd[1436]: time="2024-12-13T13:19:25.268124039Z" level=info msg="CreateContainer within sandbox \"6d58c1f07ee55916b32a4b87c3708164159eca9133332de16b8772f17a4546ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:19:25.277974 containerd[1436]: time="2024-12-13T13:19:25.277563926Z" level=info msg="CreateContainer within sandbox \"6d58c1f07ee55916b32a4b87c3708164159eca9133332de16b8772f17a4546ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2eade417d8e219a4071f4e0cbe0903fda17f76f423ec8445511822890acf1267\""
Dec 13 13:19:25.278195 containerd[1436]: time="2024-12-13T13:19:25.278167056Z" level=info msg="StartContainer for \"2eade417d8e219a4071f4e0cbe0903fda17f76f423ec8445511822890acf1267\""
Dec 13 13:19:25.317388 systemd[1]: Started cri-containerd-2eade417d8e219a4071f4e0cbe0903fda17f76f423ec8445511822890acf1267.scope - libcontainer container 2eade417d8e219a4071f4e0cbe0903fda17f76f423ec8445511822890acf1267.
Dec 13 13:19:25.347249 sshd[3220]: Connection closed by 10.0.0.1 port 56802
Dec 13 13:19:25.348259 containerd[1436]: time="2024-12-13T13:19:25.348225372Z" level=info msg="StartContainer for \"2eade417d8e219a4071f4e0cbe0903fda17f76f423ec8445511822890acf1267\" returns successfully"
Dec 13 13:19:25.348754 sshd-session[3177]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:25.353230 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:56802.service: Deactivated successfully.
Dec 13 13:19:25.356985 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 13:19:25.357673 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit.
Dec 13 13:19:25.358893 systemd-logind[1419]: Removed session 6.
Dec 13 13:19:26.138459 kubelet[2470]: E1213 13:19:26.138422 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:26.139156 containerd[1436]: time="2024-12-13T13:19:26.138797876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xvpcf,Uid:c6b98a3f-6f9f-4ff9-8ee2-1f64cb5db9e1,Namespace:kube-system,Attempt:0,}"
Dec 13 13:19:26.154788 systemd-networkd[1380]: vethf9a42a32: Link UP
Dec 13 13:19:26.159169 kernel: cni0: port 2(vethf9a42a32) entered blocking state
Dec 13 13:19:26.159286 kernel: cni0: port 2(vethf9a42a32) entered disabled state
Dec 13 13:19:26.159312 kernel: vethf9a42a32: entered allmulticast mode
Dec 13 13:19:26.159343 kernel: vethf9a42a32: entered promiscuous mode
Dec 13 13:19:26.159357 kernel: cni0: port 2(vethf9a42a32) entered blocking state
Dec 13 13:19:26.159368 kernel: cni0: port 2(vethf9a42a32) entered forwarding state
Dec 13 13:19:26.162490 systemd-networkd[1380]: vethf9a42a32: Gained carrier
Dec 13 13:19:26.167372 containerd[1436]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"}
Dec 13 13:19:26.167372 containerd[1436]: delegateAdd: netconf sent to delegate plugin:
Dec 13 13:19:26.182192 containerd[1436]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T13:19:26.181935199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:19:26.182192 containerd[1436]: time="2024-12-13T13:19:26.181984935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:19:26.182192 containerd[1436]: time="2024-12-13T13:19:26.181994858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:19:26.182192 containerd[1436]: time="2024-12-13T13:19:26.182067483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:19:26.204065 systemd[1]: Started cri-containerd-872105d649f5de1312515750812bd44a78a905ea71cc74156d5d03829055e7aa.scope - libcontainer container 872105d649f5de1312515750812bd44a78a905ea71cc74156d5d03829055e7aa.
Dec 13 13:19:26.208555 kubelet[2470]: E1213 13:19:26.208235 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:26.218506 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:19:26.220514 kubelet[2470]: I1213 13:19:26.220459 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-h9rv2" podStartSLOduration=19.220445696 podStartE2EDuration="19.220445696s" podCreationTimestamp="2024-12-13 13:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:19:26.220211898 +0000 UTC m=+25.175451621" watchObservedRunningTime="2024-12-13 13:19:26.220445696 +0000 UTC m=+25.175685379"
Dec 13 13:19:26.244075 containerd[1436]: time="2024-12-13T13:19:26.244040374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xvpcf,Uid:c6b98a3f-6f9f-4ff9-8ee2-1f64cb5db9e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"872105d649f5de1312515750812bd44a78a905ea71cc74156d5d03829055e7aa\""
Dec 13 13:19:26.245367 kubelet[2470]: E1213 13:19:26.244839 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:26.246806 containerd[1436]: time="2024-12-13T13:19:26.246725750Z" level=info msg="CreateContainer within sandbox \"872105d649f5de1312515750812bd44a78a905ea71cc74156d5d03829055e7aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:19:26.258616 containerd[1436]: time="2024-12-13T13:19:26.258574426Z" level=info msg="CreateContainer within sandbox \"872105d649f5de1312515750812bd44a78a905ea71cc74156d5d03829055e7aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60e667047c974200c8dcca0a2165d385806b5317bd305088a20c1c3f46d775e6\""
Dec 13 13:19:26.259218 containerd[1436]: time="2024-12-13T13:19:26.259152419Z" level=info msg="StartContainer for \"60e667047c974200c8dcca0a2165d385806b5317bd305088a20c1c3f46d775e6\""
Dec 13 13:19:26.290063 systemd[1]: Started cri-containerd-60e667047c974200c8dcca0a2165d385806b5317bd305088a20c1c3f46d775e6.scope - libcontainer container 60e667047c974200c8dcca0a2165d385806b5317bd305088a20c1c3f46d775e6.
Dec 13 13:19:26.310824 containerd[1436]: time="2024-12-13T13:19:26.310722117Z" level=info msg="StartContainer for \"60e667047c974200c8dcca0a2165d385806b5317bd305088a20c1c3f46d775e6\" returns successfully"
Dec 13 13:19:26.876041 systemd-networkd[1380]: cni0: Gained IPv6LL
Dec 13 13:19:27.067032 systemd-networkd[1380]: veth8b827abb: Gained IPv6LL
Dec 13 13:19:27.224715 kubelet[2470]: E1213 13:19:27.220365 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:27.224715 kubelet[2470]: E1213 13:19:27.220965 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:27.232568 kubelet[2470]: I1213 13:19:27.232520 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xvpcf" podStartSLOduration=20.232505448 podStartE2EDuration="20.232505448s" podCreationTimestamp="2024-12-13 13:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:19:27.23210544 +0000 UTC m=+26.187345203" watchObservedRunningTime="2024-12-13 13:19:27.232505448 +0000 UTC m=+26.187745131"
Dec 13 13:19:27.323059 systemd-networkd[1380]: vethf9a42a32: Gained IPv6LL
Dec 13 13:19:28.221796 kubelet[2470]: E1213 13:19:28.221741 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:28.222476 kubelet[2470]: E1213 13:19:28.222445 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:29.223122 kubelet[2470]: E1213 13:19:29.223090 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:30.362570 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:56818.service - OpenSSH per-connection server daemon (10.0.0.1:56818).
Dec 13 13:19:30.402938 sshd[3457]: Accepted publickey for core from 10.0.0.1 port 56818 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:30.404239 sshd-session[3457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:30.408058 systemd-logind[1419]: New session 7 of user core.
Dec 13 13:19:30.425113 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 13:19:30.536434 sshd[3459]: Connection closed by 10.0.0.1 port 56818
Dec 13 13:19:30.537138 sshd-session[3457]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:30.541034 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:56818.service: Deactivated successfully.
Dec 13 13:19:30.542680 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 13:19:30.544255 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit.
Dec 13 13:19:30.545101 systemd-logind[1419]: Removed session 7.
Dec 13 13:19:35.551340 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:36064.service - OpenSSH per-connection server daemon (10.0.0.1:36064).
Dec 13 13:19:35.591909 sshd[3496]: Accepted publickey for core from 10.0.0.1 port 36064 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:35.593170 sshd-session[3496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:35.596625 systemd-logind[1419]: New session 8 of user core.
Dec 13 13:19:35.607055 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 13:19:35.713448 sshd[3498]: Connection closed by 10.0.0.1 port 36064
Dec 13 13:19:35.713796 sshd-session[3496]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:35.726331 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:36064.service: Deactivated successfully.
Dec 13 13:19:35.727875 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 13:19:35.731060 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit.
Dec 13 13:19:35.746305 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:36080.service - OpenSSH per-connection server daemon (10.0.0.1:36080).
Dec 13 13:19:35.747447 systemd-logind[1419]: Removed session 8.
Dec 13 13:19:35.781268 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 36080 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:35.782334 sshd-session[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:35.785542 systemd-logind[1419]: New session 9 of user core.
Dec 13 13:19:35.799028 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 13:19:35.940077 sshd[3514]: Connection closed by 10.0.0.1 port 36080
Dec 13 13:19:35.940730 sshd-session[3511]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:35.954062 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:36080.service: Deactivated successfully.
Dec 13 13:19:35.957166 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 13:19:35.961959 systemd-logind[1419]: Session 9 logged out. Waiting for processes to exit.
Dec 13 13:19:35.969189 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:36090.service - OpenSSH per-connection server daemon (10.0.0.1:36090).
Dec 13 13:19:35.970401 systemd-logind[1419]: Removed session 9.
Dec 13 13:19:36.006335 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 36090 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:36.007401 sshd-session[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:36.011201 systemd-logind[1419]: New session 10 of user core.
Dec 13 13:19:36.022032 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 13:19:36.125673 sshd[3527]: Connection closed by 10.0.0.1 port 36090
Dec 13 13:19:36.126020 sshd-session[3525]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:36.129026 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:36090.service: Deactivated successfully.
Dec 13 13:19:36.130559 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 13:19:36.133474 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit.
Dec 13 13:19:36.134645 systemd-logind[1419]: Removed session 10.
Dec 13 13:19:41.138038 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:36092.service - OpenSSH per-connection server daemon (10.0.0.1:36092).
Dec 13 13:19:41.212190 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 36092 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:41.213549 sshd-session[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:41.217813 systemd-logind[1419]: New session 11 of user core.
Dec 13 13:19:41.233648 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 13:19:41.347000 sshd[3566]: Connection closed by 10.0.0.1 port 36092
Dec 13 13:19:41.347701 sshd-session[3564]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:41.360275 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:36092.service: Deactivated successfully.
Dec 13 13:19:41.362336 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 13:19:41.363028 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit.
Dec 13 13:19:41.369269 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:36106.service - OpenSSH per-connection server daemon (10.0.0.1:36106).
Dec 13 13:19:41.370402 systemd-logind[1419]: Removed session 11.
Dec 13 13:19:41.408614 sshd[3578]: Accepted publickey for core from 10.0.0.1 port 36106 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:41.410250 sshd-session[3578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:41.415619 systemd-logind[1419]: New session 12 of user core.
Dec 13 13:19:41.422111 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 13:19:41.596891 sshd[3580]: Connection closed by 10.0.0.1 port 36106
Dec 13 13:19:41.597515 sshd-session[3578]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:41.606499 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:36106.service: Deactivated successfully.
Dec 13 13:19:41.608253 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 13:19:41.609591 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit.
Dec 13 13:19:41.618257 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:36110.service - OpenSSH per-connection server daemon (10.0.0.1:36110).
Dec 13 13:19:41.619151 systemd-logind[1419]: Removed session 12.
Dec 13 13:19:41.656805 sshd[3590]: Accepted publickey for core from 10.0.0.1 port 36110 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:41.658101 sshd-session[3590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:41.662453 systemd-logind[1419]: New session 13 of user core.
Dec 13 13:19:41.674169 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 13:19:42.823178 sshd[3593]: Connection closed by 10.0.0.1 port 36110
Dec 13 13:19:42.823734 sshd-session[3590]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:42.830644 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:36110.service: Deactivated successfully.
Dec 13 13:19:42.833553 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 13:19:42.835854 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit.
Dec 13 13:19:42.845808 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:60082.service - OpenSSH per-connection server daemon (10.0.0.1:60082).
Dec 13 13:19:42.847204 systemd-logind[1419]: Removed session 13.
Dec 13 13:19:42.906264 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 60082 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:42.907726 sshd-session[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:42.912031 systemd-logind[1419]: New session 14 of user core.
Dec 13 13:19:42.924072 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 13:19:43.152379 sshd[3635]: Connection closed by 10.0.0.1 port 60082
Dec 13 13:19:43.152076 sshd-session[3633]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:43.158573 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:60082.service: Deactivated successfully.
Dec 13 13:19:43.162225 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 13:19:43.164662 systemd-logind[1419]: Session 14 logged out. Waiting for processes to exit.
Dec 13 13:19:43.175498 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:60094.service - OpenSSH per-connection server daemon (10.0.0.1:60094).
Dec 13 13:19:43.176516 systemd-logind[1419]: Removed session 14.
Dec 13 13:19:43.213735 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 60094 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:43.215066 sshd-session[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:43.218711 systemd-logind[1419]: New session 15 of user core.
Dec 13 13:19:43.232082 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 13:19:43.347949 sshd[3648]: Connection closed by 10.0.0.1 port 60094
Dec 13 13:19:43.347413 sshd-session[3646]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:43.350690 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:60094.service: Deactivated successfully.
Dec 13 13:19:43.352436 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 13:19:43.353128 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit.
Dec 13 13:19:43.354015 systemd-logind[1419]: Removed session 15.
Dec 13 13:19:48.359337 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:60096.service - OpenSSH per-connection server daemon (10.0.0.1:60096).
Dec 13 13:19:48.398511 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 60096 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:48.399640 sshd-session[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:48.403228 systemd-logind[1419]: New session 16 of user core.
Dec 13 13:19:48.416038 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 13:19:48.521635 sshd[3686]: Connection closed by 10.0.0.1 port 60096
Dec 13 13:19:48.521938 sshd-session[3684]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:48.525386 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:60096.service: Deactivated successfully.
Dec 13 13:19:48.528167 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 13:19:48.528835 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit.
Dec 13 13:19:48.529654 systemd-logind[1419]: Removed session 16.
Dec 13 13:19:53.533268 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:40694.service - OpenSSH per-connection server daemon (10.0.0.1:40694).
Dec 13 13:19:53.572392 sshd[3719]: Accepted publickey for core from 10.0.0.1 port 40694 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:19:53.573493 sshd-session[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:19:53.577394 systemd-logind[1419]: New session 17 of user core.
Dec 13 13:19:53.581091 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 13:19:53.684933 sshd[3721]: Connection closed by 10.0.0.1 port 40694
Dec 13 13:19:53.685216 sshd-session[3719]: pam_unix(sshd:session): session closed for user core
Dec 13 13:19:53.688140 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:40694.service: Deactivated successfully.
Dec 13 13:19:53.689702 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 13:19:53.691413 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit.
Dec 13 13:19:53.692527 systemd-logind[1419]: Removed session 17.
Dec 13 13:19:58.697143 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:40700.service - OpenSSH per-connection server daemon (10.0.0.1:40700).
Dec 13 13:19:58.736864 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 40700 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:19:58.738077 sshd-session[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:58.742037 systemd-logind[1419]: New session 18 of user core. Dec 13 13:19:58.750077 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:19:58.860442 sshd[3758]: Connection closed by 10.0.0.1 port 40700 Dec 13 13:19:58.860803 sshd-session[3756]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:58.864067 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:40700.service: Deactivated successfully. Dec 13 13:19:58.865689 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:19:58.866342 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:19:58.867081 systemd-logind[1419]: Removed session 18.