Dec 13 13:07:15.914637 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 13:07:15.914659 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024
Dec 13 13:07:15.914668 kernel: KASLR enabled
Dec 13 13:07:15.914674 kernel: efi: EFI v2.7 by EDK II
Dec 13 13:07:15.914679 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Dec 13 13:07:15.914684 kernel: random: crng init done
Dec 13 13:07:15.914694 kernel: secureboot: Secure boot disabled
Dec 13 13:07:15.914699 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:07:15.914707 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Dec 13 13:07:15.914716 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 13:07:15.914723 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:07:15.914728 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:07:15.914734 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:07:15.914765 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:07:15.914773 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:07:15.914782 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:07:15.914788 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:07:15.914794 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:07:15.914800 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:07:15.914806 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 13:07:15.914812 kernel: NUMA: Failed to initialise from firmware
Dec 13 13:07:15.914818 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:07:15.914824 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Dec 13 13:07:15.914830 kernel: Zone ranges:
Dec 13 13:07:15.914836 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:07:15.914844 kernel:   DMA32    empty
Dec 13 13:07:15.914862 kernel:   Normal   empty
Dec 13 13:07:15.914868 kernel: Movable zone start for each node
Dec 13 13:07:15.914874 kernel: Early memory node ranges
Dec 13 13:07:15.914880 kernel:   node   0: [mem 0x0000000040000000-0x00000000d967ffff]
Dec 13 13:07:15.914886 kernel:   node   0: [mem 0x00000000d9680000-0x00000000d968ffff]
Dec 13 13:07:15.914892 kernel:   node   0: [mem 0x00000000d9690000-0x00000000d976ffff]
Dec 13 13:07:15.914898 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Dec 13 13:07:15.914904 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Dec 13 13:07:15.914910 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 13:07:15.914916 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 13:07:15.914922 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 13:07:15.914930 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 13:07:15.914936 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:07:15.914942 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 13:07:15.914950 kernel: psci: probing for conduit method from ACPI.
Dec 13 13:07:15.914957 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 13:07:15.914963 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 13:07:15.914970 kernel: psci: Trusted OS migration not required
Dec 13 13:07:15.914977 kernel: psci: SMC Calling Convention v1.1
Dec 13 13:07:15.914983 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 13:07:15.914990 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 13:07:15.914996 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 13:07:15.915003 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 13:07:15.915009 kernel: Detected PIPT I-cache on CPU0
Dec 13 13:07:15.915016 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 13:07:15.915022 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 13:07:15.915028 kernel: CPU features: detected: Spectre-v4
Dec 13 13:07:15.915036 kernel: CPU features: detected: Spectre-BHB
Dec 13 13:07:15.915043 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 13:07:15.915049 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 13:07:15.915055 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 13:07:15.915061 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 13:07:15.915068 kernel: alternatives: applying boot alternatives
Dec 13 13:07:15.915075 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:07:15.915082 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:07:15.915088 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:07:15.915095 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:07:15.915101 kernel: Fallback order for Node 0: 0
Dec 13 13:07:15.915109 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 13:07:15.915115 kernel: Policy zone: DMA
Dec 13 13:07:15.915121 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:07:15.915128 kernel: software IO TLB: area num 4.
Dec 13 13:07:15.915134 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Dec 13 13:07:15.915141 kernel: Memory: 2385936K/2572288K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 186352K reserved, 0K cma-reserved)
Dec 13 13:07:15.915147 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:07:15.915154 kernel: trace event string verifier disabled
Dec 13 13:07:15.915160 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:07:15.915167 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:07:15.915174 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:07:15.915180 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:07:15.915188 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:07:15.915195 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:07:15.915201 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:07:15.915207 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 13:07:15.915214 kernel: GICv3: 256 SPIs implemented
Dec 13 13:07:15.915220 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 13:07:15.915226 kernel: Root IRQ handler: gic_handle_irq
Dec 13 13:07:15.915233 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 13:07:15.915239 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 13:07:15.915245 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 13:07:15.915252 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 13:07:15.915260 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 13:07:15.915267 kernel: GICv3: using LPI property table @0x00000000400f0000
Dec 13 13:07:15.915273 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Dec 13 13:07:15.915279 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:07:15.915286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:07:15.915292 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 13:07:15.915299 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 13:07:15.915305 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 13:07:15.915311 kernel: arm-pv: using stolen time PV
Dec 13 13:07:15.915318 kernel: Console: colour dummy device 80x25
Dec 13 13:07:15.915325 kernel: ACPI: Core revision 20230628
Dec 13 13:07:15.915333 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 13:07:15.915339 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:07:15.915346 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:07:15.915352 kernel: landlock: Up and running.
Dec 13 13:07:15.915359 kernel: SELinux: Initializing.
Dec 13 13:07:15.915365 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:07:15.915372 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:07:15.915379 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:07:15.915385 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:07:15.915393 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:07:15.915400 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:07:15.915406 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 13:07:15.915413 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 13:07:15.915419 kernel: Remapping and enabling EFI services.
Dec 13 13:07:15.915426 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:07:15.915433 kernel: Detected PIPT I-cache on CPU1
Dec 13 13:07:15.915439 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 13:07:15.915446 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Dec 13 13:07:15.915453 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:07:15.915460 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 13:07:15.915471 kernel: Detected PIPT I-cache on CPU2
Dec 13 13:07:15.915479 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 13:07:15.915486 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Dec 13 13:07:15.915493 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:07:15.915499 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 13:07:15.915507 kernel: Detected PIPT I-cache on CPU3
Dec 13 13:07:15.915519 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 13:07:15.915528 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Dec 13 13:07:15.915535 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:07:15.915542 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 13:07:15.915548 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:07:15.915555 kernel: SMP: Total of 4 processors activated.
Dec 13 13:07:15.915562 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 13:07:15.915569 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 13:07:15.915576 kernel: CPU features: detected: Common not Private translations
Dec 13 13:07:15.915583 kernel: CPU features: detected: CRC32 instructions
Dec 13 13:07:15.915591 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 13:07:15.915598 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 13:07:15.915605 kernel: CPU features: detected: LSE atomic instructions
Dec 13 13:07:15.915612 kernel: CPU features: detected: Privileged Access Never
Dec 13 13:07:15.915619 kernel: CPU features: detected: RAS Extension Support
Dec 13 13:07:15.915626 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 13:07:15.915633 kernel: CPU: All CPU(s) started at EL1
Dec 13 13:07:15.915640 kernel: alternatives: applying system-wide alternatives
Dec 13 13:07:15.915647 kernel: devtmpfs: initialized
Dec 13 13:07:15.915655 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:07:15.915663 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:07:15.915669 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:07:15.915676 kernel: SMBIOS 3.0.0 present.
Dec 13 13:07:15.915683 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 13 13:07:15.915690 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:07:15.915697 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 13:07:15.915704 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 13:07:15.915711 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 13:07:15.915719 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:07:15.915727 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Dec 13 13:07:15.915734 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:07:15.915825 kernel: cpuidle: using governor menu
Dec 13 13:07:15.915835 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 13:07:15.915842 kernel: ASID allocator initialised with 32768 entries
Dec 13 13:07:15.915849 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:07:15.915856 kernel: Serial: AMBA PL011 UART driver
Dec 13 13:07:15.915863 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 13:07:15.915873 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 13:07:15.915880 kernel: Modules: 508880 pages in range for PLT usage
Dec 13 13:07:15.915887 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:07:15.915894 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:07:15.915901 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 13:07:15.915908 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 13:07:15.915915 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:07:15.915922 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:07:15.915929 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 13:07:15.915937 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 13:07:15.915944 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:07:15.915951 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:07:15.915958 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:07:15.915965 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:07:15.915972 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:07:15.915979 kernel: ACPI: Interpreter enabled
Dec 13 13:07:15.915986 kernel: ACPI: Using GIC for interrupt routing
Dec 13 13:07:15.915992 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 13:07:15.916001 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 13:07:15.916008 kernel: printk: console [ttyAMA0] enabled
Dec 13 13:07:15.916015 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:07:15.916146 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:07:15.916216 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 13:07:15.916279 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 13:07:15.916342 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 13:07:15.916405 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 13:07:15.916414 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 13:07:15.916422 kernel: PCI host bridge to bus 0000:00
Dec 13 13:07:15.916488 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 13:07:15.916561 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 13:07:15.916624 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 13:07:15.916679 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:07:15.916783 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 13:07:15.916861 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:07:15.916927 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 13:07:15.916992 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 13:07:15.917072 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:07:15.917138 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 13:07:15.917201 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 13:07:15.917267 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 13:07:15.917325 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 13:07:15.917381 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 13:07:15.917436 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 13:07:15.917445 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 13:07:15.917452 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 13:07:15.917459 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 13:07:15.917466 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 13:07:15.917475 kernel: iommu: Default domain type: Translated
Dec 13 13:07:15.917482 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 13:07:15.917489 kernel: efivars: Registered efivars operations
Dec 13 13:07:15.917496 kernel: vgaarb: loaded
Dec 13 13:07:15.917503 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 13:07:15.917510 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:07:15.917526 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:07:15.917533 kernel: pnp: PnP ACPI init
Dec 13 13:07:15.917607 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 13:07:15.917620 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 13:07:15.917627 kernel: NET: Registered PF_INET protocol family
Dec 13 13:07:15.917635 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:07:15.917642 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:07:15.917649 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:07:15.917656 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:07:15.917663 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:07:15.917670 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:07:15.917678 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:07:15.917685 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:07:15.917692 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:07:15.917699 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:07:15.917706 kernel: kvm [1]: HYP mode not available
Dec 13 13:07:15.917713 kernel: Initialise system trusted keyrings
Dec 13 13:07:15.917720 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:07:15.917727 kernel: Key type asymmetric registered
Dec 13 13:07:15.917734 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:07:15.917780 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 13:07:15.917788 kernel: io scheduler mq-deadline registered
Dec 13 13:07:15.917795 kernel: io scheduler kyber registered
Dec 13 13:07:15.917802 kernel: io scheduler bfq registered
Dec 13 13:07:15.917809 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 13:07:15.917816 kernel: ACPI: button: Power Button [PWRB]
Dec 13 13:07:15.917824 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 13:07:15.917901 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 13:07:15.917912 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:07:15.917922 kernel: thunder_xcv, ver 1.0
Dec 13 13:07:15.917929 kernel: thunder_bgx, ver 1.0
Dec 13 13:07:15.917936 kernel: nicpf, ver 1.0
Dec 13 13:07:15.917943 kernel: nicvf, ver 1.0
Dec 13 13:07:15.918022 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 13:07:15.918085 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:07:15 UTC (1734095235)
Dec 13 13:07:15.918095 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 13:07:15.918102 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 13:07:15.918112 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 13:07:15.918119 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 13:07:15.918126 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:07:15.918133 kernel: Segment Routing with IPv6
Dec 13 13:07:15.918139 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:07:15.918146 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:07:15.918153 kernel: Key type dns_resolver registered
Dec 13 13:07:15.918160 kernel: registered taskstats version 1
Dec 13 13:07:15.918167 kernel: Loading compiled-in X.509 certificates
Dec 13 13:07:15.918175 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78'
Dec 13 13:07:15.918182 kernel: Key type .fscrypt registered
Dec 13 13:07:15.918189 kernel: Key type fscrypt-provisioning registered
Dec 13 13:07:15.918196 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:07:15.918203 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:07:15.918210 kernel: ima: No architecture policies found
Dec 13 13:07:15.918217 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 13:07:15.918224 kernel: clk: Disabling unused clocks
Dec 13 13:07:15.918231 kernel: Freeing unused kernel memory: 39936K
Dec 13 13:07:15.918239 kernel: Run /init as init process
Dec 13 13:07:15.918246 kernel:   with arguments:
Dec 13 13:07:15.918253 kernel:     /init
Dec 13 13:07:15.918260 kernel:   with environment:
Dec 13 13:07:15.918266 kernel:     HOME=/
Dec 13 13:07:15.918273 kernel:     TERM=linux
Dec 13 13:07:15.918280 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:07:15.918288 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:07:15.918299 systemd[1]: Detected virtualization kvm.
Dec 13 13:07:15.918307 systemd[1]: Detected architecture arm64.
Dec 13 13:07:15.918314 systemd[1]: Running in initrd.
Dec 13 13:07:15.918321 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:07:15.918328 systemd[1]: Hostname set to .
Dec 13 13:07:15.918336 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:07:15.918344 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:07:15.918351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:07:15.918360 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:07:15.918368 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:07:15.918375 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:07:15.918383 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:07:15.918391 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:07:15.918399 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:07:15.918409 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:07:15.918416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:07:15.918424 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:07:15.918431 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:07:15.918439 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:07:15.918446 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:07:15.918453 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:07:15.918461 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:07:15.918483 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:07:15.918492 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:07:15.918499 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:07:15.918507 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:07:15.918523 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:07:15.918531 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:07:15.918539 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:07:15.918546 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:07:15.918554 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:07:15.918563 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:07:15.918571 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:07:15.918578 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:07:15.918586 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:07:15.918593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:07:15.918601 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:07:15.918608 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:07:15.918616 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:07:15.918625 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:07:15.918633 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:07:15.918640 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:07:15.918648 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:07:15.918673 systemd-journald[238]: Collecting audit messages is disabled.
Dec 13 13:07:15.918692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:07:15.918700 kernel: Bridge firewalling registered
Dec 13 13:07:15.918707 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:07:15.918716 systemd-journald[238]: Journal started
Dec 13 13:07:15.919179 systemd-journald[238]: Runtime Journal (/run/log/journal/083920264d434a53a443c42625f65cda) is 5.9M, max 47.3M, 41.4M free.
Dec 13 13:07:15.896678 systemd-modules-load[239]: Inserted module 'overlay'
Dec 13 13:07:15.921560 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:07:15.914666 systemd-modules-load[239]: Inserted module 'br_netfilter'
Dec 13 13:07:15.922651 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:07:15.929913 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:07:15.931892 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:07:15.933506 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:07:15.935596 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:07:15.939621 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:07:15.943425 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:07:15.945303 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:07:15.948048 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:07:15.957339 dracut-cmdline[275]: dracut-dracut-053
Dec 13 13:07:15.959812 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:07:15.985960 systemd-resolved[277]: Positive Trust Anchors:
Dec 13 13:07:15.985975 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:07:15.986006 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:07:15.990590 systemd-resolved[277]: Defaulting to hostname 'linux'.
Dec 13 13:07:15.991487 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:07:15.995921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:07:16.039774 kernel: SCSI subsystem initialized
Dec 13 13:07:16.044760 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:07:16.051769 kernel: iscsi: registered transport (tcp)
Dec 13 13:07:16.064771 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:07:16.064788 kernel: QLogic iSCSI HBA Driver
Dec 13 13:07:16.104535 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:07:16.115876 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:07:16.131819 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:07:16.131873 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:07:16.131884 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:07:16.181778 kernel: raid6: neonx8   gen() 15766 MB/s
Dec 13 13:07:16.198792 kernel: raid6: neonx4   gen() 15760 MB/s
Dec 13 13:07:16.215842 kernel: raid6: neonx2   gen() 13152 MB/s
Dec 13 13:07:16.232796 kernel: raid6: neonx1   gen() 10527 MB/s
Dec 13 13:07:16.251553 kernel: raid6: int64x8  gen()  6782 MB/s
Dec 13 13:07:16.266791 kernel: raid6: int64x4  gen()  7344 MB/s
Dec 13 13:07:16.283790 kernel: raid6: int64x2  gen()  6104 MB/s
Dec 13 13:07:16.300849 kernel: raid6: int64x1  gen()  5046 MB/s
Dec 13 13:07:16.300884 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s
Dec 13 13:07:16.318817 kernel: raid6: .... xor() 11883 MB/s, rmw enabled
Dec 13 13:07:16.318853 kernel: raid6: using neon recovery algorithm
Dec 13 13:07:16.323773 kernel: xor: measuring software checksum speed
Dec 13 13:07:16.324924 kernel:    8regs           : 19182 MB/sec
Dec 13 13:07:16.324938 kernel:    32regs          : 21687 MB/sec
Dec 13 13:07:16.327848 kernel:    arm64_neon      :  1737 MB/sec
Dec 13 13:07:16.327875 kernel: xor: using function: 32regs (21687 MB/sec)
Dec 13 13:07:16.377771 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:07:16.388285 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:07:16.398896 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:07:16.409989 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Dec 13 13:07:16.412994 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:07:16.416169 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:07:16.429372 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Dec 13 13:07:16.454803 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:07:16.466925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:07:16.503046 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:07:16.509873 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:07:16.521883 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:07:16.524047 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:07:16.526964 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:07:16.529134 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:07:16.534904 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:07:16.543157 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:07:16.558764 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 13:07:16.572689 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:07:16.572816 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:07:16.572830 kernel: GPT:9289727 != 19775487
Dec 13 13:07:16.572839 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:07:16.572848 kernel: GPT:9289727 != 19775487
Dec 13 13:07:16.572856 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:07:16.572865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:07:16.563703 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:07:16.563819 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:07:16.566440 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:07:16.568444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:07:16.568578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:07:16.571965 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:07:16.583248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:07:16.590801 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (521)
Dec 13 13:07:16.590848 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (512)
Dec 13 13:07:16.597231 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:07:16.598622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:07:16.604620 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:07:16.614180 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:07:16.617962 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:07:16.619125 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:07:16.633868 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:07:16.635595 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:07:16.641441 disk-uuid[552]: Primary Header is updated.
Dec 13 13:07:16.641441 disk-uuid[552]: Secondary Entries is updated.
Dec 13 13:07:16.641441 disk-uuid[552]: Secondary Header is updated.
Dec 13 13:07:16.650606 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:07:16.657358 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:07:17.655324 disk-uuid[553]: The operation has completed successfully.
Dec 13 13:07:17.656452 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:07:17.680197 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:07:17.680302 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:07:17.699961 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:07:17.703865 sh[573]: Success
Dec 13 13:07:17.720773 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 13:07:17.757133 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:07:17.758977 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:07:17.759906 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:07:17.771582 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614
Dec 13 13:07:17.771616 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:07:17.771626 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:07:17.772343 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:07:17.772358 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:07:17.776069 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:07:17.777315 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:07:17.778059 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:07:17.780689 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:07:17.791570 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:07:17.791617 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:07:17.791634 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:07:17.794830 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:07:17.801083 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:07:17.803128 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:07:17.807793 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:07:17.813913 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:07:17.873711 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:07:17.885651 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:07:17.906105 systemd-networkd[768]: lo: Link UP
Dec 13 13:07:17.906120 systemd-networkd[768]: lo: Gained carrier
Dec 13 13:07:17.906983 systemd-networkd[768]: Enumeration completed
Dec 13 13:07:17.907369 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:07:17.908431 ignition[668]: Ignition 2.20.0
Dec 13 13:07:17.908649 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:07:17.908437 ignition[668]: Stage: fetch-offline
Dec 13 13:07:17.908653 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:07:17.908467 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:07:17.909287 systemd[1]: Reached target network.target - Network.
Dec 13 13:07:17.908475 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:07:17.909644 systemd-networkd[768]: eth0: Link UP
Dec 13 13:07:17.908629 ignition[668]: parsed url from cmdline: ""
Dec 13 13:07:17.909647 systemd-networkd[768]: eth0: Gained carrier
Dec 13 13:07:17.908632 ignition[668]: no config URL provided
Dec 13 13:07:17.909653 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:07:17.908636 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:07:17.908648 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:07:17.908674 ignition[668]: op(1): [started] loading QEMU firmware config module
Dec 13 13:07:17.908678 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 13:07:17.917074 ignition[668]: op(1): [finished] loading QEMU firmware config module
Dec 13 13:07:17.917095 ignition[668]: QEMU firmware config was not found. Ignoring...
Dec 13 13:07:17.930788 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:07:17.947101 ignition[668]: parsing config with SHA512: 1281d7755144af71d0a6ce6fd6c6fe1cb0bf74360bae04001daf9130cab74abc2a4d5ea69ac4f0db340574057a6d2b81df0369c2da930e9843f67284ba4bafbb
Dec 13 13:07:17.953961 unknown[668]: fetched base config from "system"
Dec 13 13:07:17.953970 unknown[668]: fetched user config from "qemu"
Dec 13 13:07:17.954415 ignition[668]: fetch-offline: fetch-offline passed
Dec 13 13:07:17.957170 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:07:17.954494 ignition[668]: Ignition finished successfully
Dec 13 13:07:17.958542 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:07:17.964898 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:07:17.975101 ignition[774]: Ignition 2.20.0
Dec 13 13:07:17.975112 ignition[774]: Stage: kargs
Dec 13 13:07:17.975281 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:07:17.975290 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:07:17.976202 ignition[774]: kargs: kargs passed
Dec 13 13:07:17.978598 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:07:17.976243 ignition[774]: Ignition finished successfully
Dec 13 13:07:17.983883 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:07:17.994492 ignition[783]: Ignition 2.20.0
Dec 13 13:07:17.994501 ignition[783]: Stage: disks
Dec 13 13:07:17.994655 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:07:17.994665 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:07:17.995476 ignition[783]: disks: disks passed
Dec 13 13:07:17.995531 ignition[783]: Ignition finished successfully
Dec 13 13:07:17.998769 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:07:18.000012 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:07:18.001661 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:07:18.003542 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:07:18.005227 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:07:18.006921 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:07:18.020880 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:07:18.030110 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 13:07:18.033266 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:07:18.035909 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:07:18.079757 kernel: EXT4-fs (vda9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none.
Dec 13 13:07:18.080410 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:07:18.081687 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:07:18.098824 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:07:18.100496 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:07:18.101862 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:07:18.101904 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:07:18.109101 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
Dec 13 13:07:18.109122 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:07:18.101926 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:07:18.113542 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:07:18.113561 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:07:18.106431 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:07:18.108045 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:07:18.116764 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:07:18.118176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:07:18.151955 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:07:18.155963 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:07:18.159872 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:07:18.163371 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:07:18.244815 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:07:18.253847 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:07:18.256132 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:07:18.260762 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:07:18.277204 ignition[915]: INFO : Ignition 2.20.0
Dec 13 13:07:18.277204 ignition[915]: INFO : Stage: mount
Dec 13 13:07:18.279015 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:07:18.279015 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:07:18.279015 ignition[915]: INFO : mount: mount passed
Dec 13 13:07:18.279015 ignition[915]: INFO : Ignition finished successfully
Dec 13 13:07:18.279574 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:07:18.290865 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:07:18.291819 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:07:18.769848 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:07:18.778877 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:07:18.785393 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (930)
Dec 13 13:07:18.785425 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 13:07:18.785436 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 13:07:18.786928 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:07:18.788761 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:07:18.789976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:07:18.810072 ignition[947]: INFO : Ignition 2.20.0
Dec 13 13:07:18.810072 ignition[947]: INFO : Stage: files
Dec 13 13:07:18.811690 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:07:18.811690 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:07:18.811690 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:07:18.814760 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:07:18.814760 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:07:18.817726 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:07:18.819209 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:07:18.820861 unknown[947]: wrote ssh authorized keys file for user: core
Dec 13 13:07:18.822088 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:07:18.824182 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:07:18.826085 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 13:07:18.882771 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 13:07:19.110946 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 13:07:19.112799 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:07:19.114491 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:07:19.114491 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:07:19.114491 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:07:19.114491 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:07:19.114491 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:07:19.114491 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:07:19.114491 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:07:19.125676 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:07:19.125676 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:07:19.125676 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 13:07:19.125676 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 13:07:19.125676 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 13:07:19.125676 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Dec 13 13:07:19.495660 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 13:07:19.893261 systemd-networkd[768]: eth0: Gained IPv6LL
Dec 13 13:07:20.081856 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 13:07:20.081856 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 13:07:20.085621 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:07:20.085621 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:07:20.085621 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 13:07:20.085621 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 13:07:20.085621 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:07:20.085621 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:07:20.085621 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 13:07:20.085621 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:07:20.105480 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:07:20.109245 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:07:20.110768 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:07:20.110768 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:07:20.110768 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:07:20.110768 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:07:20.110768 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:07:20.110768 ignition[947]: INFO : files: files passed
Dec 13 13:07:20.110768 ignition[947]: INFO : Ignition finished successfully
Dec 13 13:07:20.113468 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:07:20.124884 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:07:20.126584 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:07:20.128170 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:07:20.128249 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:07:20.134266 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 13:07:20.135974 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:07:20.135974 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:07:20.139183 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:07:20.138720 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:07:20.140530 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:07:20.154900 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:07:20.173343 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:07:20.173466 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:07:20.175558 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:07:20.177354 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:07:20.179126 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:07:20.179865 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:07:20.194580 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:07:20.207956 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:07:20.216575 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:07:20.217806 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:07:20.219772 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:07:20.221598 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:07:20.221711 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:07:20.224105 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:07:20.226033 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:07:20.227752 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:07:20.229468 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:07:20.231335 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:07:20.233113 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:07:20.234624 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:07:20.236525 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:07:20.238397 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:07:20.240014 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:07:20.241486 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:07:20.241621 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:07:20.243906 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:07:20.245776 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:07:20.247621 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:07:20.250805 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:07:20.252040 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:07:20.252152 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:07:20.254885 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:07:20.254999 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:07:20.256964 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:07:20.258577 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:07:20.265799 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:07:20.267049 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:07:20.269114 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:07:20.270636 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:07:20.270784 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:07:20.272256 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:07:20.272379 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:07:20.274052 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:07:20.274272 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:07:20.275824 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:07:20.275972 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:07:20.294079 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:07:20.295676 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:07:20.296486 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:07:20.296667 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:07:20.298568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:07:20.298704 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:07:20.305474 ignition[1003]: INFO : Ignition 2.20.0
Dec 13 13:07:20.305474 ignition[1003]: INFO : Stage: umount
Dec 13 13:07:20.307874 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:07:20.307874 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:07:20.307874 ignition[1003]: INFO : umount: umount passed
Dec 13 13:07:20.307874 ignition[1003]: INFO : Ignition finished successfully
Dec 13 13:07:20.306398 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:07:20.307780 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:07:20.309886 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:07:20.310309 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:07:20.310386 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:07:20.312466 systemd[1]: Stopped target network.target - Network.
Dec 13 13:07:20.313513 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:07:20.313576 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:07:20.315316 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:07:20.315359 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:07:20.317185 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:07:20.317236 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:07:20.319147 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:07:20.319193 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:07:20.321043 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:07:20.322695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:07:20.324701 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:07:20.324803 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:07:20.326550 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:07:20.326627 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:07:20.328155 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:07:20.328256 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:07:20.330928 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:07:20.330978 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:07:20.331796 systemd-networkd[768]: eth0: DHCPv6 lease lost
Dec 13 13:07:20.333815 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:07:20.333917 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:07:20.335522 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:07:20.335556 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:07:20.341947 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:07:20.343663 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:07:20.343722 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:07:20.345560 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:07:20.345603 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:07:20.347448 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:07:20.347493 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:07:20.349353 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:07:20.357419 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:07:20.357566 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:07:20.359879 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:07:20.359957 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:07:20.361258 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:07:20.361299 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:07:20.362903 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:07:20.362935 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:07:20.364962 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:07:20.365011 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:07:20.367900 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:07:20.367949 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:07:20.370591 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:07:20.370637 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:07:20.383908 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:07:20.385394 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:07:20.385474 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:07:20.387509 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 13:07:20.387555 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:07:20.389748 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:07:20.389791 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:07:20.391922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:07:20.391967 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:07:20.394081 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:07:20.394180 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:07:20.396381 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:07:20.407890 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:07:20.413386 systemd[1]: Switching root.
Dec 13 13:07:20.436363 systemd-journald[238]: Journal stopped
Dec 13 13:07:21.122232 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:07:21.122282 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:07:21.122294 kernel: SELinux: policy capability open_perms=1
Dec 13 13:07:21.122304 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:07:21.122313 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:07:21.122326 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:07:21.122350 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:07:21.122359 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:07:21.122372 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:07:21.122382 kernel: audit: type=1403 audit(1734095240.573:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:07:21.122392 systemd[1]: Successfully loaded SELinux policy in 32.810ms.
Dec 13 13:07:21.122410 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.851ms.
Dec 13 13:07:21.122421 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:07:21.122432 systemd[1]: Detected virtualization kvm.
Dec 13 13:07:21.122445 systemd[1]: Detected architecture arm64.
Dec 13 13:07:21.122455 systemd[1]: Detected first boot.
Dec 13 13:07:21.122466 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:07:21.122476 zram_generator::config[1047]: No configuration found.
Dec 13 13:07:21.122486 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:07:21.122508 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:07:21.122521 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:07:21.122532 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:07:21.122545 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:07:21.122555 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:07:21.122565 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:07:21.122575 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:07:21.122584 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:07:21.122594 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:07:21.122604 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:07:21.122614 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:07:21.122625 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:07:21.122636 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:07:21.122646 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:07:21.122656 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:07:21.122666 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:07:21.122676 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:07:21.122686 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 13:07:21.122696 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:07:21.122706 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:07:21.122716 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:07:21.122728 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:07:21.122738 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:07:21.122830 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:07:21.122841 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:07:21.122851 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:07:21.122861 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:07:21.122870 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:07:21.122882 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:07:21.122892 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:07:21.122902 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:07:21.122912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:07:21.122922 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:07:21.122932 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:07:21.123005 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:07:21.123024 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:07:21.123035 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:07:21.123048 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:07:21.123058 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:07:21.123068 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:07:21.123078 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:07:21.123088 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:07:21.123102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:07:21.123112 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:07:21.123123 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:07:21.123133 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:07:21.123145 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:07:21.123155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:07:21.123165 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:07:21.123175 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:07:21.123185 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:07:21.123195 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:07:21.123209 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:07:21.123219 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:07:21.123230 kernel: fuse: init (API version 7.39)
Dec 13 13:07:21.123240 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:07:21.123250 kernel: loop: module loaded
Dec 13 13:07:21.123259 kernel: ACPI: bus type drm_connector registered
Dec 13 13:07:21.123268 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:07:21.123278 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:07:21.123288 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:07:21.123298 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:07:21.123308 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:07:21.123339 systemd-journald[1118]: Collecting audit messages is disabled.
Dec 13 13:07:21.123364 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:07:21.123374 systemd[1]: Stopped verity-setup.service.
Dec 13 13:07:21.123384 systemd-journald[1118]: Journal started
Dec 13 13:07:21.123408 systemd-journald[1118]: Runtime Journal (/run/log/journal/083920264d434a53a443c42625f65cda) is 5.9M, max 47.3M, 41.4M free.
Dec 13 13:07:20.922060 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:07:20.945317 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 13:07:20.945678 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:07:21.126600 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:07:21.127180 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:07:21.128296 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:07:21.129517 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:07:21.130657 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:07:21.131895 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:07:21.133076 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:07:21.134272 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:07:21.136782 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:07:21.138241 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:07:21.138385 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:07:21.139798 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:07:21.139933 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:07:21.141288 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:07:21.141436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:07:21.142767 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:07:21.142922 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:07:21.144353 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:07:21.144488 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:07:21.145820 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:07:21.145951 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:07:21.148783 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:07:21.150089 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:07:21.151657 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:07:21.163635 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:07:21.182936 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:07:21.185039 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:07:21.186091 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:07:21.186130 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:07:21.188010 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:07:21.190145 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:07:21.192241 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:07:21.193221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:07:21.194579 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:07:21.196886 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:07:21.198025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:07:21.199899 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:07:21.200997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:07:21.202517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:07:21.207387 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:07:21.209627 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:07:21.212908 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:07:21.214224 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:07:21.215141 systemd-journald[1118]: Time spent on flushing to /var/log/journal/083920264d434a53a443c42625f65cda is 22.906ms for 863 entries.
Dec 13 13:07:21.215141 systemd-journald[1118]: System Journal (/var/log/journal/083920264d434a53a443c42625f65cda) is 8.0M, max 195.6M, 187.6M free.
Dec 13 13:07:21.255385 systemd-journald[1118]: Received client request to flush runtime journal.
Dec 13 13:07:21.255433 kernel: loop0: detected capacity change from 0 to 116784
Dec 13 13:07:21.255451 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:07:21.216984 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:07:21.218481 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:07:21.223052 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:07:21.225461 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:07:21.231929 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:07:21.236238 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:07:21.237709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:07:21.250596 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 13:07:21.259964 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:07:21.263996 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Dec 13 13:07:21.264012 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Dec 13 13:07:21.266197 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:07:21.266833 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:07:21.269161 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:07:21.276820 kernel: loop1: detected capacity change from 0 to 113552
Dec 13 13:07:21.277016 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:07:21.298037 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:07:21.305832 kernel: loop2: detected capacity change from 0 to 194096
Dec 13 13:07:21.314957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:07:21.327085 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Dec 13 13:07:21.327105 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Dec 13 13:07:21.330554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:07:21.334866 kernel: loop3: detected capacity change from 0 to 116784
Dec 13 13:07:21.340753 kernel: loop4: detected capacity change from 0 to 113552
Dec 13 13:07:21.346752 kernel: loop5: detected capacity change from 0 to 194096
Dec 13 13:07:21.351668 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 13:07:21.352153 (sd-merge)[1184]: Merged extensions into '/usr'.
Dec 13 13:07:21.355123 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:07:21.355227 systemd[1]: Reloading...
Dec 13 13:07:21.407961 zram_generator::config[1208]: No configuration found.
Dec 13 13:07:21.475539 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:07:21.500895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:07:21.535667 systemd[1]: Reloading finished in 180 ms.
Dec 13 13:07:21.563775 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:07:21.565052 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:07:21.579917 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:07:21.581691 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:07:21.593388 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:07:21.593403 systemd[1]: Reloading...
Dec 13 13:07:21.605705 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:07:21.605944 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:07:21.606553 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:07:21.606775 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Dec 13 13:07:21.606819 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Dec 13 13:07:21.609105 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:07:21.609118 systemd-tmpfiles[1247]: Skipping /boot
Dec 13 13:07:21.616831 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:07:21.616845 systemd-tmpfiles[1247]: Skipping /boot
Dec 13 13:07:21.636770 zram_generator::config[1270]: No configuration found.
Dec 13 13:07:21.719222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:07:21.753975 systemd[1]: Reloading finished in 160 ms.
Dec 13 13:07:21.771642 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:07:21.780240 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:07:21.787938 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:07:21.790185 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:07:21.792674 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:07:21.797576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:07:21.800319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:07:21.803373 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:07:21.807632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:07:21.810978 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:07:21.813836 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:07:21.819863 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:07:21.821159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:07:21.821898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:07:21.822065 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:07:21.824337 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:07:21.824459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:07:21.826150 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:07:21.826264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:07:21.830653 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:07:21.838800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:07:21.843231 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
Dec 13 13:07:21.848295 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:07:21.851139 augenrules[1343]: No rules
Dec 13 13:07:21.853091 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:07:21.855474 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:07:21.856655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:07:21.860995 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:07:21.865132 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:07:21.867384 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:07:21.870121 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:07:21.870733 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:07:21.872415 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:07:21.876144 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:07:21.877921 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:07:21.878040 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:07:21.880337 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:07:21.880469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:07:21.882275 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:07:21.882390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:07:21.884104 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:07:21.900384 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:07:21.902788 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 13 13:07:21.911957 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:07:21.912962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:07:21.914906 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:07:21.919894 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:07:21.924893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:07:21.926796 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1366)
Dec 13 13:07:21.930395 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:07:21.931759 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1366)
Dec 13 13:07:21.931976 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:07:21.942043 augenrules[1379]: /sbin/augenrules: No change
Dec 13 13:07:21.950714 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:07:21.957731 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 13:07:21.959241 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:07:21.959525 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:07:21.965808 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1370)
Dec 13 13:07:21.965260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:07:21.965399 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:07:21.966676 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:07:21.966810 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:07:21.967268 augenrules[1412]: No rules
Dec 13 13:07:21.968036 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:07:21.968188 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:07:21.969417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:07:21.969560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:07:21.970976 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:07:21.971081 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:07:21.990664 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:07:21.990728 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:07:21.994705 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:07:22.000573 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:07:22.017582 systemd-resolved[1314]: Positive Trust Anchors:
Dec 13 13:07:22.019578 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:07:22.019661 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:07:22.024309 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:07:22.030942 systemd-resolved[1314]: Defaulting to hostname 'linux'.
Dec 13 13:07:22.033285 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:07:22.034407 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:07:22.045277 systemd-networkd[1403]: lo: Link UP
Dec 13 13:07:22.045292 systemd-networkd[1403]: lo: Gained carrier
Dec 13 13:07:22.045990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:07:22.046050 systemd-networkd[1403]: Enumeration completed
Dec 13 13:07:22.047169 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 13:07:22.048269 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:07:22.048279 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:07:22.048657 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:07:22.049025 systemd-networkd[1403]: eth0: Link UP
Dec 13 13:07:22.049035 systemd-networkd[1403]: eth0: Gained carrier
Dec 13 13:07:22.049048 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:07:22.050455 systemd[1]: Reached target network.target - Network.
Dec 13 13:07:22.051408 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:07:22.053754 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 13:07:22.064536 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:07:22.068254 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:07:22.072821 systemd-networkd[1403]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:07:22.073531 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection.
Dec 13 13:07:22.077565 systemd-timesyncd[1411]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 13:07:22.077627 systemd-timesyncd[1411]: Initial clock synchronization to Fri 2024-12-13 13:07:22.356802 UTC.
Dec 13 13:07:22.099956 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:07:22.103588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:07:22.131180 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:07:22.132596 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:07:22.133704 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:07:22.134696 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 13:07:22.135786 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 13:07:22.137116 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 13:07:22.138229 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 13:07:22.139425 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 13:07:22.140606 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 13:07:22.140641 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:07:22.141589 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:07:22.143315 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 13:07:22.145618 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 13:07:22.159722 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 13:07:22.161929 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:07:22.163419 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 13:07:22.164655 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:07:22.165525 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:07:22.166469 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:07:22.166512 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:07:22.167372 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 13:07:22.169309 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 13:07:22.171884 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:07:22.174126 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 13:07:22.176601 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 13:07:22.177782 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 13:07:22.178813 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 13:07:22.180255 jq[1445]: false
Dec 13 13:07:22.182852 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 13:07:22.187959 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:07:22.190702 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 13:07:22.192398 extend-filesystems[1446]: Found loop3
Dec 13 13:07:22.193385 extend-filesystems[1446]: Found loop4
Dec 13 13:07:22.194233 extend-filesystems[1446]: Found loop5
Dec 13 13:07:22.196761 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 13:07:22.198930 extend-filesystems[1446]: Found vda
Dec 13 13:07:22.198930 extend-filesystems[1446]: Found vda1
Dec 13 13:07:22.198930 extend-filesystems[1446]: Found vda2
Dec 13 13:07:22.198930 extend-filesystems[1446]: Found vda3
Dec 13 13:07:22.198930 extend-filesystems[1446]: Found usr
Dec 13 13:07:22.198930 extend-filesystems[1446]: Found vda4
Dec 13 13:07:22.198930 extend-filesystems[1446]: Found vda6
Dec 13 13:07:22.198930 extend-filesystems[1446]: Found vda7
Dec 13 13:07:22.198930 extend-filesystems[1446]: Found vda9
Dec 13 13:07:22.198930 extend-filesystems[1446]: Checking size of /dev/vda9
Dec 13 13:07:22.203765 dbus-daemon[1444]: [system] SELinux support is enabled
Dec 13 13:07:22.202144 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 13:07:22.202615 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 13:07:22.205911 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 13:07:22.209455 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 13:07:22.211731 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 13:07:22.215095 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:07:22.217925 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 13:07:22.218083 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 13:07:22.218355 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 13:07:22.218481 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 13:07:22.222716 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 13:07:22.222881 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 13:07:22.225837 extend-filesystems[1446]: Resized partition /dev/vda9
Dec 13 13:07:22.228959 jq[1463]: true
Dec 13 13:07:22.243712 tar[1468]: linux-arm64/helm
Dec 13 13:07:22.245819 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 13:07:22.245856 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 13:07:22.246756 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1368)
Dec 13 13:07:22.249130 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 13:07:22.249155 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 13:07:22.255702 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024)
Dec 13 13:07:22.257844 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 13:07:22.265303 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 13:07:22.265358 jq[1470]: true
Dec 13 13:07:22.271716 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 13:07:22.272026 systemd-logind[1457]: New seat seat0.
Dec 13 13:07:22.275487 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 13:07:22.281454 update_engine[1462]: I20241213 13:07:22.281258 1462 main.cc:92] Flatcar Update Engine starting
Dec 13 13:07:22.283985 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 13:07:22.287376 update_engine[1462]: I20241213 13:07:22.286900 1462 update_check_scheduler.cc:74] Next update check in 10m38s
Dec 13 13:07:22.287487 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 13:07:22.294308 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 13:07:22.339026 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 13:07:22.339026 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 13:07:22.339026 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 13:07:22.337868 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 13:07:22.346177 bash[1498]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:07:22.346249 extend-filesystems[1446]: Resized filesystem in /dev/vda9
Dec 13 13:07:22.338055 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 13:07:22.347317 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:07:22.349546 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 13:07:22.350390 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 13:07:22.463786 containerd[1472]: time="2024-12-13T13:07:22.463291600Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 13:07:22.492179 containerd[1472]: time="2024-12-13T13:07:22.492025600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:07:22.493388 containerd[1472]: time="2024-12-13T13:07:22.493351640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:07:22.493388 containerd[1472]: time="2024-12-13T13:07:22.493386200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 13:07:22.493457 containerd[1472]: time="2024-12-13T13:07:22.493402880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 13:07:22.493576 containerd[1472]: time="2024-12-13T13:07:22.493556480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 13:07:22.493607 containerd[1472]: time="2024-12-13T13:07:22.493579000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 13:07:22.493652 containerd[1472]: time="2024-12-13T13:07:22.493635760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:07:22.493675 containerd[1472]: time="2024-12-13T13:07:22.493652160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:07:22.493849 containerd[1472]: time="2024-12-13T13:07:22.493828480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:07:22.493849 containerd[1472]: time="2024-12-13T13:07:22.493848120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 13:07:22.493903 containerd[1472]: time="2024-12-13T13:07:22.493860960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:07:22.493903 containerd[1472]: time="2024-12-13T13:07:22.493869120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 13:07:22.493951 containerd[1472]: time="2024-12-13T13:07:22.493941240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:07:22.494157 containerd[1472]: time="2024-12-13T13:07:22.494118360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:07:22.494231 containerd[1472]: time="2024-12-13T13:07:22.494214960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:07:22.494261 containerd[1472]: time="2024-12-13T13:07:22.494230360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 13:07:22.494320 containerd[1472]: time="2024-12-13T13:07:22.494305320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 13:07:22.494396 containerd[1472]: time="2024-12-13T13:07:22.494349960Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 13:07:22.498290 containerd[1472]: time="2024-12-13T13:07:22.498260280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 13:07:22.498346 containerd[1472]: time="2024-12-13T13:07:22.498316360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 13:07:22.498346 containerd[1472]: time="2024-12-13T13:07:22.498332000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 13:07:22.498390 containerd[1472]: time="2024-12-13T13:07:22.498347320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 13:07:22.498390 containerd[1472]: time="2024-12-13T13:07:22.498360040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 13:07:22.498531 containerd[1472]: time="2024-12-13T13:07:22.498509440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 13:07:22.498746 containerd[1472]: time="2024-12-13T13:07:22.498727120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 13:07:22.498867 containerd[1472]: time="2024-12-13T13:07:22.498845800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 13:07:22.498895 containerd[1472]: time="2024-12-13T13:07:22.498868480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 13:07:22.498895 containerd[1472]: time="2024-12-13T13:07:22.498889400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 13:07:22.498937 containerd[1472]: time="2024-12-13T13:07:22.498904000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 13:07:22.498937 containerd[1472]: time="2024-12-13T13:07:22.498916680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 13:07:22.498937 containerd[1472]: time="2024-12-13T13:07:22.498928560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 13:07:22.498994 containerd[1472]: time="2024-12-13T13:07:22.498940760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 13:07:22.498994 containerd[1472]: time="2024-12-13T13:07:22.498954440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 13:07:22.498994 containerd[1472]: time="2024-12-13T13:07:22.498966680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 13:07:22.498994 containerd[1472]: time="2024-12-13T13:07:22.498977720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 13:07:22.498994 containerd[1472]: time="2024-12-13T13:07:22.498990160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 13:07:22.499078 containerd[1472]: time="2024-12-13T13:07:22.499008880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499078 containerd[1472]: time="2024-12-13T13:07:22.499021560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499078 containerd[1472]: time="2024-12-13T13:07:22.499033200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499078 containerd[1472]: time="2024-12-13T13:07:22.499044200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499078 containerd[1472]: time="2024-12-13T13:07:22.499055280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499078 containerd[1472]: time="2024-12-13T13:07:22.499071160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499174 containerd[1472]: time="2024-12-13T13:07:22.499082400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499174 containerd[1472]: time="2024-12-13T13:07:22.499094760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499174 containerd[1472]: time="2024-12-13T13:07:22.499106600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499174 containerd[1472]: time="2024-12-13T13:07:22.499119640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499174 containerd[1472]: time="2024-12-13T13:07:22.499131440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499174 containerd[1472]: time="2024-12-13T13:07:22.499142880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499174 containerd[1472]: time="2024-12-13T13:07:22.499154400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499174 containerd[1472]: time="2024-12-13T13:07:22.499167640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 13:07:22.499297 containerd[1472]: time="2024-12-13T13:07:22.499186080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499297 containerd[1472]: time="2024-12-13T13:07:22.499199320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499297 containerd[1472]: time="2024-12-13T13:07:22.499210240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 13:07:22.499431 containerd[1472]: time="2024-12-13T13:07:22.499369200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 13:07:22.499431 containerd[1472]: time="2024-12-13T13:07:22.499391920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 13:07:22.499431 containerd[1472]: time="2024-12-13T13:07:22.499404280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 13:07:22.499431 containerd[1472]: time="2024-12-13T13:07:22.499416000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 13:07:22.499431 containerd[1472]: time="2024-12-13T13:07:22.499425920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499590 containerd[1472]: time="2024-12-13T13:07:22.499438160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 13:07:22.499590 containerd[1472]: time="2024-12-13T13:07:22.499446880Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 13:07:22.499590 containerd[1472]: time="2024-12-13T13:07:22.499455880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 13:07:22.499848 containerd[1472]: time="2024-12-13T13:07:22.499800520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 13:07:22.499848 containerd[1472]: time="2024-12-13T13:07:22.499852240Z" level=info msg="Connect containerd service"
Dec 13 13:07:22.499991 containerd[1472]: time="2024-12-13T13:07:22.499884840Z" level=info msg="using legacy CRI server"
Dec 13 13:07:22.499991 containerd[1472]: time="2024-12-13T13:07:22.499891520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 13:07:22.500122 containerd[1472]: time="2024-12-13T13:07:22.500104600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 13:07:22.500863 containerd[1472]: time="2024-12-13T13:07:22.500832480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:07:22.501594 containerd[1472]: time="2024-12-13T13:07:22.501099160Z" level=info msg="Start subscribing containerd event"
Dec 13 13:07:22.501594 containerd[1472]: time="2024-12-13T13:07:22.501152600Z" level=info msg="Start recovering state"
Dec 13 13:07:22.501594 containerd[1472]: time="2024-12-13T13:07:22.501218280Z" level=info msg="Start event monitor"
Dec 13 13:07:22.501594 containerd[1472]: time="2024-12-13T13:07:22.501229120Z" level=info msg="Start snapshots syncer"
Dec 13 13:07:22.501594 containerd[1472]: time="2024-12-13T13:07:22.501238760Z" level=info msg="Start cni network conf syncer for default"
Dec 13 13:07:22.501594 containerd[1472]: time="2024-12-13T13:07:22.501247160Z" level=info msg="Start streaming server"
Dec 13 13:07:22.501867 containerd[1472]: time="2024-12-13T13:07:22.501838040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 13:07:22.503125 containerd[1472]: time="2024-12-13T13:07:22.503081960Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 13:07:22.503261 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 13:07:22.504438 containerd[1472]: time="2024-12-13T13:07:22.504398960Z" level=info msg="containerd successfully booted in 0.043730s"
Dec 13 13:07:22.636883 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 13:07:22.637574 tar[1468]: linux-arm64/LICENSE
Dec 13 13:07:22.637574 tar[1468]: linux-arm64/README.md
Dec 13 13:07:22.650802 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 13:07:22.654671 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 13:07:22.657234 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 13:07:22.665808 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 13:07:22.665991 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 13:07:22.668385 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 13:07:22.678045 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 13:07:22.680958 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 13:07:22.682915 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 13:07:22.684157 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 13:07:23.222154 systemd-networkd[1403]: eth0: Gained IPv6LL
Dec 13 13:07:23.225053 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 13:07:23.226820 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 13:07:23.242002 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 13:07:23.244420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:07:23.246498 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 13:07:23.261183 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 13:07:23.262197 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 13:07:23.264082 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 13:07:23.266261 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 13:07:23.766524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:07:23.768135 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 13:07:23.770011 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:07:23.772959 systemd[1]: Startup finished in 552ms (kernel) + 4.862s (initrd) + 3.234s (userspace) = 8.649s.
Dec 13 13:07:23.781104 agetty[1536]: failed to open credentials directory
Dec 13 13:07:23.781250 agetty[1535]: failed to open credentials directory
Dec 13 13:07:24.226984 kubelet[1559]: E1213 13:07:24.226835 1559 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:07:24.229341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:07:24.229506 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:07:28.400362 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 13:07:28.401458 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:37624.service - OpenSSH per-connection server daemon (10.0.0.1:37624).
Dec 13 13:07:28.468729 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 37624 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:07:28.470538 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:07:28.483139 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 13:07:28.497239 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 13:07:28.499931 systemd-logind[1457]: New session 1 of user core.
Dec 13 13:07:28.507499 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 13:07:28.509939 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 13:07:28.516595 (systemd)[1577]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 13:07:28.601552 systemd[1577]: Queued start job for default target default.target.
Dec 13 13:07:28.610661 systemd[1577]: Created slice app.slice - User Application Slice.
Dec 13 13:07:28.610705 systemd[1577]: Reached target paths.target - Paths.
Dec 13 13:07:28.610717 systemd[1577]: Reached target timers.target - Timers.
Dec 13 13:07:28.611932 systemd[1577]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 13:07:28.622712 systemd[1577]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 13:07:28.622785 systemd[1577]: Reached target sockets.target - Sockets.
Dec 13 13:07:28.622797 systemd[1577]: Reached target basic.target - Basic System.
Dec 13 13:07:28.622832 systemd[1577]: Reached target default.target - Main User Target.
Dec 13 13:07:28.622857 systemd[1577]: Startup finished in 100ms.
Dec 13 13:07:28.623285 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 13:07:28.625416 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 13:07:28.688032 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:37628.service - OpenSSH per-connection server daemon (10.0.0.1:37628).
Dec 13 13:07:28.734770 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 37628 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:07:28.736062 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:07:28.739822 systemd-logind[1457]: New session 2 of user core.
Dec 13 13:07:28.748974 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 13:07:28.800334 sshd[1590]: Connection closed by 10.0.0.1 port 37628
Dec 13 13:07:28.800801 sshd-session[1588]: pam_unix(sshd:session): session closed for user core
Dec 13 13:07:28.809132 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:37628.service: Deactivated successfully.
Dec 13 13:07:28.812011 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 13:07:28.813210 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit.
Dec 13 13:07:28.814327 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:37636.service - OpenSSH per-connection server daemon (10.0.0.1:37636).
Dec 13 13:07:28.815031 systemd-logind[1457]: Removed session 2.
Dec 13 13:07:28.859387 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 37636 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:07:28.860559 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:07:28.864434 systemd-logind[1457]: New session 3 of user core.
Dec 13 13:07:28.871914 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 13:07:28.921114 sshd[1597]: Connection closed by 10.0.0.1 port 37636
Dec 13 13:07:28.921556 sshd-session[1595]: pam_unix(sshd:session): session closed for user core
Dec 13 13:07:28.934080 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:37636.service: Deactivated successfully.
Dec 13 13:07:28.935419 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 13:07:28.937780 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit.
Dec 13 13:07:28.938988 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:37638.service - OpenSSH per-connection server daemon (10.0.0.1:37638).
Dec 13 13:07:28.940780 systemd-logind[1457]: Removed session 3.
Dec 13 13:07:28.984379 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 37638 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:07:28.985588 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:07:28.989107 systemd-logind[1457]: New session 4 of user core.
Dec 13 13:07:29.005893 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 13:07:29.057686 sshd[1604]: Connection closed by 10.0.0.1 port 37638
Dec 13 13:07:29.057575 sshd-session[1602]: pam_unix(sshd:session): session closed for user core
Dec 13 13:07:29.076041 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:37638.service: Deactivated successfully.
Dec 13 13:07:29.077512 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:07:29.079885 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:07:29.085997 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:37642.service - OpenSSH per-connection server daemon (10.0.0.1:37642). Dec 13 13:07:29.086778 systemd-logind[1457]: Removed session 4. Dec 13 13:07:29.127435 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 37642 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:07:29.128507 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:07:29.132454 systemd-logind[1457]: New session 5 of user core. Dec 13 13:07:29.141896 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 13:07:29.199599 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:07:29.201900 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:07:29.528082 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 13:07:29.528174 (dockerd)[1633]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 13:07:29.773854 dockerd[1633]: time="2024-12-13T13:07:29.773799205Z" level=info msg="Starting up" Dec 13 13:07:29.918699 dockerd[1633]: time="2024-12-13T13:07:29.918534578Z" level=info msg="Loading containers: start." Dec 13 13:07:30.053845 kernel: Initializing XFRM netlink socket Dec 13 13:07:30.124164 systemd-networkd[1403]: docker0: Link UP Dec 13 13:07:30.160041 dockerd[1633]: time="2024-12-13T13:07:30.160004576Z" level=info msg="Loading containers: done." Dec 13 13:07:30.174612 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck622102037-merged.mount: Deactivated successfully. 
Dec 13 13:07:30.177751 dockerd[1633]: time="2024-12-13T13:07:30.177244898Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:07:30.177751 dockerd[1633]: time="2024-12-13T13:07:30.177338443Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:07:30.177751 dockerd[1633]: time="2024-12-13T13:07:30.177499542Z" level=info msg="Daemon has completed initialization" Dec 13 13:07:30.203404 dockerd[1633]: time="2024-12-13T13:07:30.203339143Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:07:30.203513 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:07:30.935694 containerd[1472]: time="2024-12-13T13:07:30.935649725Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 13:07:31.550209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2118473695.mount: Deactivated successfully. 
Dec 13 13:07:32.783206 containerd[1472]: time="2024-12-13T13:07:32.783143721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:32.783637 containerd[1472]: time="2024-12-13T13:07:32.783605559Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864012" Dec 13 13:07:32.784628 containerd[1472]: time="2024-12-13T13:07:32.784598515Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:32.787749 containerd[1472]: time="2024-12-13T13:07:32.787707778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:32.789043 containerd[1472]: time="2024-12-13T13:07:32.788820417Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 1.853131389s" Dec 13 13:07:32.789043 containerd[1472]: time="2024-12-13T13:07:32.788853400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Dec 13 13:07:32.807084 containerd[1472]: time="2024-12-13T13:07:32.807043258Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 13:07:34.174995 containerd[1472]: time="2024-12-13T13:07:34.174929922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:34.175518 containerd[1472]: time="2024-12-13T13:07:34.175471561Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900696" Dec 13 13:07:34.176308 containerd[1472]: time="2024-12-13T13:07:34.176279200Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:34.179333 containerd[1472]: time="2024-12-13T13:07:34.179295330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:34.180297 containerd[1472]: time="2024-12-13T13:07:34.180262522Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 1.373176319s" Dec 13 13:07:34.180343 containerd[1472]: time="2024-12-13T13:07:34.180299456Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Dec 13 13:07:34.199828 containerd[1472]: time="2024-12-13T13:07:34.199773253Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 13:07:34.480123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:07:34.490967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:34.585761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 13:07:34.589307 (kubelet)[1916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:07:34.637028 kubelet[1916]: E1213 13:07:34.636973 1916 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:07:34.640160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:07:34.640304 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:07:35.219410 containerd[1472]: time="2024-12-13T13:07:35.219352839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:35.220210 containerd[1472]: time="2024-12-13T13:07:35.220174256Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164334" Dec 13 13:07:35.220894 containerd[1472]: time="2024-12-13T13:07:35.220864033Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:35.224082 containerd[1472]: time="2024-12-13T13:07:35.224045533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:35.224701 containerd[1472]: time="2024-12-13T13:07:35.224668362Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.024841409s" Dec 13 13:07:35.224701 containerd[1472]: time="2024-12-13T13:07:35.224697164Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Dec 13 13:07:35.243829 containerd[1472]: time="2024-12-13T13:07:35.243785515Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 13:07:36.177801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494095832.mount: Deactivated successfully. Dec 13 13:07:36.470826 containerd[1472]: time="2024-12-13T13:07:36.470698174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:36.471517 containerd[1472]: time="2024-12-13T13:07:36.471465990Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662013" Dec 13 13:07:36.472457 containerd[1472]: time="2024-12-13T13:07:36.472419904Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:36.474841 containerd[1472]: time="2024-12-13T13:07:36.474797204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:36.475448 containerd[1472]: time="2024-12-13T13:07:36.475362462Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.231535746s" Dec 13 13:07:36.475448 containerd[1472]: time="2024-12-13T13:07:36.475387133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 13:07:36.493084 containerd[1472]: time="2024-12-13T13:07:36.493054189Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:07:37.017251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296202155.mount: Deactivated successfully. Dec 13 13:07:37.534615 containerd[1472]: time="2024-12-13T13:07:37.534566322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:37.535354 containerd[1472]: time="2024-12-13T13:07:37.535298161Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Dec 13 13:07:37.536032 containerd[1472]: time="2024-12-13T13:07:37.535998593Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:37.539277 containerd[1472]: time="2024-12-13T13:07:37.539240717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:37.540269 containerd[1472]: time="2024-12-13T13:07:37.540234881Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.047139406s" Dec 13 13:07:37.540318 containerd[1472]: time="2024-12-13T13:07:37.540272121Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 13:07:37.557933 containerd[1472]: time="2024-12-13T13:07:37.557734285Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:07:37.951624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667404525.mount: Deactivated successfully. Dec 13 13:07:37.955700 containerd[1472]: time="2024-12-13T13:07:37.955643129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:37.956175 containerd[1472]: time="2024-12-13T13:07:37.956119078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Dec 13 13:07:37.956915 containerd[1472]: time="2024-12-13T13:07:37.956884457Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:37.959029 containerd[1472]: time="2024-12-13T13:07:37.958992990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:37.960557 containerd[1472]: time="2024-12-13T13:07:37.960518601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 402.730307ms" Dec 13 
13:07:37.960598 containerd[1472]: time="2024-12-13T13:07:37.960559983Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 13:07:37.978861 containerd[1472]: time="2024-12-13T13:07:37.978821428Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 13:07:38.506408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73943395.mount: Deactivated successfully. Dec 13 13:07:40.204670 containerd[1472]: time="2024-12-13T13:07:40.204624296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:40.206148 containerd[1472]: time="2024-12-13T13:07:40.206100241Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Dec 13 13:07:40.207782 containerd[1472]: time="2024-12-13T13:07:40.207266673Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:40.210328 containerd[1472]: time="2024-12-13T13:07:40.210258467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:07:40.211932 containerd[1472]: time="2024-12-13T13:07:40.211801132Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.2329385s" Dec 13 13:07:40.211932 containerd[1472]: time="2024-12-13T13:07:40.211835134Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image 
reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Dec 13 13:07:44.354753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:44.365988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:44.385012 systemd[1]: Reloading requested from client PID 2134 ('systemctl') (unit session-5.scope)... Dec 13 13:07:44.385033 systemd[1]: Reloading... Dec 13 13:07:44.441868 zram_generator::config[2174]: No configuration found. Dec 13 13:07:44.542939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:07:44.594079 systemd[1]: Reloading finished in 208 ms. Dec 13 13:07:44.631736 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:44.634508 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:07:44.634714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:44.636282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:07:44.725129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:07:44.728797 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:07:44.771111 kubelet[2220]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:07:44.771111 kubelet[2220]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 13:07:44.771111 kubelet[2220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:07:44.771993 kubelet[2220]: I1213 13:07:44.771942 2220 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:07:45.844823 kubelet[2220]: I1213 13:07:45.844779 2220 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:07:45.844823 kubelet[2220]: I1213 13:07:45.844811 2220 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:07:45.845214 kubelet[2220]: I1213 13:07:45.845037 2220 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:07:45.905146 kubelet[2220]: I1213 13:07:45.904720 2220 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:07:45.905146 kubelet[2220]: E1213 13:07:45.904998 2220 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.35:6443: connect: connection refused Dec 13 13:07:45.914105 kubelet[2220]: I1213 13:07:45.914077 2220 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:07:45.915395 kubelet[2220]: I1213 13:07:45.915347 2220 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:07:45.915662 kubelet[2220]: I1213 13:07:45.915490 2220 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:07:45.915879 kubelet[2220]: I1213 13:07:45.915862 2220 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 
13:07:45.915952 kubelet[2220]: I1213 13:07:45.915942 2220 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:07:45.916506 kubelet[2220]: I1213 13:07:45.916257 2220 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:07:45.917105 kubelet[2220]: I1213 13:07:45.917085 2220 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:07:45.917184 kubelet[2220]: I1213 13:07:45.917173 2220 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:07:45.917522 kubelet[2220]: I1213 13:07:45.917510 2220 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:07:45.917580 kubelet[2220]: I1213 13:07:45.917570 2220 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:07:45.917905 kubelet[2220]: W1213 13:07:45.917832 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Dec 13 13:07:45.917905 kubelet[2220]: E1213 13:07:45.917901 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Dec 13 13:07:45.918240 kubelet[2220]: W1213 13:07:45.918146 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Dec 13 13:07:45.918240 kubelet[2220]: E1213 13:07:45.918196 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection 
refused Dec 13 13:07:45.918822 kubelet[2220]: I1213 13:07:45.918804 2220 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:07:45.919233 kubelet[2220]: I1213 13:07:45.919217 2220 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:07:45.919883 kubelet[2220]: W1213 13:07:45.919402 2220 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:07:45.920777 kubelet[2220]: I1213 13:07:45.920192 2220 server.go:1264] "Started kubelet" Dec 13 13:07:45.922986 kubelet[2220]: I1213 13:07:45.922895 2220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:07:45.923104 kubelet[2220]: I1213 13:07:45.923045 2220 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:07:45.924230 kubelet[2220]: I1213 13:07:45.923368 2220 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:07:45.924230 kubelet[2220]: I1213 13:07:45.923620 2220 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:07:45.924230 kubelet[2220]: E1213 13:07:45.922737 2220 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810be74ebdba637 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:07:45.920165431 +0000 UTC m=+1.188265343,LastTimestamp:2024-12-13 13:07:45.920165431 +0000 UTC 
m=+1.188265343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:07:45.924230 kubelet[2220]: E1213 13:07:45.923951 2220 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:07:45.924230 kubelet[2220]: I1213 13:07:45.923986 2220 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:07:45.924230 kubelet[2220]: I1213 13:07:45.924041 2220 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:07:45.924230 kubelet[2220]: I1213 13:07:45.924137 2220 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:07:45.926458 kubelet[2220]: I1213 13:07:45.926431 2220 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:07:45.926577 kubelet[2220]: E1213 13:07:45.926547 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" Dec 13 13:07:45.926949 kubelet[2220]: W1213 13:07:45.926906 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Dec 13 13:07:45.927026 kubelet[2220]: E1213 13:07:45.926962 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Dec 13 13:07:45.928157 kubelet[2220]: E1213 13:07:45.928122 2220 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:07:45.930662 kubelet[2220]: I1213 13:07:45.930640 2220 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:07:45.930787 kubelet[2220]: I1213 13:07:45.930775 2220 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:07:45.930993 kubelet[2220]: I1213 13:07:45.930975 2220 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:07:45.937788 kubelet[2220]: I1213 13:07:45.937725 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:07:45.938616 kubelet[2220]: I1213 13:07:45.938584 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:07:45.938754 kubelet[2220]: I1213 13:07:45.938729 2220 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:07:45.938804 kubelet[2220]: I1213 13:07:45.938755 2220 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:07:45.938804 kubelet[2220]: E1213 13:07:45.938796 2220 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:07:45.944065 kubelet[2220]: W1213 13:07:45.944038 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Dec 13 13:07:45.944465 kubelet[2220]: E1213 13:07:45.944168 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.0.0.35:6443: connect: connection refused Dec 13 13:07:45.944956 kubelet[2220]: I1213 13:07:45.944939 2220 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:07:45.944956 kubelet[2220]: I1213 13:07:45.944954 2220 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:07:45.945031 kubelet[2220]: I1213 13:07:45.944971 2220 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:07:45.948616 kubelet[2220]: I1213 13:07:45.948591 2220 policy_none.go:49] "None policy: Start" Dec 13 13:07:45.949062 kubelet[2220]: I1213 13:07:45.949048 2220 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:07:45.949110 kubelet[2220]: I1213 13:07:45.949070 2220 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:07:45.955308 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:07:45.973507 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:07:45.976066 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 13:07:45.983453 kubelet[2220]: I1213 13:07:45.983421 2220 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:07:45.983671 kubelet[2220]: I1213 13:07:45.983619 2220 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:07:45.983967 kubelet[2220]: I1213 13:07:45.983726 2220 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:07:45.985475 kubelet[2220]: E1213 13:07:45.985456 2220 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:07:46.025863 kubelet[2220]: I1213 13:07:46.025828 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:07:46.026219 kubelet[2220]: E1213 13:07:46.026182 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Dec 13 13:07:46.039498 kubelet[2220]: I1213 13:07:46.039442 2220 topology_manager.go:215] "Topology Admit Handler" podUID="ab9ec74f27ab450d2fa94a1042e19e66" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:07:46.040487 kubelet[2220]: I1213 13:07:46.040448 2220 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:07:46.041710 kubelet[2220]: I1213 13:07:46.041547 2220 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:07:46.046897 systemd[1]: Created slice kubepods-burstable-podab9ec74f27ab450d2fa94a1042e19e66.slice - libcontainer container kubepods-burstable-podab9ec74f27ab450d2fa94a1042e19e66.slice. 
Dec 13 13:07:46.058224 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice.
Dec 13 13:07:46.061308 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice.
Dec 13 13:07:46.127276 kubelet[2220]: E1213 13:07:46.127161 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms"
Dec 13 13:07:46.128252 kubelet[2220]: I1213 13:07:46.128202 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:07:46.128252 kubelet[2220]: I1213 13:07:46.128235 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 13:07:46.128252 kubelet[2220]: I1213 13:07:46.128255 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab9ec74f27ab450d2fa94a1042e19e66-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab9ec74f27ab450d2fa94a1042e19e66\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:07:46.128500 kubelet[2220]: I1213 13:07:46.128271 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:07:46.128500 kubelet[2220]: I1213 13:07:46.128285 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:07:46.128500 kubelet[2220]: I1213 13:07:46.128301 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:07:46.128500 kubelet[2220]: I1213 13:07:46.128315 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:07:46.128500 kubelet[2220]: I1213 13:07:46.128329 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab9ec74f27ab450d2fa94a1042e19e66-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab9ec74f27ab450d2fa94a1042e19e66\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:07:46.128674 kubelet[2220]: I1213 13:07:46.128344 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab9ec74f27ab450d2fa94a1042e19e66-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ab9ec74f27ab450d2fa94a1042e19e66\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:07:46.227612 kubelet[2220]: I1213 13:07:46.227588 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:07:46.227942 kubelet[2220]: E1213 13:07:46.227917 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Dec 13 13:07:46.357122 kubelet[2220]: E1213 13:07:46.357077 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:46.357857 containerd[1472]: time="2024-12-13T13:07:46.357823670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ab9ec74f27ab450d2fa94a1042e19e66,Namespace:kube-system,Attempt:0,}"
Dec 13 13:07:46.360407 kubelet[2220]: E1213 13:07:46.360386 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:46.361007 containerd[1472]: time="2024-12-13T13:07:46.360787540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}"
Dec 13 13:07:46.363526 kubelet[2220]: E1213 13:07:46.363441 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:46.364043 containerd[1472]: time="2024-12-13T13:07:46.364013755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}"
Dec 13 13:07:46.528550 kubelet[2220]: E1213 13:07:46.528438 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms"
Dec 13 13:07:46.629270 kubelet[2220]: I1213 13:07:46.629240 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:07:46.629608 kubelet[2220]: E1213 13:07:46.629566 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Dec 13 13:07:46.786097 kubelet[2220]: W1213 13:07:46.785973 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Dec 13 13:07:46.786097 kubelet[2220]: E1213 13:07:46.786040 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Dec 13 13:07:46.792920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410252737.mount: Deactivated successfully.
Dec 13 13:07:46.797220 containerd[1472]: time="2024-12-13T13:07:46.797179369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:07:46.799313 containerd[1472]: time="2024-12-13T13:07:46.799267224Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 13:07:46.800243 containerd[1472]: time="2024-12-13T13:07:46.800203657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:07:46.801392 containerd[1472]: time="2024-12-13T13:07:46.801361809Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:07:46.802515 containerd[1472]: time="2024-12-13T13:07:46.802470561Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:07:46.803341 containerd[1472]: time="2024-12-13T13:07:46.803281151Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Dec 13 13:07:46.805731 containerd[1472]: time="2024-12-13T13:07:46.804658658Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 13:07:46.806459 containerd[1472]: time="2024-12-13T13:07:46.806434368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:07:46.808027 containerd[1472]: time="2024-12-13T13:07:46.807998056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 450.098223ms"
Dec 13 13:07:46.808855 containerd[1472]: time="2024-12-13T13:07:46.808821266Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 444.742166ms"
Dec 13 13:07:46.812735 containerd[1472]: time="2024-12-13T13:07:46.812680904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 451.831585ms"
Dec 13 13:07:46.948276 containerd[1472]: time="2024-12-13T13:07:46.948178432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:07:46.948449 containerd[1472]: time="2024-12-13T13:07:46.948245580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:07:46.948537 containerd[1472]: time="2024-12-13T13:07:46.948374709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:07:46.948845 containerd[1472]: time="2024-12-13T13:07:46.948692663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:07:46.950282 containerd[1472]: time="2024-12-13T13:07:46.950211037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:07:46.950282 containerd[1472]: time="2024-12-13T13:07:46.950256711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:07:46.950282 containerd[1472]: time="2024-12-13T13:07:46.950268811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:07:46.950384 containerd[1472]: time="2024-12-13T13:07:46.950331752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:07:46.950826 containerd[1472]: time="2024-12-13T13:07:46.950765774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:07:46.950888 containerd[1472]: time="2024-12-13T13:07:46.950823147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:07:46.950888 containerd[1472]: time="2024-12-13T13:07:46.950840735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:07:46.950940 containerd[1472]: time="2024-12-13T13:07:46.950905359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:07:46.977923 systemd[1]: Started cri-containerd-32f7f07b4ac737357b7aa8cd39f1223a0317267616095121ef5ae908ef3af3d2.scope - libcontainer container 32f7f07b4ac737357b7aa8cd39f1223a0317267616095121ef5ae908ef3af3d2.
Dec 13 13:07:46.979157 systemd[1]: Started cri-containerd-8fe9fba244a9650e8048ea626f524bd9798608ade89072ebc47c23d8a3c0e989.scope - libcontainer container 8fe9fba244a9650e8048ea626f524bd9798608ade89072ebc47c23d8a3c0e989.
Dec 13 13:07:46.981046 systemd[1]: Started cri-containerd-aa147e82521755c8acdb5ff1a5319ca01ae0fb0546273b7d06df2163fff931c0.scope - libcontainer container aa147e82521755c8acdb5ff1a5319ca01ae0fb0546273b7d06df2163fff931c0.
Dec 13 13:07:47.013286 containerd[1472]: time="2024-12-13T13:07:47.013243956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"32f7f07b4ac737357b7aa8cd39f1223a0317267616095121ef5ae908ef3af3d2\""
Dec 13 13:07:47.013624 containerd[1472]: time="2024-12-13T13:07:47.013474322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ab9ec74f27ab450d2fa94a1042e19e66,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fe9fba244a9650e8048ea626f524bd9798608ade89072ebc47c23d8a3c0e989\""
Dec 13 13:07:47.014471 kubelet[2220]: E1213 13:07:47.014445 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:47.015224 kubelet[2220]: E1213 13:07:47.015107 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:47.021345 containerd[1472]: time="2024-12-13T13:07:47.021308363Z" level=info msg="CreateContainer within sandbox \"32f7f07b4ac737357b7aa8cd39f1223a0317267616095121ef5ae908ef3af3d2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 13:07:47.021553 containerd[1472]: time="2024-12-13T13:07:47.021526632Z" level=info msg="CreateContainer within sandbox \"8fe9fba244a9650e8048ea626f524bd9798608ade89072ebc47c23d8a3c0e989\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 13:07:47.023194 containerd[1472]: time="2024-12-13T13:07:47.023165791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa147e82521755c8acdb5ff1a5319ca01ae0fb0546273b7d06df2163fff931c0\""
Dec 13 13:07:47.023836 kubelet[2220]: E1213 13:07:47.023785 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:47.029201 containerd[1472]: time="2024-12-13T13:07:47.029038138Z" level=info msg="CreateContainer within sandbox \"aa147e82521755c8acdb5ff1a5319ca01ae0fb0546273b7d06df2163fff931c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 13:07:47.044371 containerd[1472]: time="2024-12-13T13:07:47.044098161Z" level=info msg="CreateContainer within sandbox \"8fe9fba244a9650e8048ea626f524bd9798608ade89072ebc47c23d8a3c0e989\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a6321a339ff45a0491483d91aa2f839ce84651066ffdf71901c0e794b43c12d7\""
Dec 13 13:07:47.045246 containerd[1472]: time="2024-12-13T13:07:47.044797230Z" level=info msg="StartContainer for \"a6321a339ff45a0491483d91aa2f839ce84651066ffdf71901c0e794b43c12d7\""
Dec 13 13:07:47.045246 containerd[1472]: time="2024-12-13T13:07:47.045105907Z" level=info msg="CreateContainer within sandbox \"32f7f07b4ac737357b7aa8cd39f1223a0317267616095121ef5ae908ef3af3d2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"17d8bfa0123b3f7519549aa0a3a49e7b123e9fd0e987d92fc1e76d8e5cd0133f\""
Dec 13 13:07:47.046147 containerd[1472]: time="2024-12-13T13:07:47.045613505Z" level=info msg="StartContainer for \"17d8bfa0123b3f7519549aa0a3a49e7b123e9fd0e987d92fc1e76d8e5cd0133f\""
Dec 13 13:07:47.047963 containerd[1472]: time="2024-12-13T13:07:47.047892689Z" level=info msg="CreateContainer within sandbox \"aa147e82521755c8acdb5ff1a5319ca01ae0fb0546273b7d06df2163fff931c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cb8554f752679c8be13fa50aed6dcee3399232226e9cf93be4966271e2ec57ab\""
Dec 13 13:07:47.048293 containerd[1472]: time="2024-12-13T13:07:47.048266097Z" level=info msg="StartContainer for \"cb8554f752679c8be13fa50aed6dcee3399232226e9cf93be4966271e2ec57ab\""
Dec 13 13:07:47.076956 systemd[1]: Started cri-containerd-a6321a339ff45a0491483d91aa2f839ce84651066ffdf71901c0e794b43c12d7.scope - libcontainer container a6321a339ff45a0491483d91aa2f839ce84651066ffdf71901c0e794b43c12d7.
Dec 13 13:07:47.080538 systemd[1]: Started cri-containerd-17d8bfa0123b3f7519549aa0a3a49e7b123e9fd0e987d92fc1e76d8e5cd0133f.scope - libcontainer container 17d8bfa0123b3f7519549aa0a3a49e7b123e9fd0e987d92fc1e76d8e5cd0133f.
Dec 13 13:07:47.081847 systemd[1]: Started cri-containerd-cb8554f752679c8be13fa50aed6dcee3399232226e9cf93be4966271e2ec57ab.scope - libcontainer container cb8554f752679c8be13fa50aed6dcee3399232226e9cf93be4966271e2ec57ab.
Dec 13 13:07:47.120143 containerd[1472]: time="2024-12-13T13:07:47.119027674Z" level=info msg="StartContainer for \"a6321a339ff45a0491483d91aa2f839ce84651066ffdf71901c0e794b43c12d7\" returns successfully"
Dec 13 13:07:47.120143 containerd[1472]: time="2024-12-13T13:07:47.119127094Z" level=info msg="StartContainer for \"17d8bfa0123b3f7519549aa0a3a49e7b123e9fd0e987d92fc1e76d8e5cd0133f\" returns successfully"
Dec 13 13:07:47.146897 containerd[1472]: time="2024-12-13T13:07:47.143331733Z" level=info msg="StartContainer for \"cb8554f752679c8be13fa50aed6dcee3399232226e9cf93be4966271e2ec57ab\" returns successfully"
Dec 13 13:07:47.197669 kubelet[2220]: W1213 13:07:47.197562 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Dec 13 13:07:47.197669 kubelet[2220]: E1213 13:07:47.197645 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Dec 13 13:07:47.289834 kubelet[2220]: W1213 13:07:47.289706 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Dec 13 13:07:47.289834 kubelet[2220]: E1213 13:07:47.289792 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Dec 13 13:07:47.431465 kubelet[2220]: I1213 13:07:47.431359 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:07:47.957592 kubelet[2220]: E1213 13:07:47.957538 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:47.961627 kubelet[2220]: E1213 13:07:47.961008 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:47.965179 kubelet[2220]: E1213 13:07:47.965112 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:48.560871 kubelet[2220]: E1213 13:07:48.560830 2220 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 13 13:07:48.648002 kubelet[2220]: I1213 13:07:48.647965 2220 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 13:07:48.654990 kubelet[2220]: E1213 13:07:48.654960 2220 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:07:48.696305 kubelet[2220]: E1213 13:07:48.696127 2220 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1810be74ebdba637 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:07:45.920165431 +0000 UTC m=+1.188265343,LastTimestamp:2024-12-13 13:07:45.920165431 +0000 UTC m=+1.188265343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 13:07:48.755328 kubelet[2220]: E1213 13:07:48.755301 2220 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:07:48.855872 kubelet[2220]: E1213 13:07:48.855769 2220 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:07:48.956147 kubelet[2220]: E1213 13:07:48.956074 2220 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:07:48.965838 kubelet[2220]: E1213 13:07:48.965817 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:49.056563 kubelet[2220]: E1213 13:07:49.056503 2220 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:07:49.157074 kubelet[2220]: E1213 13:07:49.156967 2220 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:07:49.257515 kubelet[2220]: E1213 13:07:49.257471 2220 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:07:49.834875 kubelet[2220]: E1213 13:07:49.834830 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:49.919907 kubelet[2220]: I1213 13:07:49.919869 2220 apiserver.go:52] "Watching apiserver"
Dec 13 13:07:49.924693 kubelet[2220]: I1213 13:07:49.924638 2220 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 13:07:49.967354 kubelet[2220]: E1213 13:07:49.966380 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:49.970948 kubelet[2220]: E1213 13:07:49.970920 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:50.684350 systemd[1]: Reloading requested from client PID 2500 ('systemctl') (unit session-5.scope)...
Dec 13 13:07:50.684367 systemd[1]: Reloading...
Dec 13 13:07:50.760773 zram_generator::config[2543]: No configuration found.
Dec 13 13:07:50.906714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:07:50.968129 kubelet[2220]: E1213 13:07:50.968019 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:07:50.971581 systemd[1]: Reloading finished in 286 ms.
Dec 13 13:07:51.002307 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:07:51.015873 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 13:07:51.016848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:07:51.016917 systemd[1]: kubelet.service: Consumed 1.592s CPU time, 118.8M memory peak, 0B memory swap peak.
Dec 13 13:07:51.028997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:07:51.123718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:07:51.128700 (kubelet)[2581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 13:07:51.167708 kubelet[2581]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:07:51.167708 kubelet[2581]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 13:07:51.167708 kubelet[2581]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:07:51.168065 kubelet[2581]: I1213 13:07:51.167738 2581 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 13:07:51.171511 kubelet[2581]: I1213 13:07:51.171479 2581 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 13:07:51.171511 kubelet[2581]: I1213 13:07:51.171507 2581 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 13:07:51.171728 kubelet[2581]: I1213 13:07:51.171704 2581 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 13:07:51.172997 kubelet[2581]: I1213 13:07:51.172975 2581 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 13:07:51.174169 kubelet[2581]: I1213 13:07:51.174132 2581 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:07:51.180878 kubelet[2581]: I1213 13:07:51.180845 2581 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 13:07:51.181073 kubelet[2581]: I1213 13:07:51.181035 2581 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 13:07:51.181228 kubelet[2581]: I1213 13:07:51.181067 2581 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 13:07:51.181228 kubelet[2581]: I1213 13:07:51.181229 2581 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 13:07:51.181328 kubelet[2581]: I1213 13:07:51.181238 2581 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 13:07:51.181328 kubelet[2581]: I1213 13:07:51.181273 2581 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:07:51.181376 kubelet[2581]: I1213 13:07:51.181362 2581 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 13:07:51.181376 kubelet[2581]: I1213 13:07:51.181373 2581 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 13:07:51.181414 kubelet[2581]: I1213 13:07:51.181399 2581 kubelet.go:312] "Adding apiserver pod source"
Dec 13 13:07:51.181435 kubelet[2581]: I1213 13:07:51.181414 2581 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 13:07:51.182210 kubelet[2581]: I1213 13:07:51.181996 2581 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 13:07:51.182210 kubelet[2581]: I1213 13:07:51.182168 2581 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 13:07:51.182570 kubelet[2581]: I1213 13:07:51.182551 2581 server.go:1264] "Started kubelet"
Dec 13 13:07:51.182657 kubelet[2581]: I1213 13:07:51.182623 2581 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 13:07:51.182848 kubelet[2581]: I1213 13:07:51.182791 2581 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 13:07:51.183079 kubelet[2581]: I1213 13:07:51.183046 2581 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 13:07:51.183514 kubelet[2581]: I1213 13:07:51.183489 2581 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 13:07:51.186750 kubelet[2581]: I1213 13:07:51.184298 2581 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 13:07:51.186750 kubelet[2581]: E1213 13:07:51.186209 2581 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:07:51.186750 kubelet[2581]: I1213 13:07:51.186234 2581 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 13:07:51.186750 kubelet[2581]: I1213 13:07:51.186310 2581 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 13:07:51.186750 kubelet[2581]: I1213 13:07:51.186431 2581 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 13:07:51.205985 kubelet[2581]: E1213 13:07:51.205905 2581 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 13:07:51.206849 kubelet[2581]: I1213 13:07:51.206308 2581 factory.go:221] Registration of the containerd container factory successfully
Dec 13 13:07:51.206849 kubelet[2581]: I1213 13:07:51.206322 2581 factory.go:221] Registration of the systemd container factory successfully
Dec 13 13:07:51.206849 kubelet[2581]: I1213 13:07:51.206422 2581 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 13:07:51.206849 kubelet[2581]: I1213 13:07:51.206594 2581 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 13:07:51.209686 kubelet[2581]: I1213 13:07:51.209642 2581 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 13:07:51.209686 kubelet[2581]: I1213 13:07:51.209689 2581 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 13:07:51.209801 kubelet[2581]: I1213 13:07:51.209707 2581 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 13:07:51.209801 kubelet[2581]: E1213 13:07:51.209766 2581 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 13:07:51.236503 kubelet[2581]: I1213 13:07:51.236195 2581 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 13:07:51.236503 kubelet[2581]: I1213 13:07:51.236218 2581 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 13:07:51.236503 kubelet[2581]: I1213 13:07:51.236238 2581 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:07:51.236503 kubelet[2581]: I1213 13:07:51.236379 2581 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 13:07:51.236503 kubelet[2581]: I1213 13:07:51.236389 2581 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 13:07:51.236503 kubelet[2581]: I1213 13:07:51.236406 2581 policy_none.go:49] "None policy: Start"
Dec 13 13:07:51.237329 kubelet[2581]: I1213 13:07:51.237292 2581 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 13:07:51.237329 kubelet[2581]: I1213 13:07:51.237319 2581 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 13:07:51.237459 kubelet[2581]: I1213 13:07:51.237437 2581 state_mem.go:75] "Updated machine memory state"
Dec 13 13:07:51.241926 kubelet[2581]: I1213 13:07:51.241893 2581 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 13:07:51.242405 kubelet[2581]: I1213 13:07:51.242066 2581 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 13:07:51.242405 kubelet[2581]: I1213 13:07:51.242155 2581 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:07:51.289718 kubelet[2581]: I1213 13:07:51.289677 2581 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:07:51.296018 kubelet[2581]: I1213 13:07:51.295994 2581 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 13:07:51.296099 kubelet[2581]: I1213 13:07:51.296078 2581 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:07:51.310621 kubelet[2581]: I1213 13:07:51.310492 2581 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:07:51.310621 kubelet[2581]: I1213 13:07:51.310606 2581 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:07:51.310757 kubelet[2581]: I1213 13:07:51.310641 2581 topology_manager.go:215] "Topology Admit Handler" podUID="ab9ec74f27ab450d2fa94a1042e19e66" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:07:51.315362 kubelet[2581]: E1213 13:07:51.315324 2581 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 13:07:51.316421 kubelet[2581]: E1213 13:07:51.316393 2581 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 13:07:51.487547 kubelet[2581]: I1213 13:07:51.487419 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 
13:07:51.487547 kubelet[2581]: I1213 13:07:51.487469 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:51.487547 kubelet[2581]: I1213 13:07:51.487492 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:07:51.487547 kubelet[2581]: I1213 13:07:51.487506 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab9ec74f27ab450d2fa94a1042e19e66-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab9ec74f27ab450d2fa94a1042e19e66\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:07:51.487547 kubelet[2581]: I1213 13:07:51.487522 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab9ec74f27ab450d2fa94a1042e19e66-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ab9ec74f27ab450d2fa94a1042e19e66\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:07:51.487829 kubelet[2581]: I1213 13:07:51.487538 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:51.487829 
kubelet[2581]: I1213 13:07:51.487553 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:51.487829 kubelet[2581]: I1213 13:07:51.487568 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab9ec74f27ab450d2fa94a1042e19e66-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab9ec74f27ab450d2fa94a1042e19e66\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:07:51.487829 kubelet[2581]: I1213 13:07:51.487582 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:07:51.616851 kubelet[2581]: E1213 13:07:51.616762 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:51.616851 kubelet[2581]: E1213 13:07:51.616829 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:51.617020 kubelet[2581]: E1213 13:07:51.616860 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:52.182632 kubelet[2581]: I1213 13:07:52.182584 2581 apiserver.go:52] "Watching apiserver" 
Dec 13 13:07:52.187726 kubelet[2581]: I1213 13:07:52.186688 2581 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:07:52.223271 kubelet[2581]: E1213 13:07:52.223230 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:52.224060 kubelet[2581]: E1213 13:07:52.224024 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:52.224701 kubelet[2581]: E1213 13:07:52.224661 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:52.241424 kubelet[2581]: I1213 13:07:52.241362 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.241348965 podStartE2EDuration="1.241348965s" podCreationTimestamp="2024-12-13 13:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:07:52.241344361 +0000 UTC m=+1.109449512" watchObservedRunningTime="2024-12-13 13:07:52.241348965 +0000 UTC m=+1.109454116" Dec 13 13:07:52.260085 kubelet[2581]: I1213 13:07:52.260004 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.259988178 podStartE2EDuration="3.259988178s" podCreationTimestamp="2024-12-13 13:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:07:52.250889091 +0000 UTC m=+1.118994242" watchObservedRunningTime="2024-12-13 13:07:52.259988178 +0000 UTC m=+1.128093329" 
Dec 13 13:07:52.274354 kubelet[2581]: I1213 13:07:52.274301 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.274283316 podStartE2EDuration="3.274283316s" podCreationTimestamp="2024-12-13 13:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:07:52.26008761 +0000 UTC m=+1.128192801" watchObservedRunningTime="2024-12-13 13:07:52.274283316 +0000 UTC m=+1.142388467" Dec 13 13:07:52.568904 sudo[1612]: pam_unix(sudo:session): session closed for user root Dec 13 13:07:52.570222 sshd[1611]: Connection closed by 10.0.0.1 port 37642 Dec 13 13:07:52.570562 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Dec 13 13:07:52.573801 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:37642.service: Deactivated successfully. Dec 13 13:07:52.575389 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:07:52.575551 systemd[1]: session-5.scope: Consumed 5.411s CPU time, 189.8M memory peak, 0B memory swap peak. Dec 13 13:07:52.576618 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:07:52.577599 systemd-logind[1457]: Removed session 5. 
Dec 13 13:07:53.224344 kubelet[2581]: E1213 13:07:53.224304 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:54.086413 kubelet[2581]: E1213 13:07:54.086201 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:56.502873 kubelet[2581]: E1213 13:07:56.502838 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:57.608766 kubelet[2581]: E1213 13:07:57.608718 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:58.232495 kubelet[2581]: E1213 13:07:58.232346 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:07:59.234054 kubelet[2581]: E1213 13:07:59.234028 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:04.093779 kubelet[2581]: E1213 13:08:04.093665 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:05.209554 kubelet[2581]: I1213 13:08:05.209488 2581 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:08:05.209914 containerd[1472]: time="2024-12-13T13:08:05.209833739Z" level=info msg="No cni config template is specified, wait for other system components to drop 
the config." Dec 13 13:08:05.211286 kubelet[2581]: I1213 13:08:05.210846 2581 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:08:05.873345 kubelet[2581]: I1213 13:08:05.872590 2581 topology_manager.go:215] "Topology Admit Handler" podUID="ce2e7cdf-5dec-4821-9c30-c69208f29e3b" podNamespace="kube-system" podName="kube-proxy-hsmsd" Dec 13 13:08:05.873345 kubelet[2581]: I1213 13:08:05.872793 2581 topology_manager.go:215] "Topology Admit Handler" podUID="f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7" podNamespace="kube-flannel" podName="kube-flannel-ds-wn7zq" Dec 13 13:08:05.886164 systemd[1]: Created slice kubepods-burstable-podf00c5ddc_2128_4a09_9fe1_e5a7da25dcf7.slice - libcontainer container kubepods-burstable-podf00c5ddc_2128_4a09_9fe1_e5a7da25dcf7.slice. Dec 13 13:08:05.897996 systemd[1]: Created slice kubepods-besteffort-podce2e7cdf_5dec_4821_9c30_c69208f29e3b.slice - libcontainer container kubepods-besteffort-podce2e7cdf_5dec_4821_9c30_c69208f29e3b.slice. 
Dec 13 13:08:05.979924 kubelet[2581]: I1213 13:08:05.979843 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce2e7cdf-5dec-4821-9c30-c69208f29e3b-lib-modules\") pod \"kube-proxy-hsmsd\" (UID: \"ce2e7cdf-5dec-4821-9c30-c69208f29e3b\") " pod="kube-system/kube-proxy-hsmsd" Dec 13 13:08:05.980323 kubelet[2581]: I1213 13:08:05.979893 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7-cni-plugin\") pod \"kube-flannel-ds-wn7zq\" (UID: \"f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7\") " pod="kube-flannel/kube-flannel-ds-wn7zq" Dec 13 13:08:05.980323 kubelet[2581]: I1213 13:08:05.980139 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7-xtables-lock\") pod \"kube-flannel-ds-wn7zq\" (UID: \"f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7\") " pod="kube-flannel/kube-flannel-ds-wn7zq" Dec 13 13:08:05.980323 kubelet[2581]: I1213 13:08:05.980165 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce2e7cdf-5dec-4821-9c30-c69208f29e3b-kube-proxy\") pod \"kube-proxy-hsmsd\" (UID: \"ce2e7cdf-5dec-4821-9c30-c69208f29e3b\") " pod="kube-system/kube-proxy-hsmsd" Dec 13 13:08:05.980323 kubelet[2581]: I1213 13:08:05.980180 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7-run\") pod \"kube-flannel-ds-wn7zq\" (UID: \"f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7\") " pod="kube-flannel/kube-flannel-ds-wn7zq" Dec 13 13:08:05.980323 kubelet[2581]: I1213 13:08:05.980195 2581 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7-cni\") pod \"kube-flannel-ds-wn7zq\" (UID: \"f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7\") " pod="kube-flannel/kube-flannel-ds-wn7zq" Dec 13 13:08:05.980453 kubelet[2581]: I1213 13:08:05.980212 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7-flannel-cfg\") pod \"kube-flannel-ds-wn7zq\" (UID: \"f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7\") " pod="kube-flannel/kube-flannel-ds-wn7zq" Dec 13 13:08:05.980453 kubelet[2581]: I1213 13:08:05.980228 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz8r4\" (UniqueName: \"kubernetes.io/projected/f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7-kube-api-access-vz8r4\") pod \"kube-flannel-ds-wn7zq\" (UID: \"f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7\") " pod="kube-flannel/kube-flannel-ds-wn7zq" Dec 13 13:08:05.980453 kubelet[2581]: I1213 13:08:05.980247 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce2e7cdf-5dec-4821-9c30-c69208f29e3b-xtables-lock\") pod \"kube-proxy-hsmsd\" (UID: \"ce2e7cdf-5dec-4821-9c30-c69208f29e3b\") " pod="kube-system/kube-proxy-hsmsd" Dec 13 13:08:05.980453 kubelet[2581]: I1213 13:08:05.980264 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlnpg\" (UniqueName: \"kubernetes.io/projected/ce2e7cdf-5dec-4821-9c30-c69208f29e3b-kube-api-access-jlnpg\") pod \"kube-proxy-hsmsd\" (UID: \"ce2e7cdf-5dec-4821-9c30-c69208f29e3b\") " pod="kube-system/kube-proxy-hsmsd" Dec 13 13:08:06.191499 kubelet[2581]: E1213 13:08:06.191376 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:06.192491 containerd[1472]: time="2024-12-13T13:08:06.192442669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wn7zq,Uid:f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7,Namespace:kube-flannel,Attempt:0,}" Dec 13 13:08:06.209338 kubelet[2581]: E1213 13:08:06.209295 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:06.211518 containerd[1472]: time="2024-12-13T13:08:06.211458238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hsmsd,Uid:ce2e7cdf-5dec-4821-9c30-c69208f29e3b,Namespace:kube-system,Attempt:0,}" Dec 13 13:08:06.214913 containerd[1472]: time="2024-12-13T13:08:06.214477265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:08:06.214913 containerd[1472]: time="2024-12-13T13:08:06.214539091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:08:06.214913 containerd[1472]: time="2024-12-13T13:08:06.214555937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:06.214913 containerd[1472]: time="2024-12-13T13:08:06.214629527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:06.234883 containerd[1472]: time="2024-12-13T13:08:06.234781639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:08:06.235007 containerd[1472]: time="2024-12-13T13:08:06.234932460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:08:06.235494 containerd[1472]: time="2024-12-13T13:08:06.235338025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:06.235494 containerd[1472]: time="2024-12-13T13:08:06.235451511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:06.235958 systemd[1]: Started cri-containerd-9b1f65c87f7ba5897d28a4077756d442396c53c684bed013d61666479169d708.scope - libcontainer container 9b1f65c87f7ba5897d28a4077756d442396c53c684bed013d61666479169d708. Dec 13 13:08:06.259997 systemd[1]: Started cri-containerd-16dbadee24abf10225b145839782b03d2ed3dd13982c05b2291af7ddb1a79e45.scope - libcontainer container 16dbadee24abf10225b145839782b03d2ed3dd13982c05b2291af7ddb1a79e45. Dec 13 13:08:06.296838 containerd[1472]: time="2024-12-13T13:08:06.296774918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wn7zq,Uid:f00c5ddc-2128-4a09-9fe1-e5a7da25dcf7,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"9b1f65c87f7ba5897d28a4077756d442396c53c684bed013d61666479169d708\"" Dec 13 13:08:06.298160 kubelet[2581]: E1213 13:08:06.298135 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:06.300423 containerd[1472]: time="2024-12-13T13:08:06.300376342Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 13:08:06.305153 containerd[1472]: time="2024-12-13T13:08:06.305117549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hsmsd,Uid:ce2e7cdf-5dec-4821-9c30-c69208f29e3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"16dbadee24abf10225b145839782b03d2ed3dd13982c05b2291af7ddb1a79e45\"" Dec 13 13:08:06.305781 kubelet[2581]: E1213 
13:08:06.305737 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:06.308510 containerd[1472]: time="2024-12-13T13:08:06.308447943Z" level=info msg="CreateContainer within sandbox \"16dbadee24abf10225b145839782b03d2ed3dd13982c05b2291af7ddb1a79e45\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:08:06.322654 containerd[1472]: time="2024-12-13T13:08:06.322605937Z" level=info msg="CreateContainer within sandbox \"16dbadee24abf10225b145839782b03d2ed3dd13982c05b2291af7ddb1a79e45\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4302696ed9e86f22e39e6c8ac4bafcaeed31c195c66f32d2bd5b18801fd9c93c\"" Dec 13 13:08:06.323242 containerd[1472]: time="2024-12-13T13:08:06.323203580Z" level=info msg="StartContainer for \"4302696ed9e86f22e39e6c8ac4bafcaeed31c195c66f32d2bd5b18801fd9c93c\"" Dec 13 13:08:06.351975 systemd[1]: Started cri-containerd-4302696ed9e86f22e39e6c8ac4bafcaeed31c195c66f32d2bd5b18801fd9c93c.scope - libcontainer container 4302696ed9e86f22e39e6c8ac4bafcaeed31c195c66f32d2bd5b18801fd9c93c. Dec 13 13:08:06.375175 containerd[1472]: time="2024-12-13T13:08:06.375127406Z" level=info msg="StartContainer for \"4302696ed9e86f22e39e6c8ac4bafcaeed31c195c66f32d2bd5b18801fd9c93c\" returns successfully" Dec 13 13:08:06.510256 kubelet[2581]: E1213 13:08:06.509721 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:07.124861 update_engine[1462]: I20241213 13:08:07.124787 1462 update_attempter.cc:509] Updating boot flags... 
Dec 13 13:08:07.150104 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2894) Dec 13 13:08:07.254870 kubelet[2581]: E1213 13:08:07.252244 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:07.261937 kubelet[2581]: I1213 13:08:07.261880 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hsmsd" podStartSLOduration=2.261863682 podStartE2EDuration="2.261863682s" podCreationTimestamp="2024-12-13 13:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:08:07.261795816 +0000 UTC m=+16.129900967" watchObservedRunningTime="2024-12-13 13:08:07.261863682 +0000 UTC m=+16.129968833" Dec 13 13:08:07.388820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1807508232.mount: Deactivated successfully. 
Dec 13 13:08:07.502860 containerd[1472]: time="2024-12-13T13:08:07.502813904Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:07.503914 containerd[1472]: time="2024-12-13T13:08:07.503866551Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Dec 13 13:08:07.504917 containerd[1472]: time="2024-12-13T13:08:07.504868218Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:07.506956 containerd[1472]: time="2024-12-13T13:08:07.506927173Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:07.507816 containerd[1472]: time="2024-12-13T13:08:07.507783264Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.207357702s" Dec 13 13:08:07.507864 containerd[1472]: time="2024-12-13T13:08:07.507815556Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Dec 13 13:08:07.510824 containerd[1472]: time="2024-12-13T13:08:07.510665017Z" level=info msg="CreateContainer within sandbox \"9b1f65c87f7ba5897d28a4077756d442396c53c684bed013d61666479169d708\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 13:08:07.522979 containerd[1472]: 
time="2024-12-13T13:08:07.522866729Z" level=info msg="CreateContainer within sandbox \"9b1f65c87f7ba5897d28a4077756d442396c53c684bed013d61666479169d708\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"a328f45ed9c8eaee2069d49c8d1dbf7d0d2c7abfb5c17e2909060c3a1a3eacf0\"" Dec 13 13:08:07.523373 containerd[1472]: time="2024-12-13T13:08:07.523346115Z" level=info msg="StartContainer for \"a328f45ed9c8eaee2069d49c8d1dbf7d0d2c7abfb5c17e2909060c3a1a3eacf0\"" Dec 13 13:08:07.546924 systemd[1]: Started cri-containerd-a328f45ed9c8eaee2069d49c8d1dbf7d0d2c7abfb5c17e2909060c3a1a3eacf0.scope - libcontainer container a328f45ed9c8eaee2069d49c8d1dbf7d0d2c7abfb5c17e2909060c3a1a3eacf0. Dec 13 13:08:07.569787 containerd[1472]: time="2024-12-13T13:08:07.569676849Z" level=info msg="StartContainer for \"a328f45ed9c8eaee2069d49c8d1dbf7d0d2c7abfb5c17e2909060c3a1a3eacf0\" returns successfully" Dec 13 13:08:07.571915 systemd[1]: cri-containerd-a328f45ed9c8eaee2069d49c8d1dbf7d0d2c7abfb5c17e2909060c3a1a3eacf0.scope: Deactivated successfully. 
Dec 13 13:08:07.627565 containerd[1472]: time="2024-12-13T13:08:07.627505704Z" level=info msg="shim disconnected" id=a328f45ed9c8eaee2069d49c8d1dbf7d0d2c7abfb5c17e2909060c3a1a3eacf0 namespace=k8s.io Dec 13 13:08:07.627565 containerd[1472]: time="2024-12-13T13:08:07.627559645Z" level=warning msg="cleaning up after shim disconnected" id=a328f45ed9c8eaee2069d49c8d1dbf7d0d2c7abfb5c17e2909060c3a1a3eacf0 namespace=k8s.io Dec 13 13:08:07.627565 containerd[1472]: time="2024-12-13T13:08:07.627567488Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:08:08.262009 kubelet[2581]: E1213 13:08:08.261971 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:08.269188 containerd[1472]: time="2024-12-13T13:08:08.268894818Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 13:08:09.429018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304770666.mount: Deactivated successfully. 
Dec 13 13:08:09.959875 containerd[1472]: time="2024-12-13T13:08:09.959832182Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:09.960521 containerd[1472]: time="2024-12-13T13:08:09.960486051Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Dec 13 13:08:09.962851 containerd[1472]: time="2024-12-13T13:08:09.961680108Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:09.970695 containerd[1472]: time="2024-12-13T13:08:09.970628275Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:08:09.971841 containerd[1472]: time="2024-12-13T13:08:09.971802205Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.702869093s" Dec 13 13:08:09.971881 containerd[1472]: time="2024-12-13T13:08:09.971839338Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Dec 13 13:08:09.983165 containerd[1472]: time="2024-12-13T13:08:09.982997598Z" level=info msg="CreateContainer within sandbox \"9b1f65c87f7ba5897d28a4077756d442396c53c684bed013d61666479169d708\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:08:09.997543 containerd[1472]: time="2024-12-13T13:08:09.997494183Z" level=info msg="CreateContainer within 
sandbox \"9b1f65c87f7ba5897d28a4077756d442396c53c684bed013d61666479169d708\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"36592025bd08c5f4b06e8192c1f221a7c12c985c1d40192a6b8943983b452dfe\"" Dec 13 13:08:10.016768 containerd[1472]: time="2024-12-13T13:08:10.016704127Z" level=info msg="StartContainer for \"36592025bd08c5f4b06e8192c1f221a7c12c985c1d40192a6b8943983b452dfe\"" Dec 13 13:08:10.045971 systemd[1]: Started cri-containerd-36592025bd08c5f4b06e8192c1f221a7c12c985c1d40192a6b8943983b452dfe.scope - libcontainer container 36592025bd08c5f4b06e8192c1f221a7c12c985c1d40192a6b8943983b452dfe. Dec 13 13:08:10.070772 systemd[1]: cri-containerd-36592025bd08c5f4b06e8192c1f221a7c12c985c1d40192a6b8943983b452dfe.scope: Deactivated successfully. Dec 13 13:08:10.073299 containerd[1472]: time="2024-12-13T13:08:10.073067723Z" level=info msg="StartContainer for \"36592025bd08c5f4b06e8192c1f221a7c12c985c1d40192a6b8943983b452dfe\" returns successfully" Dec 13 13:08:10.096672 containerd[1472]: time="2024-12-13T13:08:10.096616399Z" level=info msg="shim disconnected" id=36592025bd08c5f4b06e8192c1f221a7c12c985c1d40192a6b8943983b452dfe namespace=k8s.io Dec 13 13:08:10.096672 containerd[1472]: time="2024-12-13T13:08:10.096667616Z" level=warning msg="cleaning up after shim disconnected" id=36592025bd08c5f4b06e8192c1f221a7c12c985c1d40192a6b8943983b452dfe namespace=k8s.io Dec 13 13:08:10.096672 containerd[1472]: time="2024-12-13T13:08:10.096676259Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:08:10.137752 kubelet[2581]: I1213 13:08:10.135874 2581 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:08:10.160813 kubelet[2581]: I1213 13:08:10.160754 2581 topology_manager.go:215] "Topology Admit Handler" podUID="7e36c0fe-177e-4f30-a6e5-74bc035e391a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7ctxl" Dec 13 13:08:10.163350 kubelet[2581]: I1213 13:08:10.163209 2581 topology_manager.go:215] "Topology 
Admit Handler" podUID="c50cee74-3d61-4b4b-ae2e-76615528aaf4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-r2mst" Dec 13 13:08:10.175037 systemd[1]: Created slice kubepods-burstable-pod7e36c0fe_177e_4f30_a6e5_74bc035e391a.slice - libcontainer container kubepods-burstable-pod7e36c0fe_177e_4f30_a6e5_74bc035e391a.slice. Dec 13 13:08:10.183652 systemd[1]: Created slice kubepods-burstable-podc50cee74_3d61_4b4b_ae2e_76615528aaf4.slice - libcontainer container kubepods-burstable-podc50cee74_3d61_4b4b_ae2e_76615528aaf4.slice. Dec 13 13:08:10.277145 kubelet[2581]: E1213 13:08:10.277033 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:10.281306 containerd[1472]: time="2024-12-13T13:08:10.281130119Z" level=info msg="CreateContainer within sandbox \"9b1f65c87f7ba5897d28a4077756d442396c53c684bed013d61666479169d708\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 13:08:10.303226 kubelet[2581]: I1213 13:08:10.303065 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c50cee74-3d61-4b4b-ae2e-76615528aaf4-config-volume\") pod \"coredns-7db6d8ff4d-r2mst\" (UID: \"c50cee74-3d61-4b4b-ae2e-76615528aaf4\") " pod="kube-system/coredns-7db6d8ff4d-r2mst" Dec 13 13:08:10.303226 kubelet[2581]: I1213 13:08:10.303130 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6r2f\" (UniqueName: \"kubernetes.io/projected/7e36c0fe-177e-4f30-a6e5-74bc035e391a-kube-api-access-h6r2f\") pod \"coredns-7db6d8ff4d-7ctxl\" (UID: \"7e36c0fe-177e-4f30-a6e5-74bc035e391a\") " pod="kube-system/coredns-7db6d8ff4d-7ctxl" Dec 13 13:08:10.303226 kubelet[2581]: I1213 13:08:10.303153 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ltzbd\" (UniqueName: \"kubernetes.io/projected/c50cee74-3d61-4b4b-ae2e-76615528aaf4-kube-api-access-ltzbd\") pod \"coredns-7db6d8ff4d-r2mst\" (UID: \"c50cee74-3d61-4b4b-ae2e-76615528aaf4\") " pod="kube-system/coredns-7db6d8ff4d-r2mst" Dec 13 13:08:10.303226 kubelet[2581]: I1213 13:08:10.303170 2581 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e36c0fe-177e-4f30-a6e5-74bc035e391a-config-volume\") pod \"coredns-7db6d8ff4d-7ctxl\" (UID: \"7e36c0fe-177e-4f30-a6e5-74bc035e391a\") " pod="kube-system/coredns-7db6d8ff4d-7ctxl" Dec 13 13:08:10.323333 containerd[1472]: time="2024-12-13T13:08:10.323275543Z" level=info msg="CreateContainer within sandbox \"9b1f65c87f7ba5897d28a4077756d442396c53c684bed013d61666479169d708\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"a359e6dbea740313752c8b9dafff9391d7aec97f6a19de58be057ae3a4a8ceb4\"" Dec 13 13:08:10.323895 containerd[1472]: time="2024-12-13T13:08:10.323828727Z" level=info msg="StartContainer for \"a359e6dbea740313752c8b9dafff9391d7aec97f6a19de58be057ae3a4a8ceb4\"" Dec 13 13:08:10.359942 systemd[1]: Started cri-containerd-a359e6dbea740313752c8b9dafff9391d7aec97f6a19de58be057ae3a4a8ceb4.scope - libcontainer container a359e6dbea740313752c8b9dafff9391d7aec97f6a19de58be057ae3a4a8ceb4. 
Dec 13 13:08:10.406043 containerd[1472]: time="2024-12-13T13:08:10.404913870Z" level=info msg="StartContainer for \"a359e6dbea740313752c8b9dafff9391d7aec97f6a19de58be057ae3a4a8ceb4\" returns successfully" Dec 13 13:08:10.481280 kubelet[2581]: E1213 13:08:10.481249 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:10.482227 containerd[1472]: time="2024-12-13T13:08:10.482189384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7ctxl,Uid:7e36c0fe-177e-4f30-a6e5-74bc035e391a,Namespace:kube-system,Attempt:0,}" Dec 13 13:08:10.491177 kubelet[2581]: E1213 13:08:10.490859 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:10.492874 containerd[1472]: time="2024-12-13T13:08:10.492824563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r2mst,Uid:c50cee74-3d61-4b4b-ae2e-76615528aaf4,Namespace:kube-system,Attempt:0,}" Dec 13 13:08:10.530714 containerd[1472]: time="2024-12-13T13:08:10.530540154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7ctxl,Uid:7e36c0fe-177e-4f30-a6e5-74bc035e391a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05b5dfd2023818d92bee09fba5be9d5283b39e13d7d970d25d6c7f7f8d8dc54d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 13:08:10.531291 kubelet[2581]: E1213 13:08:10.530937 2581 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05b5dfd2023818d92bee09fba5be9d5283b39e13d7d970d25d6c7f7f8d8dc54d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" Dec 13 13:08:10.531291 kubelet[2581]: E1213 13:08:10.531007 2581 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05b5dfd2023818d92bee09fba5be9d5283b39e13d7d970d25d6c7f7f8d8dc54d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-7ctxl" Dec 13 13:08:10.531291 kubelet[2581]: E1213 13:08:10.531027 2581 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05b5dfd2023818d92bee09fba5be9d5283b39e13d7d970d25d6c7f7f8d8dc54d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-7ctxl" Dec 13 13:08:10.531291 kubelet[2581]: E1213 13:08:10.531074 2581 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7ctxl_kube-system(7e36c0fe-177e-4f30-a6e5-74bc035e391a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7ctxl_kube-system(7e36c0fe-177e-4f30-a6e5-74bc035e391a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05b5dfd2023818d92bee09fba5be9d5283b39e13d7d970d25d6c7f7f8d8dc54d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-7ctxl" podUID="7e36c0fe-177e-4f30-a6e5-74bc035e391a" Dec 13 13:08:10.531503 containerd[1472]: time="2024-12-13T13:08:10.531374311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r2mst,Uid:c50cee74-3d61-4b4b-ae2e-76615528aaf4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"db1b7a1adfa4de8cb1175fe2740624dd91d7e75dd2db22d14b69d299d1222cfd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 13:08:10.531873 kubelet[2581]: E1213 13:08:10.531694 2581 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db1b7a1adfa4de8cb1175fe2740624dd91d7e75dd2db22d14b69d299d1222cfd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 13:08:10.531873 kubelet[2581]: E1213 13:08:10.531767 2581 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db1b7a1adfa4de8cb1175fe2740624dd91d7e75dd2db22d14b69d299d1222cfd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-r2mst" Dec 13 13:08:10.531873 kubelet[2581]: E1213 13:08:10.531785 2581 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db1b7a1adfa4de8cb1175fe2740624dd91d7e75dd2db22d14b69d299d1222cfd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-r2mst" Dec 13 13:08:10.531873 kubelet[2581]: E1213 13:08:10.531845 2581 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-r2mst_kube-system(c50cee74-3d61-4b4b-ae2e-76615528aaf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-r2mst_kube-system(c50cee74-3d61-4b4b-ae2e-76615528aaf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db1b7a1adfa4de8cb1175fe2740624dd91d7e75dd2db22d14b69d299d1222cfd\\\": 
plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-r2mst" podUID="c50cee74-3d61-4b4b-ae2e-76615528aaf4" Dec 13 13:08:11.284919 kubelet[2581]: E1213 13:08:11.284889 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:11.297848 kubelet[2581]: I1213 13:08:11.297773 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-wn7zq" podStartSLOduration=2.618359249 podStartE2EDuration="6.297731891s" podCreationTimestamp="2024-12-13 13:08:05 +0000 UTC" firstStartedPulling="2024-12-13 13:08:06.299652007 +0000 UTC m=+15.167757158" lastFinishedPulling="2024-12-13 13:08:09.979024649 +0000 UTC m=+18.847129800" observedRunningTime="2024-12-13 13:08:11.296822522 +0000 UTC m=+20.164927673" watchObservedRunningTime="2024-12-13 13:08:11.297731891 +0000 UTC m=+20.165837002" Dec 13 13:08:11.361495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db1b7a1adfa4de8cb1175fe2740624dd91d7e75dd2db22d14b69d299d1222cfd-shm.mount: Deactivated successfully. Dec 13 13:08:11.361595 systemd[1]: run-netns-cni\x2d15416b86\x2dd33f\x2d616f\x2dd9ab\x2df1f8c0176351.mount: Deactivated successfully. Dec 13 13:08:11.361640 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05b5dfd2023818d92bee09fba5be9d5283b39e13d7d970d25d6c7f7f8d8dc54d-shm.mount: Deactivated successfully. 
Dec 13 13:08:11.501572 systemd-networkd[1403]: flannel.1: Link UP Dec 13 13:08:11.501578 systemd-networkd[1403]: flannel.1: Gained carrier Dec 13 13:08:12.286878 kubelet[2581]: E1213 13:08:12.286849 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:12.947897 systemd-networkd[1403]: flannel.1: Gained IPv6LL Dec 13 13:08:15.514459 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:35190.service - OpenSSH per-connection server daemon (10.0.0.1:35190). Dec 13 13:08:15.563402 sshd[3232]: Accepted publickey for core from 10.0.0.1 port 35190 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:15.564725 sshd-session[3232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:15.569318 systemd-logind[1457]: New session 6 of user core. Dec 13 13:08:15.575916 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:08:15.699227 sshd[3234]: Connection closed by 10.0.0.1 port 35190 Dec 13 13:08:15.699975 sshd-session[3232]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:15.704337 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:35190.service: Deactivated successfully. Dec 13 13:08:15.706209 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:08:15.707805 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:08:15.708617 systemd-logind[1457]: Removed session 6. Dec 13 13:08:20.720475 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:35204.service - OpenSSH per-connection server daemon (10.0.0.1:35204). 
Dec 13 13:08:20.767703 sshd[3272]: Accepted publickey for core from 10.0.0.1 port 35204 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:20.768946 sshd-session[3272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:20.772767 systemd-logind[1457]: New session 7 of user core. Dec 13 13:08:20.788040 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:08:20.912105 sshd[3274]: Connection closed by 10.0.0.1 port 35204 Dec 13 13:08:20.911440 sshd-session[3272]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:20.914298 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:35204.service: Deactivated successfully. Dec 13 13:08:20.915886 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:08:20.917681 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:08:20.918763 systemd-logind[1457]: Removed session 7. Dec 13 13:08:22.210730 kubelet[2581]: E1213 13:08:22.210701 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:22.211759 containerd[1472]: time="2024-12-13T13:08:22.211238228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r2mst,Uid:c50cee74-3d61-4b4b-ae2e-76615528aaf4,Namespace:kube-system,Attempt:0,}" Dec 13 13:08:22.231045 systemd-networkd[1403]: cni0: Link UP Dec 13 13:08:22.231381 systemd-networkd[1403]: cni0: Gained carrier Dec 13 13:08:22.234063 systemd-networkd[1403]: cni0: Lost carrier Dec 13 13:08:22.235618 systemd-networkd[1403]: vethbaaaab29: Link UP Dec 13 13:08:22.237323 kernel: cni0: port 1(vethbaaaab29) entered blocking state Dec 13 13:08:22.237364 kernel: cni0: port 1(vethbaaaab29) entered disabled state Dec 13 13:08:22.238948 kernel: vethbaaaab29: entered allmulticast mode Dec 13 13:08:22.238980 kernel: vethbaaaab29: entered promiscuous mode Dec 13 
13:08:22.239841 kernel: cni0: port 1(vethbaaaab29) entered blocking state Dec 13 13:08:22.239874 kernel: cni0: port 1(vethbaaaab29) entered forwarding state Dec 13 13:08:22.241769 kernel: cni0: port 1(vethbaaaab29) entered disabled state Dec 13 13:08:22.249836 kernel: cni0: port 1(vethbaaaab29) entered blocking state Dec 13 13:08:22.249903 kernel: cni0: port 1(vethbaaaab29) entered forwarding state Dec 13 13:08:22.249981 systemd-networkd[1403]: vethbaaaab29: Gained carrier Dec 13 13:08:22.250195 systemd-networkd[1403]: cni0: Gained carrier Dec 13 13:08:22.252061 containerd[1472]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"} Dec 13 13:08:22.252061 containerd[1472]: delegateAdd: netconf sent to delegate plugin: Dec 13 13:08:22.267311 containerd[1472]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T13:08:22.267213421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:08:22.267311 containerd[1472]: time="2024-12-13T13:08:22.267269592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:08:22.267311 containerd[1472]: time="2024-12-13T13:08:22.267280234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:22.267685 containerd[1472]: time="2024-12-13T13:08:22.267349968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:22.296953 systemd[1]: Started cri-containerd-ba65e6fcebcc8aa12e87f9447f698e4defba7dc8ddf0cd97dbbbf210fc243eb0.scope - libcontainer container ba65e6fcebcc8aa12e87f9447f698e4defba7dc8ddf0cd97dbbbf210fc243eb0. Dec 13 13:08:22.308389 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:08:22.325003 containerd[1472]: time="2024-12-13T13:08:22.324962326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r2mst,Uid:c50cee74-3d61-4b4b-ae2e-76615528aaf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba65e6fcebcc8aa12e87f9447f698e4defba7dc8ddf0cd97dbbbf210fc243eb0\"" Dec 13 13:08:22.326528 kubelet[2581]: E1213 13:08:22.326505 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:22.330462 containerd[1472]: time="2024-12-13T13:08:22.330324747Z" level=info msg="CreateContainer within sandbox \"ba65e6fcebcc8aa12e87f9447f698e4defba7dc8ddf0cd97dbbbf210fc243eb0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:08:22.346007 containerd[1472]: time="2024-12-13T13:08:22.345961320Z" level=info msg="CreateContainer within sandbox \"ba65e6fcebcc8aa12e87f9447f698e4defba7dc8ddf0cd97dbbbf210fc243eb0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69f90f93cc75f9287a38d5177a564ed72fdadd81c702669e3d9ecacbee0d3f50\"" Dec 13 13:08:22.347544 containerd[1472]: time="2024-12-13T13:08:22.346795365Z" level=info msg="StartContainer for \"69f90f93cc75f9287a38d5177a564ed72fdadd81c702669e3d9ecacbee0d3f50\"" Dec 13 13:08:22.370933 systemd[1]: 
Started cri-containerd-69f90f93cc75f9287a38d5177a564ed72fdadd81c702669e3d9ecacbee0d3f50.scope - libcontainer container 69f90f93cc75f9287a38d5177a564ed72fdadd81c702669e3d9ecacbee0d3f50. Dec 13 13:08:22.396084 containerd[1472]: time="2024-12-13T13:08:22.396022504Z" level=info msg="StartContainer for \"69f90f93cc75f9287a38d5177a564ed72fdadd81c702669e3d9ecacbee0d3f50\" returns successfully" Dec 13 13:08:23.307176 kubelet[2581]: E1213 13:08:23.307108 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:23.325583 kubelet[2581]: I1213 13:08:23.325505 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-r2mst" podStartSLOduration=17.325487797 podStartE2EDuration="17.325487797s" podCreationTimestamp="2024-12-13 13:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:08:23.316934646 +0000 UTC m=+32.185039797" watchObservedRunningTime="2024-12-13 13:08:23.325487797 +0000 UTC m=+32.193592948" Dec 13 13:08:23.699938 systemd-networkd[1403]: cni0: Gained IPv6LL Dec 13 13:08:24.083893 systemd-networkd[1403]: vethbaaaab29: Gained IPv6LL Dec 13 13:08:24.211176 kubelet[2581]: E1213 13:08:24.211043 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:24.211614 containerd[1472]: time="2024-12-13T13:08:24.211404850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7ctxl,Uid:7e36c0fe-177e-4f30-a6e5-74bc035e391a,Namespace:kube-system,Attempt:0,}" Dec 13 13:08:24.246543 systemd-networkd[1403]: veth663e4a16: Link UP Dec 13 13:08:24.248260 kernel: cni0: port 2(veth663e4a16) entered blocking state Dec 13 13:08:24.248330 kernel: cni0: port 
2(veth663e4a16) entered disabled state Dec 13 13:08:24.248347 kernel: veth663e4a16: entered allmulticast mode Dec 13 13:08:24.249764 kernel: veth663e4a16: entered promiscuous mode Dec 13 13:08:24.257656 kernel: cni0: port 2(veth663e4a16) entered blocking state Dec 13 13:08:24.257715 kernel: cni0: port 2(veth663e4a16) entered forwarding state Dec 13 13:08:24.258465 systemd-networkd[1403]: veth663e4a16: Gained carrier Dec 13 13:08:24.264099 containerd[1472]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a938), "name":"cbr0", "type":"bridge"} Dec 13 13:08:24.264099 containerd[1472]: delegateAdd: netconf sent to delegate plugin: Dec 13 13:08:24.282816 containerd[1472]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T13:08:24.282603500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:08:24.282816 containerd[1472]: time="2024-12-13T13:08:24.282667392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:08:24.282816 containerd[1472]: time="2024-12-13T13:08:24.282688796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:24.283073 containerd[1472]: time="2024-12-13T13:08:24.282971008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:08:24.302918 systemd[1]: Started cri-containerd-bf778237cd0b840cf7cdde6ee984486b212cd13ac5b7c5f1a4e3f69af0ee65e0.scope - libcontainer container bf778237cd0b840cf7cdde6ee984486b212cd13ac5b7c5f1a4e3f69af0ee65e0. Dec 13 13:08:24.308170 kubelet[2581]: E1213 13:08:24.308140 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:24.314231 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:08:24.340782 containerd[1472]: time="2024-12-13T13:08:24.340594882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7ctxl,Uid:7e36c0fe-177e-4f30-a6e5-74bc035e391a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf778237cd0b840cf7cdde6ee984486b212cd13ac5b7c5f1a4e3f69af0ee65e0\"" Dec 13 13:08:24.341949 kubelet[2581]: E1213 13:08:24.341915 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:24.344325 containerd[1472]: time="2024-12-13T13:08:24.344289602Z" level=info msg="CreateContainer within sandbox \"bf778237cd0b840cf7cdde6ee984486b212cd13ac5b7c5f1a4e3f69af0ee65e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:08:24.356145 containerd[1472]: time="2024-12-13T13:08:24.356102173Z" level=info msg="CreateContainer within sandbox \"bf778237cd0b840cf7cdde6ee984486b212cd13ac5b7c5f1a4e3f69af0ee65e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"c987d1c559b8f9a02d13b633b5496b9a7b4dae8cf30b983a6a710e7fe0d1c290\"" Dec 13 13:08:24.356627 containerd[1472]: time="2024-12-13T13:08:24.356606466Z" level=info msg="StartContainer for \"c987d1c559b8f9a02d13b633b5496b9a7b4dae8cf30b983a6a710e7fe0d1c290\"" Dec 13 13:08:24.383973 systemd[1]: Started cri-containerd-c987d1c559b8f9a02d13b633b5496b9a7b4dae8cf30b983a6a710e7fe0d1c290.scope - libcontainer container c987d1c559b8f9a02d13b633b5496b9a7b4dae8cf30b983a6a710e7fe0d1c290. Dec 13 13:08:24.406137 containerd[1472]: time="2024-12-13T13:08:24.406083763Z" level=info msg="StartContainer for \"c987d1c559b8f9a02d13b633b5496b9a7b4dae8cf30b983a6a710e7fe0d1c290\" returns successfully" Dec 13 13:08:25.311734 kubelet[2581]: E1213 13:08:25.311634 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:25.311734 kubelet[2581]: E1213 13:08:25.311642 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:25.320397 kubelet[2581]: I1213 13:08:25.320347 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7ctxl" podStartSLOduration=19.320332469 podStartE2EDuration="19.320332469s" podCreationTimestamp="2024-12-13 13:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:08:25.320142595 +0000 UTC m=+34.188247746" watchObservedRunningTime="2024-12-13 13:08:25.320332469 +0000 UTC m=+34.188437620" Dec 13 13:08:25.683908 systemd-networkd[1403]: veth663e4a16: Gained IPv6LL Dec 13 13:08:25.926696 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:53942.service - OpenSSH per-connection server daemon (10.0.0.1:53942). 
Dec 13 13:08:25.975328 sshd[3543]: Accepted publickey for core from 10.0.0.1 port 53942 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:25.976729 sshd-session[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:25.980818 systemd-logind[1457]: New session 8 of user core. Dec 13 13:08:25.987916 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:08:26.099600 sshd[3545]: Connection closed by 10.0.0.1 port 53942 Dec 13 13:08:26.101016 sshd-session[3543]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:26.116142 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:53942.service: Deactivated successfully. Dec 13 13:08:26.117722 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:08:26.118924 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:08:26.129993 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:53948.service - OpenSSH per-connection server daemon (10.0.0.1:53948). Dec 13 13:08:26.131248 systemd-logind[1457]: Removed session 8. Dec 13 13:08:26.171293 sshd[3559]: Accepted publickey for core from 10.0.0.1 port 53948 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:26.172531 sshd-session[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:26.176039 systemd-logind[1457]: New session 9 of user core. Dec 13 13:08:26.187917 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:08:26.330791 sshd[3561]: Connection closed by 10.0.0.1 port 53948 Dec 13 13:08:26.330623 sshd-session[3559]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:26.339491 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:53948.service: Deactivated successfully. Dec 13 13:08:26.342092 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:08:26.346964 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. 
Dec 13 13:08:26.355078 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:53964.service - OpenSSH per-connection server daemon (10.0.0.1:53964). Dec 13 13:08:26.355552 systemd-logind[1457]: Removed session 9. Dec 13 13:08:26.398611 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 53964 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:08:26.400033 sshd-session[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:08:26.404822 systemd-logind[1457]: New session 10 of user core. Dec 13 13:08:26.415917 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:08:26.533157 sshd[3573]: Connection closed by 10.0.0.1 port 53964 Dec 13 13:08:26.533657 sshd-session[3571]: pam_unix(sshd:session): session closed for user core Dec 13 13:08:26.536124 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:53964.service: Deactivated successfully. Dec 13 13:08:26.537911 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:08:26.539729 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:08:26.540627 systemd-logind[1457]: Removed session 10. Dec 13 13:08:30.482660 kubelet[2581]: E1213 13:08:30.482507 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:31.320161 kubelet[2581]: E1213 13:08:31.320112 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:08:31.548479 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:53976.service - OpenSSH per-connection server daemon (10.0.0.1:53976). 
Dec 13 13:08:31.594124 sshd[3613]: Accepted publickey for core from 10.0.0.1 port 53976 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:31.595234 sshd-session[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:31.599099 systemd-logind[1457]: New session 11 of user core.
Dec 13 13:08:31.609923 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 13:08:31.721718 sshd[3621]: Connection closed by 10.0.0.1 port 53976
Dec 13 13:08:31.722882 sshd-session[3613]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:31.738219 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:53976.service: Deactivated successfully.
Dec 13 13:08:31.739657 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 13:08:31.741797 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit.
Dec 13 13:08:31.750007 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:53992.service - OpenSSH per-connection server daemon (10.0.0.1:53992).
Dec 13 13:08:31.751471 systemd-logind[1457]: Removed session 11.
Dec 13 13:08:31.794715 sshd[3648]: Accepted publickey for core from 10.0.0.1 port 53992 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:31.799903 sshd-session[3648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:31.803555 systemd-logind[1457]: New session 12 of user core.
Dec 13 13:08:31.810900 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 13:08:32.006412 sshd[3650]: Connection closed by 10.0.0.1 port 53992
Dec 13 13:08:32.006680 sshd-session[3648]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:32.018315 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:53992.service: Deactivated successfully.
Dec 13 13:08:32.020888 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 13:08:32.023120 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit.
Dec 13 13:08:32.024753 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:54000.service - OpenSSH per-connection server daemon (10.0.0.1:54000).
Dec 13 13:08:32.025719 systemd-logind[1457]: Removed session 12.
Dec 13 13:08:32.072190 sshd[3661]: Accepted publickey for core from 10.0.0.1 port 54000 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:32.073473 sshd-session[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:32.077690 systemd-logind[1457]: New session 13 of user core.
Dec 13 13:08:32.084914 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 13:08:33.208766 sshd[3663]: Connection closed by 10.0.0.1 port 54000
Dec 13 13:08:33.209287 sshd-session[3661]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:33.218855 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:54000.service: Deactivated successfully.
Dec 13 13:08:33.222078 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 13:08:33.222849 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit.
Dec 13 13:08:33.235234 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:34012.service - OpenSSH per-connection server daemon (10.0.0.1:34012).
Dec 13 13:08:33.236760 systemd-logind[1457]: Removed session 13.
Dec 13 13:08:33.276866 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 34012 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:33.278031 sshd-session[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:33.281817 systemd-logind[1457]: New session 14 of user core.
Dec 13 13:08:33.293886 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 13:08:33.511784 sshd[3686]: Connection closed by 10.0.0.1 port 34012
Dec 13 13:08:33.512472 sshd-session[3684]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:33.524784 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:34012.service: Deactivated successfully.
Dec 13 13:08:33.526462 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 13:08:33.529805 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit.
Dec 13 13:08:33.537071 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:34016.service - OpenSSH per-connection server daemon (10.0.0.1:34016).
Dec 13 13:08:33.537858 systemd-logind[1457]: Removed session 14.
Dec 13 13:08:33.579212 sshd[3696]: Accepted publickey for core from 10.0.0.1 port 34016 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:33.580431 sshd-session[3696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:33.584078 systemd-logind[1457]: New session 15 of user core.
Dec 13 13:08:33.594895 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 13:08:33.701952 sshd[3698]: Connection closed by 10.0.0.1 port 34016
Dec 13 13:08:33.702314 sshd-session[3696]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:33.705435 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:34016.service: Deactivated successfully.
Dec 13 13:08:33.707100 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 13:08:33.707695 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit.
Dec 13 13:08:33.708763 systemd-logind[1457]: Removed session 15.
Dec 13 13:08:38.713339 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:34022.service - OpenSSH per-connection server daemon (10.0.0.1:34022).
Dec 13 13:08:38.758098 sshd[3738]: Accepted publickey for core from 10.0.0.1 port 34022 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:38.759218 sshd-session[3738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:38.763173 systemd-logind[1457]: New session 16 of user core.
Dec 13 13:08:38.776898 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 13:08:38.883647 sshd[3740]: Connection closed by 10.0.0.1 port 34022
Dec 13 13:08:38.883996 sshd-session[3738]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:38.887172 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:34022.service: Deactivated successfully.
Dec 13 13:08:38.888935 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 13:08:38.890412 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit.
Dec 13 13:08:38.891499 systemd-logind[1457]: Removed session 16.
Dec 13 13:08:43.895315 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:54486.service - OpenSSH per-connection server daemon (10.0.0.1:54486).
Dec 13 13:08:43.940100 sshd[3775]: Accepted publickey for core from 10.0.0.1 port 54486 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:43.941272 sshd-session[3775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:43.945306 systemd-logind[1457]: New session 17 of user core.
Dec 13 13:08:43.954891 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 13:08:44.060831 sshd[3777]: Connection closed by 10.0.0.1 port 54486
Dec 13 13:08:44.061164 sshd-session[3775]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:44.064569 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:54486.service: Deactivated successfully.
Dec 13 13:08:44.066441 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 13:08:44.067169 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit.
Dec 13 13:08:44.067942 systemd-logind[1457]: Removed session 17.
Dec 13 13:08:49.076831 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:54502.service - OpenSSH per-connection server daemon (10.0.0.1:54502).
Dec 13 13:08:49.122301 sshd[3810]: Accepted publickey for core from 10.0.0.1 port 54502 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4
Dec 13 13:08:49.123568 sshd-session[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:08:49.127349 systemd-logind[1457]: New session 18 of user core.
Dec 13 13:08:49.137907 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 13:08:49.247899 sshd[3812]: Connection closed by 10.0.0.1 port 54502
Dec 13 13:08:49.248382 sshd-session[3810]: pam_unix(sshd:session): session closed for user core
Dec 13 13:08:49.251515 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:54502.service: Deactivated successfully.
Dec 13 13:08:49.253443 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 13:08:49.254052 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit.
Dec 13 13:08:49.254972 systemd-logind[1457]: Removed session 18.