Dec 13 01:29:18.905633 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 01:29:18.905652 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:29:18.905662 kernel: KASLR enabled
Dec 13 01:29:18.905668 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:29:18.905673 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Dec 13 01:29:18.905679 kernel: random: crng init done
Dec 13 01:29:18.905686 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:29:18.905692 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Dec 13 01:29:18.905698 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:29:18.905705 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:18.905711 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:18.905717 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:18.905723 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:18.905729 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:18.905737 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:18.905744 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:18.905751 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:18.905757 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:18.905763 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 01:29:18.905769 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:29:18.905776 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:29:18.905782 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Dec 13 01:29:18.905788 kernel: Zone ranges:
Dec 13 01:29:18.905794 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:29:18.905801 kernel: DMA32 empty
Dec 13 01:29:18.905808 kernel: Normal empty
Dec 13 01:29:18.905814 kernel: Movable zone start for each node
Dec 13 01:29:18.905820 kernel: Early memory node ranges
Dec 13 01:29:18.905827 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Dec 13 01:29:18.905833 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Dec 13 01:29:18.905839 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Dec 13 01:29:18.905846 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 01:29:18.905852 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 01:29:18.905858 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 01:29:18.905864 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 01:29:18.905870 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:29:18.905877 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 01:29:18.905885 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:29:18.905891 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 01:29:18.905897 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:29:18.905911 kernel: psci: Trusted OS migration not required
Dec 13 01:29:18.905918 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:29:18.905925 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 01:29:18.905933 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:29:18.905939 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:29:18.905946 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 01:29:18.905953 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:29:18.905960 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:29:18.905967 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 01:29:18.905974 kernel: CPU features: detected: Spectre-v4
Dec 13 01:29:18.905981 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:29:18.905987 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 01:29:18.905994 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 01:29:18.906002 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 01:29:18.906009 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 01:29:18.906015 kernel: alternatives: applying boot alternatives
Dec 13 01:29:18.906023 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:29:18.906030 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:29:18.906036 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:29:18.906043 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:29:18.906050 kernel: Fallback order for Node 0: 0
Dec 13 01:29:18.906057 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 01:29:18.906064 kernel: Policy zone: DMA
Dec 13 01:29:18.906070 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:29:18.906078 kernel: software IO TLB: area num 4.
Dec 13 01:29:18.906085 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Dec 13 01:29:18.906092 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved)
Dec 13 01:29:18.906099 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:29:18.906105 kernel: trace event string verifier disabled
Dec 13 01:29:18.906112 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:29:18.906119 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:29:18.906126 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:29:18.906133 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:29:18.906140 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:29:18.906147 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:29:18.906154 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:29:18.906162 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:29:18.906169 kernel: GICv3: 256 SPIs implemented
Dec 13 01:29:18.906175 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:29:18.906182 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:29:18.906189 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 01:29:18.906195 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 01:29:18.906202 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 01:29:18.906209 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:29:18.906215 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:29:18.906222 kernel: GICv3: using LPI property table @0x00000000400f0000
Dec 13 01:29:18.906236 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Dec 13 01:29:18.906246 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:29:18.906253 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:29:18.906259 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 01:29:18.906266 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 01:29:18.906273 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 01:29:18.906280 kernel: arm-pv: using stolen time PV
Dec 13 01:29:18.906287 kernel: Console: colour dummy device 80x25
Dec 13 01:29:18.906293 kernel: ACPI: Core revision 20230628
Dec 13 01:29:18.906300 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 01:29:18.906307 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:29:18.906315 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:29:18.906322 kernel: landlock: Up and running.
Dec 13 01:29:18.906329 kernel: SELinux: Initializing.
Dec 13 01:29:18.906336 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:29:18.906343 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:29:18.906350 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:29:18.906356 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:29:18.906363 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:29:18.906370 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:29:18.906378 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 01:29:18.906385 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 01:29:18.906391 kernel: Remapping and enabling EFI services.
Dec 13 01:29:18.906398 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:29:18.906420 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:29:18.906428 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 01:29:18.906436 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Dec 13 01:29:18.906443 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:29:18.906449 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 01:29:18.906456 kernel: Detected PIPT I-cache on CPU2
Dec 13 01:29:18.906465 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 01:29:18.906472 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Dec 13 01:29:18.906483 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:29:18.906491 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 01:29:18.906498 kernel: Detected PIPT I-cache on CPU3
Dec 13 01:29:18.906505 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 01:29:18.906512 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Dec 13 01:29:18.906519 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:29:18.906527 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 01:29:18.906535 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:29:18.906542 kernel: SMP: Total of 4 processors activated.
Dec 13 01:29:18.906549 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:29:18.906557 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 01:29:18.906564 kernel: CPU features: detected: Common not Private translations
Dec 13 01:29:18.906571 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:29:18.906578 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 01:29:18.906585 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 01:29:18.906594 kernel: CPU features: detected: LSE atomic instructions
Dec 13 01:29:18.906601 kernel: CPU features: detected: Privileged Access Never
Dec 13 01:29:18.906608 kernel: CPU features: detected: RAS Extension Support
Dec 13 01:29:18.906615 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 01:29:18.906622 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:29:18.906630 kernel: alternatives: applying system-wide alternatives
Dec 13 01:29:18.906637 kernel: devtmpfs: initialized
Dec 13 01:29:18.906644 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:29:18.906652 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:29:18.906660 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:29:18.906667 kernel: SMBIOS 3.0.0 present.
Dec 13 01:29:18.906675 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Dec 13 01:29:18.906682 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:29:18.906689 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:29:18.906696 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:29:18.906704 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:29:18.906711 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:29:18.906718 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Dec 13 01:29:18.906727 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:29:18.906734 kernel: cpuidle: using governor menu
Dec 13 01:29:18.906741 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:29:18.906748 kernel: ASID allocator initialised with 32768 entries
Dec 13 01:29:18.906755 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:29:18.906762 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:29:18.906770 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 01:29:18.906777 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 01:29:18.906784 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:29:18.906792 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:29:18.906799 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:29:18.906807 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:29:18.906814 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:29:18.906821 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:29:18.906829 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:29:18.906836 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:29:18.906843 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:29:18.906850 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:29:18.906858 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:29:18.906865 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:29:18.906873 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:29:18.906880 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:29:18.906887 kernel: ACPI: Interpreter enabled
Dec 13 01:29:18.906894 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:29:18.906901 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:29:18.906908 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 01:29:18.906915 kernel: printk: console [ttyAMA0] enabled
Dec 13 01:29:18.906924 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:29:18.907048 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:29:18.907120 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:29:18.907183 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:29:18.907258 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 01:29:18.907323 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 01:29:18.907333 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 01:29:18.907342 kernel: PCI host bridge to bus 0000:00
Dec 13 01:29:18.907475 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 01:29:18.907542 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:29:18.907601 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 01:29:18.907657 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:29:18.907734 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 01:29:18.907809 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:29:18.907879 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 01:29:18.907942 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 01:29:18.908007 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:29:18.908071 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:29:18.908135 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 01:29:18.908198 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 01:29:18.908269 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 01:29:18.908330 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:29:18.908386 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 01:29:18.908395 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:29:18.908403 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:29:18.908420 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:29:18.908427 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:29:18.908434 kernel: iommu: Default domain type: Translated
Dec 13 01:29:18.908442 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:29:18.908452 kernel: efivars: Registered efivars operations
Dec 13 01:29:18.908459 kernel: vgaarb: loaded
Dec 13 01:29:18.908466 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:29:18.908473 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:29:18.908480 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:29:18.908488 kernel: pnp: PnP ACPI init
Dec 13 01:29:18.908565 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 01:29:18.908576 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:29:18.908585 kernel: NET: Registered PF_INET protocol family
Dec 13 01:29:18.908592 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:29:18.908600 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:29:18.908607 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:29:18.908615 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:29:18.908622 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:29:18.908629 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:29:18.908636 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:29:18.908644 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:29:18.908652 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:29:18.908659 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:29:18.908666 kernel: kvm [1]: HYP mode not available
Dec 13 01:29:18.908673 kernel: Initialise system trusted keyrings
Dec 13 01:29:18.908680 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:29:18.908688 kernel: Key type asymmetric registered
Dec 13 01:29:18.908695 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:29:18.908702 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:29:18.908709 kernel: io scheduler mq-deadline registered
Dec 13 01:29:18.908718 kernel: io scheduler kyber registered
Dec 13 01:29:18.908725 kernel: io scheduler bfq registered
Dec 13 01:29:18.908732 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:29:18.908739 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:29:18.908747 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:29:18.908814 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 01:29:18.908824 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:29:18.908831 kernel: thunder_xcv, ver 1.0
Dec 13 01:29:18.908838 kernel: thunder_bgx, ver 1.0
Dec 13 01:29:18.908847 kernel: nicpf, ver 1.0
Dec 13 01:29:18.908854 kernel: nicvf, ver 1.0
Dec 13 01:29:18.908924 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:29:18.908985 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:29:18 UTC (1734053358)
Dec 13 01:29:18.908995 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:29:18.909002 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 01:29:18.909009 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:29:18.909017 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:29:18.909025 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:29:18.909032 kernel: Segment Routing with IPv6
Dec 13 01:29:18.909040 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:29:18.909047 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:29:18.909054 kernel: Key type dns_resolver registered
Dec 13 01:29:18.909061 kernel: registered taskstats version 1
Dec 13 01:29:18.909068 kernel: Loading compiled-in X.509 certificates
Dec 13 01:29:18.909075 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:29:18.909083 kernel: Key type .fscrypt registered
Dec 13 01:29:18.909091 kernel: Key type fscrypt-provisioning registered
Dec 13 01:29:18.909098 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:29:18.909105 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:29:18.909113 kernel: ima: No architecture policies found
Dec 13 01:29:18.909120 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:29:18.909127 kernel: clk: Disabling unused clocks
Dec 13 01:29:18.909134 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:29:18.909142 kernel: Run /init as init process
Dec 13 01:29:18.909149 kernel: with arguments:
Dec 13 01:29:18.909157 kernel: /init
Dec 13 01:29:18.909164 kernel: with environment:
Dec 13 01:29:18.909171 kernel: HOME=/
Dec 13 01:29:18.909178 kernel: TERM=linux
Dec 13 01:29:18.909185 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:29:18.909194 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:29:18.909203 systemd[1]: Detected virtualization kvm.
Dec 13 01:29:18.909210 systemd[1]: Detected architecture arm64.
Dec 13 01:29:18.909219 systemd[1]: Running in initrd.
Dec 13 01:29:18.909227 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:29:18.909242 systemd[1]: Hostname set to .
Dec 13 01:29:18.909250 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:29:18.909258 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:29:18.909266 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:29:18.909273 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:29:18.909281 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:29:18.909292 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:29:18.909300 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:29:18.909308 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:29:18.909318 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:29:18.909325 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:29:18.909333 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:29:18.909342 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:29:18.909350 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:29:18.909358 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:29:18.909365 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:29:18.909373 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:29:18.909381 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:29:18.909388 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:29:18.909396 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:29:18.909404 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:29:18.909446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:29:18.909454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:29:18.909462 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:29:18.909469 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:29:18.909477 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:29:18.909485 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:29:18.909492 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:29:18.909500 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:29:18.909508 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:29:18.909517 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:29:18.909525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:18.909532 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:29:18.909540 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:29:18.909548 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:29:18.909574 systemd-journald[237]: Collecting audit messages is disabled.
Dec 13 01:29:18.909595 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:29:18.909603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:18.909612 systemd-journald[237]: Journal started
Dec 13 01:29:18.909630 systemd-journald[237]: Runtime Journal (/run/log/journal/601d33c636a5400ca4c0d45556d4692e) is 5.9M, max 47.3M, 41.4M free.
Dec 13 01:29:18.901127 systemd-modules-load[238]: Inserted module 'overlay'
Dec 13 01:29:18.912289 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:29:18.912643 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:29:18.918433 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:29:18.919812 systemd-modules-load[238]: Inserted module 'br_netfilter'
Dec 13 01:29:18.920679 kernel: Bridge firewalling registered
Dec 13 01:29:18.924576 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:18.926292 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:29:18.928347 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:29:18.930088 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:29:18.933361 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:29:18.940731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:29:18.942895 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:29:18.946301 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:29:18.948728 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:18.957577 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:29:18.959799 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:29:18.967877 dracut-cmdline[274]: dracut-dracut-053
Dec 13 01:29:18.970436 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:29:18.989378 systemd-resolved[275]: Positive Trust Anchors:
Dec 13 01:29:18.989399 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:29:18.989443 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:29:18.996671 systemd-resolved[275]: Defaulting to hostname 'linux'.
Dec 13 01:29:19.001520 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:29:19.002621 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:19.051457 kernel: SCSI subsystem initialized
Dec 13 01:29:19.055431 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:29:19.066459 kernel: iscsi: registered transport (tcp)
Dec 13 01:29:19.077430 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:29:19.077446 kernel: QLogic iSCSI HBA Driver
Dec 13 01:29:19.119938 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:29:19.131553 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:29:19.148376 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:29:19.148446 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:29:19.148472 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:29:19.197441 kernel: raid6: neonx8 gen() 15771 MB/s
Dec 13 01:29:19.214432 kernel: raid6: neonx4 gen() 15616 MB/s
Dec 13 01:29:19.231428 kernel: raid6: neonx2 gen() 13223 MB/s
Dec 13 01:29:19.248424 kernel: raid6: neonx1 gen() 10463 MB/s
Dec 13 01:29:19.265430 kernel: raid6: int64x8 gen() 6940 MB/s
Dec 13 01:29:19.282442 kernel: raid6: int64x4 gen() 7319 MB/s
Dec 13 01:29:19.299438 kernel: raid6: int64x2 gen() 6111 MB/s
Dec 13 01:29:19.316560 kernel: raid6: int64x1 gen() 5041 MB/s
Dec 13 01:29:19.316595 kernel: raid6: using algorithm neonx8 gen() 15771 MB/s
Dec 13 01:29:19.334565 kernel: raid6: .... xor() 11917 MB/s, rmw enabled
Dec 13 01:29:19.334625 kernel: raid6: using neon recovery algorithm
Dec 13 01:29:19.340908 kernel: xor: measuring software checksum speed
Dec 13 01:29:19.340945 kernel: 8regs : 19726 MB/sec
Dec 13 01:29:19.340954 kernel: 32regs : 18938 MB/sec
Dec 13 01:29:19.341537 kernel: arm64_neon : 26927 MB/sec
Dec 13 01:29:19.341561 kernel: xor: using function: arm64_neon (26927 MB/sec)
Dec 13 01:29:19.394443 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:29:19.405160 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:29:19.417677 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:29:19.430339 systemd-udevd[458]: Using default interface naming scheme 'v255'.
Dec 13 01:29:19.433501 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:29:19.441562 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:29:19.453483 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Dec 13 01:29:19.485887 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:29:19.491579 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:29:19.529934 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:29:19.539610 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:29:19.549353 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:29:19.551265 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:29:19.553130 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:19.555738 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:29:19.562576 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:29:19.573454 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:29:19.578449 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 01:29:19.597608 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:29:19.597709 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:29:19.597721 kernel: GPT:9289727 != 19775487
Dec 13 01:29:19.597736 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:29:19.597746 kernel: GPT:9289727 != 19775487
Dec 13 01:29:19.597754 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:29:19.597763 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:29:19.586295 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:29:19.586492 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:19.595089 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:19.596199 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:29:19.596347 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:19.597510 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:19.602621 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:19.617433 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:19.626439 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (518)
Dec 13 01:29:19.626476 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (510)
Dec 13 01:29:19.627946 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:19.638142 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:29:19.645397 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:29:19.649266 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:29:19.650559 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:29:19.656680 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:29:19.668569 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:29:19.669896 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:19.674162 disk-uuid[559]: Primary Header is updated.
Dec 13 01:29:19.674162 disk-uuid[559]: Secondary Entries is updated.
Dec 13 01:29:19.674162 disk-uuid[559]: Secondary Header is updated.
Dec 13 01:29:19.677556 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:29:20.693541 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:29:20.693595 disk-uuid[560]: The operation has completed successfully.
Dec 13 01:29:20.723327 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:29:20.723464 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:29:20.752582 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:29:20.755641 sh[574]: Success
Dec 13 01:29:20.776435 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:29:20.827050 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:29:20.829336 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:29:20.833508 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:29:20.850503 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:29:20.850555 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:29:20.850567 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:29:20.850577 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:29:20.851916 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:29:20.855780 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:29:20.857210 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:29:20.869617 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:29:20.871237 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:29:20.883541 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:29:20.883586 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:29:20.883598 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:29:20.887446 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:29:20.899643 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:29:20.899276 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:29:20.953698 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:29:20.965573 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:29:20.966803 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:29:20.970844 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:29:21.010071 systemd-networkd[757]: lo: Link UP
Dec 13 01:29:21.010085 systemd-networkd[757]: lo: Gained carrier
Dec 13 01:29:21.010798 systemd-networkd[757]: Enumeration completed
Dec 13 01:29:21.011321 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:21.011324 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:29:21.012101 systemd-networkd[757]: eth0: Link UP
Dec 13 01:29:21.012104 systemd-networkd[757]: eth0: Gained carrier
Dec 13 01:29:21.012111 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:21.012523 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:29:21.013857 systemd[1]: Reached target network.target - Network.
Dec 13 01:29:21.030528 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:29:21.092981 ignition[751]: Ignition 2.19.0
Dec 13 01:29:21.092992 ignition[751]: Stage: fetch-offline
Dec 13 01:29:21.093032 ignition[751]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:21.093042 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:29:21.093207 ignition[751]: parsed url from cmdline: ""
Dec 13 01:29:21.093209 ignition[751]: no config URL provided
Dec 13 01:29:21.093214 ignition[751]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:29:21.093227 ignition[751]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:29:21.093252 ignition[751]: op(1): [started] loading QEMU firmware config module
Dec 13 01:29:21.093256 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:29:21.103120 ignition[751]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:29:21.140710 ignition[751]: parsing config with SHA512: 9e4fbd12e33fea36d2c05fd211bed2cd79310f980f0abd0e590d6b423af4f7695e1cdc1b01ff0f5e1c876919b51f633dbb0dd695b65b358fb6cc68587dbb6d22
Dec 13 01:29:21.147450 unknown[751]: fetched base config from "system"
Dec 13 01:29:21.147461 unknown[751]: fetched user config from "qemu"
Dec 13 01:29:21.147954 ignition[751]: fetch-offline: fetch-offline passed
Dec 13 01:29:21.148029 ignition[751]: Ignition finished successfully
Dec 13 01:29:21.150357 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:29:21.152233 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:29:21.161543 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:29:21.174596 ignition[773]: Ignition 2.19.0
Dec 13 01:29:21.174605 ignition[773]: Stage: kargs
Dec 13 01:29:21.174779 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:21.174788 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:29:21.175643 ignition[773]: kargs: kargs passed
Dec 13 01:29:21.175690 ignition[773]: Ignition finished successfully
Dec 13 01:29:21.179116 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:29:21.186811 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:29:21.203698 ignition[781]: Ignition 2.19.0
Dec 13 01:29:21.203708 ignition[781]: Stage: disks
Dec 13 01:29:21.203877 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:21.206637 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:29:21.203887 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:29:21.208192 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:29:21.204770 ignition[781]: disks: disks passed
Dec 13 01:29:21.209897 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:29:21.204815 ignition[781]: Ignition finished successfully
Dec 13 01:29:21.211925 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:29:21.213783 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:29:21.215295 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:29:21.225601 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:29:21.235169 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:29:21.239613 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:29:21.250560 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:29:21.291428 kernel: EXT4-fs (vda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:29:21.291709 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:29:21.293015 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:29:21.311514 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:29:21.313162 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:29:21.314643 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:29:21.322853 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
Dec 13 01:29:21.314690 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:29:21.327672 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:29:21.327711 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:29:21.327738 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:29:21.314712 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:29:21.325668 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:29:21.329309 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:29:21.335440 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:29:21.336163 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:29:21.374276 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:29:21.378438 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:29:21.383218 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:29:21.387606 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:29:21.469937 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:29:21.481514 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:29:21.484430 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:29:21.488426 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:29:21.510837 ignition[913]: INFO : Ignition 2.19.0
Dec 13 01:29:21.510837 ignition[913]: INFO : Stage: mount
Dec 13 01:29:21.510837 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:21.510837 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:29:21.516372 ignition[913]: INFO : mount: mount passed
Dec 13 01:29:21.516372 ignition[913]: INFO : Ignition finished successfully
Dec 13 01:29:21.512507 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:29:21.515370 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:29:21.529519 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:29:21.849751 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:29:21.858620 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:29:21.864431 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Dec 13 01:29:21.867152 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:29:21.867182 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:29:21.867193 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:29:21.870420 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:29:21.871288 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:29:21.892614 ignition[943]: INFO : Ignition 2.19.0
Dec 13 01:29:21.892614 ignition[943]: INFO : Stage: files
Dec 13 01:29:21.894282 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:21.894282 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:29:21.894282 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:29:21.897613 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:29:21.897613 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:29:21.900540 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:29:21.900540 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:29:21.900540 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:29:21.900017 unknown[943]: wrote ssh authorized keys file for user: core
Dec 13 01:29:21.905476 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:29:21.905476 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:29:21.961521 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:29:22.195096 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:29:22.195096 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:29:22.195096 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 01:29:22.454570 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:29:22.631882 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 01:29:22.754800 systemd-networkd[757]: eth0: Gained IPv6LL
Dec 13 01:29:22.878144 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:29:23.219256 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:29:23.219256 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:29:23.224060 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:29:23.224060 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:29:23.224060 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:29:23.224060 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 01:29:23.224060 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:29:23.224060 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:29:23.224060 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 01:29:23.224060 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:29:23.245343 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:29:23.266985 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:29:23.268620 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:29:23.268620 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:29:23.268620 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:29:23.268620 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:29:23.268620 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:29:23.268620 ignition[943]: INFO : files: files passed
Dec 13 01:29:23.268620 ignition[943]: INFO : Ignition finished successfully
Dec 13 01:29:23.270524 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:29:23.286625 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:29:23.289357 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:29:23.290800 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:29:23.290890 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:29:23.298433 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:29:23.301586 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:23.301586 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:23.304954 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:23.305386 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:29:23.308751 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:29:23.316594 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:29:23.337840 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:29:23.337950 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:29:23.340050 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:29:23.341892 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:29:23.343630 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:29:23.344533 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:29:23.360177 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:29:23.378589 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:29:23.386746 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:23.387938 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:23.389983 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:29:23.391669 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:29:23.391800 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:29:23.394153 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:29:23.395116 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:29:23.396966 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:29:23.398781 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:29:23.400597 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:29:23.402472 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:29:23.404401 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:29:23.406558 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:29:23.408356 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:29:23.410316 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:29:23.411798 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:29:23.411931 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:23.414014 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:23.415140 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:23.416944 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:29:23.420809 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:23.422003 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:29:23.422133 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:23.424884 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:29:23.425003 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:23.426995 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:29:23.428597 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:29:23.429517 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:23.430627 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:29:23.432117 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:29:23.433715 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:29:23.433811 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:29:23.435902 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:29:23.435986 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:29:23.437367 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:29:23.437501 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:23.439681 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:29:23.439786 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:29:23.453608 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:29:23.455265 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:29:23.456096 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:29:23.456230 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:23.458022 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:29:23.458124 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:23.463237 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:29:23.464524 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Dec 13 01:29:23.467030 ignition[998]: INFO : Ignition 2.19.0 Dec 13 01:29:23.467030 ignition[998]: INFO : Stage: umount Dec 13 01:29:23.468629 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:23.468629 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:29:23.468629 ignition[998]: INFO : umount: umount passed Dec 13 01:29:23.468629 ignition[998]: INFO : Ignition finished successfully Dec 13 01:29:23.470774 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:29:23.470881 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:29:23.472156 systemd[1]: Stopped target network.target - Network. Dec 13 01:29:23.473564 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:29:23.473626 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:29:23.476012 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:29:23.476060 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:29:23.477879 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:29:23.477927 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:29:23.479640 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:29:23.479686 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:29:23.483484 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:29:23.484976 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:29:23.487522 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:29:23.495274 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:29:23.495403 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:29:23.497561 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:29:23.497626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:23.502494 systemd-networkd[757]: eth0: DHCPv6 lease lost Dec 13 01:29:23.504551 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:29:23.504690 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:29:23.507916 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:29:23.507962 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:23.522518 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:29:23.523289 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:29:23.523356 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:23.531640 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:29:23.531698 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:23.532787 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:29:23.532833 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:23.534782 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:23.544893 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:29:23.545926 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:23.547487 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Dec 13 01:29:23.547574 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:29:23.549393 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:29:23.549488 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:29:23.552607 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:29:23.552662 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:23.553758 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:29:23.553793 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:23.556157 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:29:23.556211 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:29:23.558558 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:29:23.558609 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:29:23.561287 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:23.561335 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:23.564229 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:29:23.564281 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:23.575614 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:29:23.576682 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:29:23.576750 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:23.578869 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:29:23.578915 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:23.580897 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:29:23.580945 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:23.583106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:23.583156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:23.585589 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:29:23.585675 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:29:23.588971 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:29:23.591124 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:29:23.602154 systemd[1]: Switching root. Dec 13 01:29:23.631860 systemd-journald[237]: Journal stopped Dec 13 01:29:24.323122 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Dec 13 01:29:24.323173 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:29:24.323186 kernel: SELinux: policy capability open_perms=1 Dec 13 01:29:24.323196 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:29:24.323208 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:29:24.323229 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:29:24.323241 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:29:24.323250 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:29:24.323264 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:29:24.323273 systemd[1]: Successfully loaded SELinux policy in 30.963ms. Dec 13 01:29:24.323291 kernel: audit: type=1403 audit(1734053363.778:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:29:24.323302 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.232ms. Dec 13 01:29:24.323318 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:29:24.323329 systemd[1]: Detected virtualization kvm. Dec 13 01:29:24.323339 systemd[1]: Detected architecture arm64. Dec 13 01:29:24.323349 systemd[1]: Detected first boot. Dec 13 01:29:24.323359 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:29:24.323369 zram_generator::config[1043]: No configuration found. Dec 13 01:29:24.323382 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:29:24.323393 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:29:24.323403 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:29:24.323452 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:24.323465 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:29:24.323475 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:29:24.323486 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:29:24.323497 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:29:24.323511 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:29:24.323524 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:29:24.323535 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:29:24.323547 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:29:24.323558 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:24.323569 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:24.323579 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:29:24.323589 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:29:24.323600 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
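
"Detected first boot" and "Initializing machine ID from VM UUID" above follow from the machine-id contract: a missing, empty, or "uninitialized" /etc/machine-id marks a first boot, and PID 1 then seeds a fresh ID (here taken from the KVM VM UUID). A simplified userspace version of that check — an illustration, not systemd's actual code:

# Simplified illustration of systemd's first-boot test (see machine-id(5)):
# missing, empty, or "uninitialized" /etc/machine-id means first boot.
from pathlib import Path

p = Path("/etc/machine-id")
text = p.read_text().strip() if p.exists() else ""
if text in ("", "uninitialized"):
    print("first boot: machine ID must be initialized")
else:
    print(f"machine ID: {text}")
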
Dec 13 01:29:24.323611 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:29:24.323622 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:29:24.323632 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:24.323643 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:29:24.323653 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:29:24.323664 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:29:24.323674 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:29:24.323685 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:24.323698 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:24.323708 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:29:24.323718 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:29:24.323729 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:29:24.323739 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:29:24.323750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:24.323761 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:24.323772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:24.323782 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:29:24.323792 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:29:24.323804 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:29:24.323815 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:29:24.323825 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:29:24.323835 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:29:24.323845 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:29:24.323856 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:29:24.323867 systemd[1]: Reached target machines.target - Containers. Dec 13 01:29:24.323878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:29:24.323890 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:24.323900 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:29:24.323910 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:29:24.323921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:24.323932 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:24.323943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:24.323954 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:29:24.323964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 13 01:29:24.323975 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:29:24.323987 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:29:24.323999 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:29:24.324009 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:29:24.324019 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:29:24.324029 kernel: fuse: init (API version 7.39) Dec 13 01:29:24.324038 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:29:24.324048 kernel: loop: module loaded Dec 13 01:29:24.324058 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:29:24.324068 kernel: ACPI: bus type drm_connector registered Dec 13 01:29:24.324080 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:29:24.324091 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:29:24.324101 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:24.324112 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:29:24.324122 systemd[1]: Stopped verity-setup.service. Dec 13 01:29:24.324132 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:29:24.324158 systemd-journald[1110]: Collecting audit messages is disabled. Dec 13 01:29:24.324186 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:29:24.324197 systemd-journald[1110]: Journal started Dec 13 01:29:24.324224 systemd-journald[1110]: Runtime Journal (/run/log/journal/601d33c636a5400ca4c0d45556d4692e) is 5.9M, max 47.3M, 41.4M free. Dec 13 01:29:24.125782 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:29:24.141402 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:29:24.141752 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:29:24.327424 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:29:24.327924 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:29:24.328883 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:29:24.329980 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:29:24.331153 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:29:24.333448 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:29:24.334635 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:24.336111 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:29:24.336283 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:29:24.337668 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:24.338478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:24.339829 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:24.339974 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:24.341281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:24.341452 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
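
The modprobe@*.service entries above are all instances of one template unit: the name after the '@' is substituted for the %i specifier in the unit body. A sketch of the pattern — the unit text below only loosely mirrors the stock modprobe@.service, so treat it as illustrative:

# Template-unit pattern behind modprobe@configfs/dm_mod/drm/... above:
# one unit file, with %i carrying the module name per instance.
TEMPLATE = """[Unit]
Description=Load Kernel Module %i

[Service]
Type=oneshot
ExecStart=/sbin/modprobe -abq %i
"""

for module in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
    print(f"modprobe@{module}.service ->")
    print(TEMPLATE.replace("%i", module))
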
Dec 13 01:29:24.342838 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:29:24.342983 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:29:24.344300 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:24.344582 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:24.345958 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:24.347303 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:29:24.348838 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:29:24.360578 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:29:24.371502 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:29:24.373472 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:29:24.374527 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:29:24.374565 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:24.376454 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:29:24.378554 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:29:24.380526 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:29:24.381560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:24.383066 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:29:24.385031 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:29:24.386280 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:24.389637 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:29:24.390755 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:24.392109 systemd-journald[1110]: Time spent on flushing to /var/log/journal/601d33c636a5400ca4c0d45556d4692e is 24.196ms for 858 entries. Dec 13 01:29:24.392109 systemd-journald[1110]: System Journal (/var/log/journal/601d33c636a5400ca4c0d45556d4692e) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:29:24.427421 systemd-journald[1110]: Received client request to flush runtime journal. Dec 13 01:29:24.392641 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:29:24.399612 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:29:24.401826 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:29:24.406457 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:24.407898 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:29:24.409228 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:29:24.411621 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
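
For scale, the journal-flush statistics a few entries above (24.196 ms for 858 entries) work out to roughly 28 µs per entry:

# Numbers taken straight from the systemd-journald message above.
ms, entries = 24.196, 858
print(f"{ms / entries * 1000:.1f} us per flushed entry")  # ~28.2 us
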
Dec 13 01:29:24.416689 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:29:24.418801 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:29:24.431536 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:29:24.437809 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:29:24.439577 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:29:24.441634 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Dec 13 01:29:24.441648 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Dec 13 01:29:24.443658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:24.446396 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:24.450437 kernel: loop0: detected capacity change from 0 to 114432 Dec 13 01:29:24.460683 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:29:24.464264 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:29:24.465029 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:29:24.466427 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:29:24.471317 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:29:24.497454 kernel: loop1: detected capacity change from 0 to 114328 Dec 13 01:29:24.506858 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:29:24.513575 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:29:24.526434 kernel: loop2: detected capacity change from 0 to 194512 Dec 13 01:29:24.527377 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Dec 13 01:29:24.527396 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Dec 13 01:29:24.532863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:24.557446 kernel: loop3: detected capacity change from 0 to 114432 Dec 13 01:29:24.563425 kernel: loop4: detected capacity change from 0 to 114328 Dec 13 01:29:24.568438 kernel: loop5: detected capacity change from 0 to 194512 Dec 13 01:29:24.572755 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:29:24.573143 (sd-merge)[1183]: Merged extensions into '/usr'. Dec 13 01:29:24.576314 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:29:24.576331 systemd[1]: Reloading... Dec 13 01:29:24.624428 zram_generator::config[1211]: No configuration found. Dec 13 01:29:24.658512 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:29:24.719974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:24.755980 systemd[1]: Reloading finished in 179 ms. Dec 13 01:29:24.795741 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:29:24.797154 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
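
The (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr. For a merge to be accepted, each image must carry an extension-release file matching the host; a minimal sketch of that layout follows — the "example" name is hypothetical, and the images in this log ship as .raw files linked under /etc/extensions instead (see systemd-sysext(8)):

# Minimal directory layout systemd-sysext requires of an extension tree.
from pathlib import Path

root = Path("example-sysext")
release = root / "usr/lib/extension-release.d/extension-release.example"
release.parent.mkdir(parents=True, exist_ok=True)
# ID=_any opts out of OS-ID matching; ARCHITECTURE must match the host
# (arm64 for this machine) if it is set at all.
release.write_text("ID=_any\nARCHITECTURE=arm64\n")
(root / "usr/bin").mkdir(parents=True, exist_ok=True)  # payload goes under usr/
print(f"sketched extension tree at {root}/")
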
Dec 13 01:29:24.814600 systemd[1]: Starting ensure-sysext.service... Dec 13 01:29:24.816517 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:29:24.825455 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:29:24.825474 systemd[1]: Reloading... Dec 13 01:29:24.834102 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:29:24.834372 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:29:24.835025 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:29:24.835253 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Dec 13 01:29:24.835305 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Dec 13 01:29:24.838655 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:29:24.838668 systemd-tmpfiles[1246]: Skipping /boot Dec 13 01:29:24.846706 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:29:24.846723 systemd-tmpfiles[1246]: Skipping /boot Dec 13 01:29:24.867543 zram_generator::config[1273]: No configuration found. Dec 13 01:29:24.947942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:24.983588 systemd[1]: Reloading finished in 157 ms. Dec 13 01:29:25.000495 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:29:25.012869 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:25.020847 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:25.023651 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:29:25.025807 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:29:25.029771 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:29:25.036753 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:25.040896 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:29:25.045156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:25.047876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:25.050663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:25.054021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:25.056652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:25.059642 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:29:25.061864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:25.061990 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:25.065813 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Dec 13 01:29:25.069039 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:25.074744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:25.078567 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Dec 13 01:29:25.079060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:25.079225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:25.084020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:25.084253 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:25.085959 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:29:25.089134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:25.090652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:25.093616 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:25.096718 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:25.097775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:25.100435 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:29:25.102494 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:29:25.106845 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:25.107066 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:25.109558 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:25.109693 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:25.111320 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:29:25.113333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:25.120259 augenrules[1352]: No rules Dec 13 01:29:25.120742 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:29:25.122743 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:25.140435 systemd[1]: Finished ensure-sysext.service. Dec 13 01:29:25.141730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:25.141869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:25.151277 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:25.158635 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:25.161712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:25.166928 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:25.169658 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:25.172027 systemd-resolved[1314]: Positive Trust Anchors: Dec 13 01:29:25.172046 systemd-resolved[1314]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:29:25.172078 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:29:25.173822 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:25.176604 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1368) Dec 13 01:29:25.181077 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:29:25.183493 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:29:25.183899 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:25.185461 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:25.187733 systemd-resolved[1314]: Defaulting to hostname 'linux'. Dec 13 01:29:25.190592 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:29:25.194961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:25.196462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:25.198397 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 01:29:25.201475 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1349) Dec 13 01:29:25.204802 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1368) Dec 13 01:29:25.214623 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:25.217511 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:25.220556 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:25.220729 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:25.222730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:25.235643 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:29:25.243643 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:29:25.250820 systemd-networkd[1382]: lo: Link UP Dec 13 01:29:25.250831 systemd-networkd[1382]: lo: Gained carrier Dec 13 01:29:25.251629 systemd-networkd[1382]: Enumeration completed Dec 13 01:29:25.251732 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:29:25.252265 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
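
The positive trust anchor logged just above is the DNS root zone's KSK-2017 delegation-signer record, built into systemd-resolved for DNSSEC validation. Split into its fields:

# Field-by-field view of the ". IN DS" trust anchor above; values are
# copied from the log, annotations are the standard registry meanings.
ds = "20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
key_tag, algorithm, digest_type, digest = ds.split()
print(f"key tag     : {key_tag} (root KSK-2017)")
print(f"algorithm   : {algorithm} (RSASHA256)")
print(f"digest type : {digest_type} (SHA-256)")
print(f"digest      : {digest}")
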
Dec 13 01:29:25.252275 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:25.252895 systemd[1]: Reached target network.target - Network. Dec 13 01:29:25.253158 systemd-networkd[1382]: eth0: Link UP Dec 13 01:29:25.253167 systemd-networkd[1382]: eth0: Gained carrier Dec 13 01:29:25.253182 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:25.255133 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:29:25.270232 systemd-networkd[1382]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:29:25.273443 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:29:25.274966 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:29:25.276533 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:29:25.276579 systemd-timesyncd[1383]: Initial clock synchronization to Fri 2024-12-13 01:29:24.943780 UTC. Dec 13 01:29:25.279236 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:29:25.287649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:25.296586 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:29:25.298896 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:29:25.319448 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:25.325506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:25.354884 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:29:25.356334 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:25.358577 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:25.359620 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:29:25.360693 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:29:25.362004 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:29:25.363081 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:29:25.364316 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:29:25.365512 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:29:25.365547 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:29:25.366379 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:29:25.368575 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:29:25.371786 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:29:25.382451 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:29:25.384623 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:29:25.386157 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
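
eth0 above is matched by Flatcar's catch-all zz-default.network and configured via DHCP, acquiring 10.0.0.66/16 from 10.0.0.1 — which then also serves as the NTP source for systemd-timesyncd. A local override with the same effect would be a small .network file; the sketch below uses standard systemd.network(5) keys rather than anything taken from this host:

# Sketch: a systemd.network(5) file equivalent to the DHCP setup logged
# above. Installed under /etc/systemd/network/, it would take precedence
# over /usr/lib/systemd/network/zz-default.network for eth0.
from pathlib import Path

network_unit = """[Match]
Name=eth0

[Network]
DHCP=yes
"""
target = Path("10-eth0.network")  # staged locally for inspection
target.write_text(network_unit)
print(target.read_text())
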
Dec 13 01:29:25.387347 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:29:25.388307 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:29:25.389283 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:29:25.389316 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:29:25.390193 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:29:25.391891 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:25.392603 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:29:25.394916 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:29:25.397660 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:29:25.399659 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:29:25.401820 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:29:25.405297 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:29:25.407241 jq[1415]: false Dec 13 01:29:25.407777 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:29:25.410583 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:29:25.414033 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:29:25.418583 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:29:25.418971 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:29:25.419682 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:29:25.422576 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:29:25.425392 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:29:25.429758 extend-filesystems[1416]: Found loop3 Dec 13 01:29:25.429758 extend-filesystems[1416]: Found loop4 Dec 13 01:29:25.429758 extend-filesystems[1416]: Found loop5 Dec 13 01:29:25.429758 extend-filesystems[1416]: Found vda Dec 13 01:29:25.429758 extend-filesystems[1416]: Found vda1 Dec 13 01:29:25.429758 extend-filesystems[1416]: Found vda2 Dec 13 01:29:25.429758 extend-filesystems[1416]: Found vda3 Dec 13 01:29:25.429758 extend-filesystems[1416]: Found usr Dec 13 01:29:25.429758 extend-filesystems[1416]: Found vda4 Dec 13 01:29:25.429758 extend-filesystems[1416]: Found vda6 Dec 13 01:29:25.429758 extend-filesystems[1416]: Found vda7 Dec 13 01:29:25.429758 extend-filesystems[1416]: Found vda9 Dec 13 01:29:25.429758 extend-filesystems[1416]: Checking size of /dev/vda9 Dec 13 01:29:25.425854 dbus-daemon[1414]: [system] SELinux support is enabled Dec 13 01:29:25.428662 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:29:25.474626 extend-filesystems[1416]: Resized partition /dev/vda9 Dec 13 01:29:25.476691 update_engine[1425]: I20241213 01:29:25.476392 1425 main.cc:92] Flatcar Update Engine starting Dec 13 01:29:25.433655 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Dec 13 01:29:25.478518 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:29:25.483385 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:29:25.483423 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1367) Dec 13 01:29:25.483438 jq[1430]: true Dec 13 01:29:25.433819 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:29:25.435561 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:29:25.435720 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:29:25.490934 update_engine[1425]: I20241213 01:29:25.490144 1425 update_check_scheduler.cc:74] Next update check in 6m59s Dec 13 01:29:25.490973 tar[1437]: linux-arm64/helm Dec 13 01:29:25.438293 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:29:25.438559 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:29:25.491394 jq[1439]: true Dec 13 01:29:25.449398 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:29:25.449618 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:29:25.449669 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:29:25.451978 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:29:25.452005 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:29:25.488539 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:29:25.498692 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:29:25.495673 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:29:25.508476 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:29:25.508775 systemd-logind[1422]: New seat seat0. Dec 13 01:29:25.509080 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:29:25.509080 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:29:25.509080 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:29:25.527771 extend-filesystems[1416]: Resized filesystem in /dev/vda9 Dec 13 01:29:25.510600 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:29:25.511920 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:29:25.516733 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:29:25.559445 bash[1471]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:25.561445 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:29:25.566657 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
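
The extend-filesystems pass above grows the root ext4 on /dev/vda9 online; in bytes, the logged block counts (4 KiB blocks) amount to roughly 2.1 GiB before and 7.1 GiB after:

# Block counts from the EXT4-fs/resize2fs messages above, 4 KiB per block.
old_blocks, new_blocks, block_size = 553472, 1864699, 4096
for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * block_size / 2**30:.2f} GiB")
# before: 2.11 GiB, after: 7.11 GiB
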
Dec 13 01:29:25.593713 locksmithd[1454]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:29:25.679427 containerd[1440]: time="2024-12-13T01:29:25.679314680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:29:25.708822 containerd[1440]: time="2024-12-13T01:29:25.708772320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711482 containerd[1440]: time="2024-12-13T01:29:25.710339440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711482 containerd[1440]: time="2024-12-13T01:29:25.710372600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:29:25.711482 containerd[1440]: time="2024-12-13T01:29:25.710394720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:29:25.711482 containerd[1440]: time="2024-12-13T01:29:25.710553560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:29:25.711482 containerd[1440]: time="2024-12-13T01:29:25.710570320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711482 containerd[1440]: time="2024-12-13T01:29:25.710620280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711482 containerd[1440]: time="2024-12-13T01:29:25.710632760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711482 containerd[1440]: time="2024-12-13T01:29:25.710795040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711482 containerd[1440]: time="2024-12-13T01:29:25.710810560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711482 containerd[1440]: time="2024-12-13T01:29:25.710825840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711482 containerd[1440]: time="2024-12-13T01:29:25.710835120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711785 containerd[1440]: time="2024-12-13T01:29:25.710905640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711785 containerd[1440]: time="2024-12-13T01:29:25.711100520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711785 containerd[1440]: time="2024-12-13T01:29:25.711199720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:25.711785 containerd[1440]: time="2024-12-13T01:29:25.711223720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:29:25.711785 containerd[1440]: time="2024-12-13T01:29:25.711300360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:29:25.711785 containerd[1440]: time="2024-12-13T01:29:25.711341440Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:29:25.715363 containerd[1440]: time="2024-12-13T01:29:25.715331440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:29:25.715553 containerd[1440]: time="2024-12-13T01:29:25.715533200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:29:25.715683 containerd[1440]: time="2024-12-13T01:29:25.715667440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:29:25.715742 containerd[1440]: time="2024-12-13T01:29:25.715729080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:29:25.715860 containerd[1440]: time="2024-12-13T01:29:25.715843600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:29:25.716046 containerd[1440]: time="2024-12-13T01:29:25.716027080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:29:25.716566 containerd[1440]: time="2024-12-13T01:29:25.716543240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:29:25.716752 containerd[1440]: time="2024-12-13T01:29:25.716731280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:29:25.716887 containerd[1440]: time="2024-12-13T01:29:25.716869200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:29:25.716966 containerd[1440]: time="2024-12-13T01:29:25.716951120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:29:25.717092 containerd[1440]: time="2024-12-13T01:29:25.717074640Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:29:25.717162 containerd[1440]: time="2024-12-13T01:29:25.717148360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:29:25.717227 containerd[1440]: time="2024-12-13T01:29:25.717204000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:29:25.717332 containerd[1440]: time="2024-12-13T01:29:25.717317240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:29:25.717394 containerd[1440]: time="2024-12-13T01:29:25.717379520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 01:29:25.717536 containerd[1440]: time="2024-12-13T01:29:25.717517320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:29:25.717594 containerd[1440]: time="2024-12-13T01:29:25.717581400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:29:25.717720 containerd[1440]: time="2024-12-13T01:29:25.717703640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:29:25.717796 containerd[1440]: time="2024-12-13T01:29:25.717782360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.717889280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.717910280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.717923600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.717935800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.717950480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.717962320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.717976120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.717989360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.718004400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.718016080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.718034920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.718048320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.718063680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.718086160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.718325 containerd[1440]: time="2024-12-13T01:29:25.718098120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 01:29:25.718635 containerd[1440]: time="2024-12-13T01:29:25.718116960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:29:25.719339 containerd[1440]: time="2024-12-13T01:29:25.719312840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:29:25.719708 containerd[1440]: time="2024-12-13T01:29:25.719632200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:29:25.720449 containerd[1440]: time="2024-12-13T01:29:25.719761600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:29:25.720449 containerd[1440]: time="2024-12-13T01:29:25.719781320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:29:25.720449 containerd[1440]: time="2024-12-13T01:29:25.719793560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.720449 containerd[1440]: time="2024-12-13T01:29:25.719807000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:29:25.720449 containerd[1440]: time="2024-12-13T01:29:25.719816560Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:29:25.720449 containerd[1440]: time="2024-12-13T01:29:25.719828720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:29:25.720612 containerd[1440]: time="2024-12-13T01:29:25.720168800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:29:25.720612 containerd[1440]: time="2024-12-13T01:29:25.720234880Z" level=info msg="Connect containerd service" Dec 13 01:29:25.720612 containerd[1440]: time="2024-12-13T01:29:25.720261760Z" level=info msg="using legacy CRI server" Dec 13 01:29:25.720612 containerd[1440]: time="2024-12-13T01:29:25.720268160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:29:25.720612 containerd[1440]: time="2024-12-13T01:29:25.720344480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:29:25.720990 containerd[1440]: time="2024-12-13T01:29:25.720950040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:29:25.721413 containerd[1440]: time="2024-12-13T01:29:25.721386000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:29:25.721450 containerd[1440]: time="2024-12-13T01:29:25.721413000Z" level=info msg="Start subscribing containerd event" Dec 13 01:29:25.721471 containerd[1440]: time="2024-12-13T01:29:25.721449440Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:29:25.721471 containerd[1440]: time="2024-12-13T01:29:25.721464880Z" level=info msg="Start recovering state" Dec 13 01:29:25.725725 containerd[1440]: time="2024-12-13T01:29:25.721736440Z" level=info msg="Start event monitor" Dec 13 01:29:25.725725 containerd[1440]: time="2024-12-13T01:29:25.721765800Z" level=info msg="Start snapshots syncer" Dec 13 01:29:25.725725 containerd[1440]: time="2024-12-13T01:29:25.721777480Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:29:25.725725 containerd[1440]: time="2024-12-13T01:29:25.721786080Z" level=info msg="Start streaming server" Dec 13 01:29:25.725725 containerd[1440]: time="2024-12-13T01:29:25.722739880Z" level=info msg="containerd successfully booted in 0.044903s" Dec 13 01:29:25.724510 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:29:25.857685 tar[1437]: linux-arm64/LICENSE Dec 13 01:29:25.857772 tar[1437]: linux-arm64/README.md Dec 13 01:29:25.876453 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:29:26.182370 sshd_keygen[1433]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:29:26.202457 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:29:26.212646 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:29:26.217582 systemd[1]: issuegen.service: Deactivated successfully. 
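The CRI plugin above comes up without pod networking: "no network config found in /etc/cni/net.d" means containerd reports the network as not ready until a CNI config file appears in that directory (and with NetworkPluginMaxConfNum:1 only the first file is used). A cluster network add-on normally installs this file. Purely as a sketch, a minimal bridge config could look like the following, assuming the reference CNI plugins (bridge, host-local, portmap) are present under the /opt/cni/bin directory named in the config dump; the 10.244.0.0/24 subnet is a made-up example:

  # Sketch only: a real cluster gets its CNI config from its network add-on.
  cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
  {
    "cniVersion": "0.4.0",
    "name": "bridge-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "subnet": "10.244.0.0/24",
          "routes": [ { "dst": "0.0.0.0/0" } ]
        }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF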
Dec 13 01:29:26.219443 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:29:26.221665 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:29:26.233453 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:29:26.235804 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:29:26.237634 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:29:26.238964 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:29:27.106611 systemd-networkd[1382]: eth0: Gained IPv6LL Dec 13 01:29:27.108935 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:29:27.110656 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:29:27.123615 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:29:27.125844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:27.127836 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:29:27.141368 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:29:27.143511 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:29:27.144828 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:29:27.145952 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:29:27.572485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:27.573751 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:29:27.576059 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:27.576537 systemd[1]: Startup finished in 554ms (kernel) + 5.078s (initrd) + 3.829s (userspace) = 9.462s. Dec 13 01:29:28.020654 kubelet[1526]: E1213 01:29:28.020481 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:28.023159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:28.023289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:30.890422 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:29:30.891642 systemd[1]: Started sshd@0-10.0.0.66:22-10.0.0.1:44038.service - OpenSSH per-connection server daemon (10.0.0.1:44038). Dec 13 01:29:30.958778 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 44038 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:29:30.963756 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:30.976453 systemd-logind[1422]: New session 1 of user core. Dec 13 01:29:30.977115 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:29:30.986779 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:29:30.995772 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:29:30.997912 systemd[1]: Starting user@500.service - User Manager for UID 500... 
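The kubelet exit above is the expected pre-bootstrap state: the unit starts, finds no /var/lib/kubelet/config.yaml, and fails. On a kubeadm-style node that file is generated by kubeadm init or kubeadm join rather than written by hand, so the crash loop resolves itself once bootstrap runs. For illustration only, a minimal hand-rolled KubeletConfiguration would look like this; the field values are assumptions, not read from this host (cgroupDriver matches the "CgroupDriver":"systemd" that shows up later in the container manager dump):

  # Sketch only: kubeadm normally writes this file during init/join.
  cat <<'EOF' >/var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  authentication:
    x509:
      clientCAFile: /etc/kubernetes/pki/ca.crt
  EOF
  systemctl restart kubelet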
Dec 13 01:29:31.003843 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:29:31.077522 systemd[1544]: Queued start job for default target default.target. Dec 13 01:29:31.091314 systemd[1544]: Created slice app.slice - User Application Slice. Dec 13 01:29:31.091362 systemd[1544]: Reached target paths.target - Paths. Dec 13 01:29:31.091374 systemd[1544]: Reached target timers.target - Timers. Dec 13 01:29:31.092523 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:29:31.101264 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:29:31.101310 systemd[1544]: Reached target sockets.target - Sockets. Dec 13 01:29:31.101321 systemd[1544]: Reached target basic.target - Basic System. Dec 13 01:29:31.101355 systemd[1544]: Reached target default.target - Main User Target. Dec 13 01:29:31.101379 systemd[1544]: Startup finished in 92ms. Dec 13 01:29:31.101703 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:29:31.102920 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:29:31.168605 systemd[1]: Started sshd@1-10.0.0.66:22-10.0.0.1:44050.service - OpenSSH per-connection server daemon (10.0.0.1:44050). Dec 13 01:29:31.200777 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 44050 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:29:31.201905 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:31.205401 systemd-logind[1422]: New session 2 of user core. Dec 13 01:29:31.218530 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:29:31.268498 sshd[1555]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:31.278625 systemd[1]: sshd@1-10.0.0.66:22-10.0.0.1:44050.service: Deactivated successfully. Dec 13 01:29:31.280680 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:29:31.282192 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:29:31.294721 systemd[1]: Started sshd@2-10.0.0.66:22-10.0.0.1:44052.service - OpenSSH per-connection server daemon (10.0.0.1:44052). Dec 13 01:29:31.295426 systemd-logind[1422]: Removed session 2. Dec 13 01:29:31.323706 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 44052 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:29:31.324802 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:31.328473 systemd-logind[1422]: New session 3 of user core. Dec 13 01:29:31.332536 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:29:31.378763 sshd[1562]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:31.387699 systemd[1]: sshd@2-10.0.0.66:22-10.0.0.1:44052.service: Deactivated successfully. Dec 13 01:29:31.390646 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:29:31.391846 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:29:31.392922 systemd[1]: Started sshd@3-10.0.0.66:22-10.0.0.1:44064.service - OpenSSH per-connection server daemon (10.0.0.1:44064). Dec 13 01:29:31.393561 systemd-logind[1422]: Removed session 3. 
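The block above is the per-user service manager (user@500.service) bringing up its own small target tree for user core in 92ms; each subsequent SSH login then lands in a session-N.scope beneath it. Standard systemd tooling shows the result, for example:

  loginctl user-status core                  # user@500.service plus its session scopes
  # from within a core session itself:
  systemctl --user list-units --type=target  # paths/timers/sockets/basic/default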
Dec 13 01:29:31.426000 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 44064 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:29:31.427126 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:31.430886 systemd-logind[1422]: New session 4 of user core. Dec 13 01:29:31.439558 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:29:31.490324 sshd[1569]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:31.503616 systemd[1]: sshd@3-10.0.0.66:22-10.0.0.1:44064.service: Deactivated successfully. Dec 13 01:29:31.505033 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:29:31.506317 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:29:31.507240 systemd[1]: Started sshd@4-10.0.0.66:22-10.0.0.1:44074.service - OpenSSH per-connection server daemon (10.0.0.1:44074). Dec 13 01:29:31.507915 systemd-logind[1422]: Removed session 4. Dec 13 01:29:31.539766 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 44074 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:29:31.540965 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:31.544307 systemd-logind[1422]: New session 5 of user core. Dec 13 01:29:31.557558 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:29:31.622960 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:29:31.623238 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:31.638176 sudo[1579]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:31.639739 sshd[1576]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:31.662607 systemd[1]: sshd@4-10.0.0.66:22-10.0.0.1:44074.service: Deactivated successfully. Dec 13 01:29:31.663884 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:29:31.665100 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:29:31.666461 systemd[1]: Started sshd@5-10.0.0.66:22-10.0.0.1:44076.service - OpenSSH per-connection server daemon (10.0.0.1:44076). Dec 13 01:29:31.667170 systemd-logind[1422]: Removed session 5. Dec 13 01:29:31.699247 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 44076 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:29:31.700291 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:31.703945 systemd-logind[1422]: New session 6 of user core. Dec 13 01:29:31.709594 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:29:31.757757 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:29:31.758019 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:31.760738 sudo[1588]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:31.764807 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:29:31.765248 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:31.798658 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:31.799639 auditctl[1591]: No rules Dec 13 01:29:31.800394 systemd[1]: audit-rules.service: Deactivated successfully. 
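The sudo sequence above removes the shipped audit rule files and restarts audit-rules, so auditctl reports "No rules" on the reload: augenrules merges whatever remains under /etc/audit/rules.d into the loaded set, and nothing remains. As a sketch, the equivalent manual cycle, with a hypothetical watch rule just to show the shape:

  auditctl -l                                            # "No rules" at this point
  echo '-w /etc/kubernetes/ -p wa -k kube-conf' \
      >/etc/audit/rules.d/90-kube.rules                  # hypothetical rule file
  augenrules --load                                      # rebuild and load the merged set
  auditctl -l                                            # now lists the watch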
Dec 13 01:29:31.800609 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:31.801985 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:31.823179 augenrules[1609]: No rules Dec 13 01:29:31.824239 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:31.827283 sudo[1587]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:31.829358 sshd[1584]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:31.838476 systemd[1]: sshd@5-10.0.0.66:22-10.0.0.1:44076.service: Deactivated successfully. Dec 13 01:29:31.839813 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:29:31.840328 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:29:31.841856 systemd[1]: Started sshd@6-10.0.0.66:22-10.0.0.1:44088.service - OpenSSH per-connection server daemon (10.0.0.1:44088). Dec 13 01:29:31.842548 systemd-logind[1422]: Removed session 6. Dec 13 01:29:31.873694 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 44088 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:29:31.874756 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:31.877748 systemd-logind[1422]: New session 7 of user core. Dec 13 01:29:31.886564 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:29:31.934810 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:29:31.935319 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:32.240672 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:29:32.240716 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:29:32.491049 dockerd[1638]: time="2024-12-13T01:29:32.490924248Z" level=info msg="Starting up" Dec 13 01:29:32.623224 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1670645935-merged.mount: Deactivated successfully. Dec 13 01:29:32.639221 dockerd[1638]: time="2024-12-13T01:29:32.639164001Z" level=info msg="Loading containers: start." Dec 13 01:29:32.724465 kernel: Initializing XFRM netlink socket Dec 13 01:29:32.788931 systemd-networkd[1382]: docker0: Link UP Dec 13 01:29:32.803581 dockerd[1638]: time="2024-12-13T01:29:32.803540895Z" level=info msg="Loading containers: done." Dec 13 01:29:32.816657 dockerd[1638]: time="2024-12-13T01:29:32.816449438Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:29:32.816657 dockerd[1638]: time="2024-12-13T01:29:32.816546669Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:29:32.816876 dockerd[1638]: time="2024-12-13T01:29:32.816856497Z" level=info msg="Daemon has completed initialization" Dec 13 01:29:32.846241 dockerd[1638]: time="2024-12-13T01:29:32.846106771Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:29:32.846359 systemd[1]: Started docker.service - Docker Application Container Engine. 
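dockerd's overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in this kernel, Docker falls back to the slower naive diff path when committing layers; running containers are unaffected, image builds may be slower. The active driver and that status can be confirmed with:

  docker info --format '{{.Driver}} {{.ServerVersion}}'  # overlay2 26.1.0 on this host
  docker info --format '{{json .DriverStatus}}'          # includes the native-diff status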
Dec 13 01:29:33.511342 containerd[1440]: time="2024-12-13T01:29:33.511112272Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:29:33.621257 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2243286299-merged.mount: Deactivated successfully. Dec 13 01:29:34.185185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820734238.mount: Deactivated successfully. Dec 13 01:29:35.764527 containerd[1440]: time="2024-12-13T01:29:35.764481435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:35.765000 containerd[1440]: time="2024-12-13T01:29:35.764967402Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Dec 13 01:29:35.766446 containerd[1440]: time="2024-12-13T01:29:35.765795833Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:35.769506 containerd[1440]: time="2024-12-13T01:29:35.769469836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:35.770781 containerd[1440]: time="2024-12-13T01:29:35.770746946Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.259593398s" Dec 13 01:29:35.770833 containerd[1440]: time="2024-12-13T01:29:35.770787710Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:29:35.788934 containerd[1440]: time="2024-12-13T01:29:35.788892759Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:29:37.803311 containerd[1440]: time="2024-12-13T01:29:37.803256557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:37.803752 containerd[1440]: time="2024-12-13T01:29:37.803724355Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Dec 13 01:29:37.804631 containerd[1440]: time="2024-12-13T01:29:37.804605559Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:37.807356 containerd[1440]: time="2024-12-13T01:29:37.807304790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:37.808861 containerd[1440]: time="2024-12-13T01:29:37.808822482Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 2.01988609s" Dec 13 01:29:37.808906 containerd[1440]: time="2024-12-13T01:29:37.808862416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 01:29:37.827211 containerd[1440]: time="2024-12-13T01:29:37.827177955Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:29:38.273637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:38.284583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:38.369563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:38.372947 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:38.411805 kubelet[1873]: E1213 01:29:38.411749 1873 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:38.416548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:38.416676 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:39.056505 containerd[1440]: time="2024-12-13T01:29:39.056456895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:39.057366 containerd[1440]: time="2024-12-13T01:29:39.057149938Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Dec 13 01:29:39.058048 containerd[1440]: time="2024-12-13T01:29:39.058009313Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:39.061694 containerd[1440]: time="2024-12-13T01:29:39.061655910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:39.063389 containerd[1440]: time="2024-12-13T01:29:39.063351352Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.236136294s" Dec 13 01:29:39.063448 containerd[1440]: time="2024-12-13T01:29:39.063389511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:29:39.082391 containerd[1440]: time="2024-12-13T01:29:39.082306604Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:29:40.093999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4041771210.mount: Deactivated successfully. 
Dec 13 01:29:40.416360 containerd[1440]: time="2024-12-13T01:29:40.415992697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:40.416993 containerd[1440]: time="2024-12-13T01:29:40.416834385Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Dec 13 01:29:40.417790 containerd[1440]: time="2024-12-13T01:29:40.417599805Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:40.419461 containerd[1440]: time="2024-12-13T01:29:40.419388421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:40.420371 containerd[1440]: time="2024-12-13T01:29:40.420312616Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.337966534s" Dec 13 01:29:40.420371 containerd[1440]: time="2024-12-13T01:29:40.420357248Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:29:40.438520 containerd[1440]: time="2024-12-13T01:29:40.438495419Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:29:40.994348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2753277782.mount: Deactivated successfully. 
Dec 13 01:29:41.530966 containerd[1440]: time="2024-12-13T01:29:41.529882551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:41.530966 containerd[1440]: time="2024-12-13T01:29:41.530311928Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Dec 13 01:29:41.531475 containerd[1440]: time="2024-12-13T01:29:41.531446883Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:41.534582 containerd[1440]: time="2024-12-13T01:29:41.534545995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:41.536340 containerd[1440]: time="2024-12-13T01:29:41.536311553Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.097693676s" Dec 13 01:29:41.536476 containerd[1440]: time="2024-12-13T01:29:41.536455699Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:29:41.555292 containerd[1440]: time="2024-12-13T01:29:41.555259789Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:29:42.020497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321102230.mount: Deactivated successfully. 
Dec 13 01:29:42.024831 containerd[1440]: time="2024-12-13T01:29:42.024792274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:42.025542 containerd[1440]: time="2024-12-13T01:29:42.025496240Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Dec 13 01:29:42.026399 containerd[1440]: time="2024-12-13T01:29:42.026083304Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:42.028432 containerd[1440]: time="2024-12-13T01:29:42.028386181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:42.029150 containerd[1440]: time="2024-12-13T01:29:42.029119165Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 473.659298ms" Dec 13 01:29:42.029214 containerd[1440]: time="2024-12-13T01:29:42.029152122Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:29:42.047676 containerd[1440]: time="2024-12-13T01:29:42.047459233Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:29:42.731383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2689716866.mount: Deactivated successfully. Dec 13 01:29:44.898532 containerd[1440]: time="2024-12-13T01:29:44.898481025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:44.899621 containerd[1440]: time="2024-12-13T01:29:44.899587530Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Dec 13 01:29:44.901476 containerd[1440]: time="2024-12-13T01:29:44.900759069Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:44.904646 containerd[1440]: time="2024-12-13T01:29:44.904591010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:44.905879 containerd[1440]: time="2024-12-13T01:29:44.905847549Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.858352284s" Dec 13 01:29:44.905917 containerd[1440]: time="2024-12-13T01:29:44.905879429Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 01:29:48.667956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
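With etcd 3.5.10-0 pulled, every image a v1.29 control plane needs (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd) now sits in containerd's k8s.io namespace, so the upcoming static pods can start without further network fetches. Two ways to confirm the cache:

  crictl images | grep registry.k8s.io   # CRI view of the pulled set
  ctr -n k8s.io images ls -q | sort      # containerd's view of the same store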
Dec 13 01:29:48.678569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:48.787372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:48.791634 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:48.828394 kubelet[2092]: E1213 01:29:48.828343 2092 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:48.831286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:48.831452 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:49.553095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:49.564669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:49.583642 systemd[1]: Reloading requested from client PID 2107 ('systemctl') (unit session-7.scope)... Dec 13 01:29:49.583667 systemd[1]: Reloading... Dec 13 01:29:49.647466 zram_generator::config[2149]: No configuration found. Dec 13 01:29:49.793160 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:49.846199 systemd[1]: Reloading finished in 262 ms. Dec 13 01:29:49.886084 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:29:49.886152 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:29:49.886367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:49.889690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:50.116237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:50.120325 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:29:50.170871 kubelet[2191]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:50.170871 kubelet[2191]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:29:50.170871 kubelet[2191]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
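This time the kubelet gets past config loading and only warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated flag-style settings. On kubeadm-managed nodes such flags usually arrive via a systemd drop-in rather than the unit file itself; a sketch of the conventional drop-in follows (values are the customary kubeadm ones, assumed rather than read from this host):

  cat <<'EOF' >/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  [Service]
  Environment="KUBELET_KUBEADM_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9"
  EOF
  systemctl daemon-reload
  systemctl restart kubelet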
Dec 13 01:29:50.171234 kubelet[2191]: I1213 01:29:50.170915 2191 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:29:51.214830 kubelet[2191]: I1213 01:29:51.213546 2191 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:29:51.214830 kubelet[2191]: I1213 01:29:51.213576 2191 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:29:51.214830 kubelet[2191]: I1213 01:29:51.213775 2191 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:29:51.231395 kubelet[2191]: I1213 01:29:51.231284 2191 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:29:51.232925 kubelet[2191]: E1213 01:29:51.232890 2191 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.66:6443: connect: connection refused Dec 13 01:29:51.244566 kubelet[2191]: I1213 01:29:51.244543 2191 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:29:51.245557 kubelet[2191]: I1213 01:29:51.245523 2191 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:29:51.245755 kubelet[2191]: I1213 01:29:51.245728 2191 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:29:51.245755 kubelet[2191]: I1213 01:29:51.245753 2191 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:29:51.245860 kubelet[2191]: I1213 01:29:51.245762 2191 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:29:51.246917 kubelet[2191]: I1213 01:29:51.246833 2191 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:51.250639 kubelet[2191]: I1213 01:29:51.250579 2191 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:29:51.250639 kubelet[2191]: 
I1213 01:29:51.250607 2191 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:29:51.250639 kubelet[2191]: I1213 01:29:51.250627 2191 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:29:51.250639 kubelet[2191]: I1213 01:29:51.250644 2191 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:29:51.254087 kubelet[2191]: W1213 01:29:51.253279 2191 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Dec 13 01:29:51.254087 kubelet[2191]: E1213 01:29:51.253357 2191 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Dec 13 01:29:51.254087 kubelet[2191]: I1213 01:29:51.253477 2191 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:29:51.254087 kubelet[2191]: W1213 01:29:51.253492 2191 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Dec 13 01:29:51.254087 kubelet[2191]: E1213 01:29:51.253533 2191 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Dec 13 01:29:51.254087 kubelet[2191]: I1213 01:29:51.253945 2191 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:29:51.255489 kubelet[2191]: W1213 01:29:51.254473 2191 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
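All of the reflector errors above share one cause: nothing is listening on 10.0.0.66:6443 yet, because the API server this kubelet talks to is itself one of the static pods the kubelet is about to start. The retries are expected bootstrap noise rather than a fault, and the state is easy to probe while it lasts:

  ss -tlnp | grep 6443 || echo 'no listener yet'   # empty until the apiserver pod is up
  curl -sk https://10.0.0.66:6443/healthz || true  # "connection refused" for now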
Dec 13 01:29:51.258033 kubelet[2191]: I1213 01:29:51.258004 2191 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:29:51.261114 kubelet[2191]: I1213 01:29:51.261079 2191 server.go:1256] "Started kubelet" Dec 13 01:29:51.261431 kubelet[2191]: I1213 01:29:51.261394 2191 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:29:51.262426 kubelet[2191]: I1213 01:29:51.262344 2191 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:29:51.264595 kubelet[2191]: I1213 01:29:51.263463 2191 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:29:51.264595 kubelet[2191]: E1213 01:29:51.264173 2191 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:29:51.264595 kubelet[2191]: I1213 01:29:51.264205 2191 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:29:51.264595 kubelet[2191]: I1213 01:29:51.264223 2191 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:29:51.264595 kubelet[2191]: I1213 01:29:51.264365 2191 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:29:51.264595 kubelet[2191]: I1213 01:29:51.264460 2191 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:29:51.264595 kubelet[2191]: E1213 01:29:51.264497 2191 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.66:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.66:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810985f377cf3cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:29:51.257891791 +0000 UTC m=+1.133967204,LastTimestamp:2024-12-13 01:29:51.257891791 +0000 UTC m=+1.133967204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:29:51.264860 kubelet[2191]: W1213 01:29:51.264813 2191 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Dec 13 01:29:51.264896 kubelet[2191]: E1213 01:29:51.264866 2191 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Dec 13 01:29:51.265651 kubelet[2191]: E1213 01:29:51.265444 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="200ms" Dec 13 01:29:51.265852 kubelet[2191]: E1213 01:29:51.265828 2191 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:29:51.266377 kubelet[2191]: I1213 01:29:51.266349 2191 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:29:51.266468 kubelet[2191]: I1213 01:29:51.266446 2191 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:29:51.267390 kubelet[2191]: I1213 01:29:51.267369 2191 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:29:51.278140 kubelet[2191]: I1213 01:29:51.278099 2191 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:29:51.279182 kubelet[2191]: I1213 01:29:51.279151 2191 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:29:51.279182 kubelet[2191]: I1213 01:29:51.279172 2191 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:29:51.279182 kubelet[2191]: I1213 01:29:51.279187 2191 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:29:51.279300 kubelet[2191]: E1213 01:29:51.279232 2191 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:29:51.282483 kubelet[2191]: W1213 01:29:51.282385 2191 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Dec 13 01:29:51.282483 kubelet[2191]: E1213 01:29:51.282455 2191 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Dec 13 01:29:51.282927 kubelet[2191]: I1213 01:29:51.282900 2191 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:29:51.282927 kubelet[2191]: I1213 01:29:51.282917 2191 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:29:51.283007 kubelet[2191]: I1213 01:29:51.282934 2191 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:51.304319 kubelet[2191]: I1213 01:29:51.304283 2191 policy_none.go:49] "None policy: Start" Dec 13 01:29:51.305070 kubelet[2191]: I1213 01:29:51.305053 2191 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:29:51.305499 kubelet[2191]: I1213 01:29:51.305176 2191 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:29:51.311633 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:29:51.321762 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:29:51.324430 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
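Here the kubelet has laid out its cgroup hierarchy: kubepods.slice with burstable and besteffort children, managed through systemd as the "CgroupDriver":"systemd" setting in the container manager dump dictates. The tree can be inspected directly:

  systemctl status kubepods.slice --no-pager
  systemd-cgls /kubepods.slice     # burstable/besteffort children, per-pod slices below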
Dec 13 01:29:51.336382 kubelet[2191]: I1213 01:29:51.336207 2191 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:29:51.336524 kubelet[2191]: I1213 01:29:51.336476 2191 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:29:51.337577 kubelet[2191]: E1213 01:29:51.337503 2191 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:29:51.365852 kubelet[2191]: I1213 01:29:51.365826 2191 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:29:51.368002 kubelet[2191]: E1213 01:29:51.367984 2191 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" Dec 13 01:29:51.380289 kubelet[2191]: I1213 01:29:51.380092 2191 topology_manager.go:215] "Topology Admit Handler" podUID="8e4e8ff7115a5f15d40b054d456067ed" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:29:51.381103 kubelet[2191]: I1213 01:29:51.381032 2191 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:29:51.382084 kubelet[2191]: I1213 01:29:51.382062 2191 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:29:51.387062 systemd[1]: Created slice kubepods-burstable-pod8e4e8ff7115a5f15d40b054d456067ed.slice - libcontainer container kubepods-burstable-pod8e4e8ff7115a5f15d40b054d456067ed.slice. Dec 13 01:29:51.404669 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 01:29:51.420607 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
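The three "Topology Admit Handler" lines are the kubelet picking up control-plane static pods from the /etc/kubernetes/manifests path it registered earlier; each pod UID then gets its own kubepods-burstable-pod<uid>.slice, as the Created slice lines show. On a kubeadm control plane the manifest directory would typically hold these files (etcd.yaml is usually present as well, though no etcd admit appears in this excerpt):

  ls /etc/kubernetes/manifests
  # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml  (etcd.yaml)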
Dec 13 01:29:51.466057 kubelet[2191]: E1213 01:29:51.465972 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="400ms"
Dec 13 01:29:51.566502 kubelet[2191]: I1213 01:29:51.566462 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e4e8ff7115a5f15d40b054d456067ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e4e8ff7115a5f15d40b054d456067ed\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:29:51.566694 kubelet[2191]: I1213 01:29:51.566541 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e4e8ff7115a5f15d40b054d456067ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8e4e8ff7115a5f15d40b054d456067ed\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:29:51.566694 kubelet[2191]: I1213 01:29:51.566617 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:29:51.566694 kubelet[2191]: I1213 01:29:51.566653 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:29:51.566796 kubelet[2191]: I1213 01:29:51.566699 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:29:51.566796 kubelet[2191]: I1213 01:29:51.566721 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e4e8ff7115a5f15d40b054d456067ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e4e8ff7115a5f15d40b054d456067ed\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:29:51.566796 kubelet[2191]: I1213 01:29:51.566739 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:29:51.566796 kubelet[2191]: I1213 01:29:51.566757 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:29:51.566796 kubelet[2191]: I1213 01:29:51.566777 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:29:51.569438 kubelet[2191]: I1213 01:29:51.569385 2191 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:29:51.569764 kubelet[2191]: E1213 01:29:51.569744 2191 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost"
Dec 13 01:29:51.704349 kubelet[2191]: E1213 01:29:51.704294 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:51.706860 containerd[1440]: time="2024-12-13T01:29:51.706709476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8e4e8ff7115a5f15d40b054d456067ed,Namespace:kube-system,Attempt:0,}"
Dec 13 01:29:51.717037 kubelet[2191]: E1213 01:29:51.716915 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:51.717307 containerd[1440]: time="2024-12-13T01:29:51.717256044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}"
Dec 13 01:29:51.723946 kubelet[2191]: E1213 01:29:51.723916 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:51.725075 containerd[1440]: time="2024-12-13T01:29:51.725030071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}"
Dec 13 01:29:51.867334 kubelet[2191]: E1213 01:29:51.867281 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="800ms"
Dec 13 01:29:51.970857 kubelet[2191]: I1213 01:29:51.970729 2191 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:29:51.971123 kubelet[2191]: E1213 01:29:51.971101 2191 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost"
Dec 13 01:29:52.304346 kubelet[2191]: W1213 01:29:52.304189 2191 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Dec 13 01:29:52.304346 kubelet[2191]: E1213 01:29:52.304252 2191 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Dec 13 01:29:52.386074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2758308530.mount: Deactivated successfully.
Dec 13 01:29:52.389539 containerd[1440]: time="2024-12-13T01:29:52.388600697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:29:52.390381 containerd[1440]: time="2024-12-13T01:29:52.390346519Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:29:52.391933 containerd[1440]: time="2024-12-13T01:29:52.391909178Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Dec 13 01:29:52.392702 containerd[1440]: time="2024-12-13T01:29:52.392643748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:29:52.393376 containerd[1440]: time="2024-12-13T01:29:52.393340887Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:29:52.394590 containerd[1440]: time="2024-12-13T01:29:52.394566541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:29:52.397822 containerd[1440]: time="2024-12-13T01:29:52.397781823Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 690.999214ms"
Dec 13 01:29:52.398399 containerd[1440]: time="2024-12-13T01:29:52.398300872Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:29:52.399556 containerd[1440]: time="2024-12-13T01:29:52.399518497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 674.431668ms"
Dec 13 01:29:52.400959 kubelet[2191]: W1213 01:29:52.400909 2191 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Dec 13 01:29:52.401056 kubelet[2191]: E1213 01:29:52.401045 2191 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Dec 13 01:29:52.401124 containerd[1440]: time="2024-12-13T01:29:52.401084472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:29:52.404941 containerd[1440]: time="2024-12-13T01:29:52.404904411Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 687.573597ms"
Dec 13 01:29:52.472622 kubelet[2191]: W1213 01:29:52.472545 2191 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Dec 13 01:29:52.472622 kubelet[2191]: E1213 01:29:52.472621 2191 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Dec 13 01:29:52.537872 containerd[1440]: time="2024-12-13T01:29:52.537776404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:29:52.538034 containerd[1440]: time="2024-12-13T01:29:52.537888819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:29:52.538034 containerd[1440]: time="2024-12-13T01:29:52.537923614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:29:52.538104 containerd[1440]: time="2024-12-13T01:29:52.538037626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:29:52.542757 containerd[1440]: time="2024-12-13T01:29:52.542634042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:29:52.542844 containerd[1440]: time="2024-12-13T01:29:52.542745817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:29:52.542844 containerd[1440]: time="2024-12-13T01:29:52.542764113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:29:52.542915 containerd[1440]: time="2024-12-13T01:29:52.542886076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:29:52.543560 containerd[1440]: time="2024-12-13T01:29:52.543446710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:29:52.543560 containerd[1440]: time="2024-12-13T01:29:52.543503078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:29:52.543560 containerd[1440]: time="2024-12-13T01:29:52.543516101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:29:52.543787 containerd[1440]: time="2024-12-13T01:29:52.543597475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:29:52.546266 kubelet[2191]: W1213 01:29:52.546202 2191 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Dec 13 01:29:52.546432 kubelet[2191]: E1213 01:29:52.546389 2191 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Dec 13 01:29:52.562593 systemd[1]: Started cri-containerd-1f10ff084fc877abb9be8132003fad355f9e9090ea9475dac75f7b3025ff827b.scope - libcontainer container 1f10ff084fc877abb9be8132003fad355f9e9090ea9475dac75f7b3025ff827b.
Dec 13 01:29:52.568182 systemd[1]: Started cri-containerd-98cf34cd6690136c48d313eb596e52e7e27b4bfd07de74b93db85fef47224c2e.scope - libcontainer container 98cf34cd6690136c48d313eb596e52e7e27b4bfd07de74b93db85fef47224c2e.
Dec 13 01:29:52.569681 systemd[1]: Started cri-containerd-ed5e56952e14a156fa3fa7d9354cc4bf8f7e27416fd26d8b06b00e2574b5d507.scope - libcontainer container ed5e56952e14a156fa3fa7d9354cc4bf8f7e27416fd26d8b06b00e2574b5d507.
Dec 13 01:29:52.604562 containerd[1440]: time="2024-12-13T01:29:52.604439067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8e4e8ff7115a5f15d40b054d456067ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed5e56952e14a156fa3fa7d9354cc4bf8f7e27416fd26d8b06b00e2574b5d507\""
Dec 13 01:29:52.606720 kubelet[2191]: E1213 01:29:52.606686 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:52.609255 containerd[1440]: time="2024-12-13T01:29:52.609218366Z" level=info msg="CreateContainer within sandbox \"ed5e56952e14a156fa3fa7d9354cc4bf8f7e27416fd26d8b06b00e2574b5d507\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:29:52.610204 containerd[1440]: time="2024-12-13T01:29:52.610159669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f10ff084fc877abb9be8132003fad355f9e9090ea9475dac75f7b3025ff827b\""
Dec 13 01:29:52.610897 kubelet[2191]: E1213 01:29:52.610876 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:52.612252 containerd[1440]: time="2024-12-13T01:29:52.612062408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"98cf34cd6690136c48d313eb596e52e7e27b4bfd07de74b93db85fef47224c2e\""
Dec 13 01:29:52.612640 kubelet[2191]: E1213 01:29:52.612622 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:52.614389 containerd[1440]: time="2024-12-13T01:29:52.614278821Z" level=info msg="CreateContainer within sandbox \"1f10ff084fc877abb9be8132003fad355f9e9090ea9475dac75f7b3025ff827b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:29:52.616267 containerd[1440]: time="2024-12-13T01:29:52.616238527Z" level=info msg="CreateContainer within sandbox \"98cf34cd6690136c48d313eb596e52e7e27b4bfd07de74b93db85fef47224c2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:29:52.625980 containerd[1440]: time="2024-12-13T01:29:52.625941937Z" level=info msg="CreateContainer within sandbox \"ed5e56952e14a156fa3fa7d9354cc4bf8f7e27416fd26d8b06b00e2574b5d507\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"33ad5be27da0bd68bcade0e6dd85aadd145834a65c1547d11596598c88b3d1a4\""
Dec 13 01:29:52.627505 containerd[1440]: time="2024-12-13T01:29:52.626633762Z" level=info msg="StartContainer for \"33ad5be27da0bd68bcade0e6dd85aadd145834a65c1547d11596598c88b3d1a4\""
Dec 13 01:29:52.630731 containerd[1440]: time="2024-12-13T01:29:52.630697786Z" level=info msg="CreateContainer within sandbox \"1f10ff084fc877abb9be8132003fad355f9e9090ea9475dac75f7b3025ff827b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"43e828df265b70895f926701ffa597a04cb03a4cc9d09eef4dca7ae02f6a9401\""
Dec 13 01:29:52.631301 containerd[1440]: time="2024-12-13T01:29:52.631273601Z" level=info msg="StartContainer for \"43e828df265b70895f926701ffa597a04cb03a4cc9d09eef4dca7ae02f6a9401\""
Dec 13 01:29:52.635310 containerd[1440]: time="2024-12-13T01:29:52.635258967Z" level=info msg="CreateContainer within sandbox \"98cf34cd6690136c48d313eb596e52e7e27b4bfd07de74b93db85fef47224c2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b242a6894fe252a55d19fbc69122e5b131fea31a47daf308f1a237531cdbb020\""
Dec 13 01:29:52.635687 containerd[1440]: time="2024-12-13T01:29:52.635659489Z" level=info msg="StartContainer for \"b242a6894fe252a55d19fbc69122e5b131fea31a47daf308f1a237531cdbb020\""
Dec 13 01:29:52.655377 systemd[1]: Started cri-containerd-33ad5be27da0bd68bcade0e6dd85aadd145834a65c1547d11596598c88b3d1a4.scope - libcontainer container 33ad5be27da0bd68bcade0e6dd85aadd145834a65c1547d11596598c88b3d1a4.
Dec 13 01:29:52.665639 systemd[1]: Started cri-containerd-43e828df265b70895f926701ffa597a04cb03a4cc9d09eef4dca7ae02f6a9401.scope - libcontainer container 43e828df265b70895f926701ffa597a04cb03a4cc9d09eef4dca7ae02f6a9401.
Dec 13 01:29:52.667301 systemd[1]: Started cri-containerd-b242a6894fe252a55d19fbc69122e5b131fea31a47daf308f1a237531cdbb020.scope - libcontainer container b242a6894fe252a55d19fbc69122e5b131fea31a47daf308f1a237531cdbb020.
Dec 13 01:29:52.668118 kubelet[2191]: E1213 01:29:52.667723 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="1.6s"
Dec 13 01:29:52.694499 containerd[1440]: time="2024-12-13T01:29:52.694439507Z" level=info msg="StartContainer for \"33ad5be27da0bd68bcade0e6dd85aadd145834a65c1547d11596598c88b3d1a4\" returns successfully"
Dec 13 01:29:52.713199 containerd[1440]: time="2024-12-13T01:29:52.712879219Z" level=info msg="StartContainer for \"43e828df265b70895f926701ffa597a04cb03a4cc9d09eef4dca7ae02f6a9401\" returns successfully"
Dec 13 01:29:52.739751 containerd[1440]: time="2024-12-13T01:29:52.736742555Z" level=info msg="StartContainer for \"b242a6894fe252a55d19fbc69122e5b131fea31a47daf308f1a237531cdbb020\" returns successfully"
Dec 13 01:29:52.772349 kubelet[2191]: I1213 01:29:52.772292 2191 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:29:52.775958 kubelet[2191]: E1213 01:29:52.775929 2191 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost"
Dec 13 01:29:53.287648 kubelet[2191]: E1213 01:29:53.287616 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:53.293463 kubelet[2191]: E1213 01:29:53.293434 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:53.293566 kubelet[2191]: E1213 01:29:53.293493 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:54.273683 kubelet[2191]: E1213 01:29:54.273643 2191 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Dec 13 01:29:54.274917 kubelet[2191]: E1213 01:29:54.274886 2191 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 13 01:29:54.293434 kubelet[2191]: E1213 01:29:54.293395 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:54.377450 kubelet[2191]: I1213 01:29:54.377398 2191 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:29:54.387288 kubelet[2191]: I1213 01:29:54.387257 2191 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:29:54.393167 kubelet[2191]: E1213 01:29:54.393141 2191 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:29:54.493692 kubelet[2191]: E1213 01:29:54.493660 2191 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:29:54.594255 kubelet[2191]: E1213 01:29:54.594148 2191 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:29:54.695295 kubelet[2191]: E1213 01:29:54.695260 2191 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:29:54.795987 kubelet[2191]: E1213 01:29:54.795939 2191 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:29:54.896522 kubelet[2191]: E1213 01:29:54.896422 2191 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:29:54.997055 kubelet[2191]: E1213 01:29:54.997021 2191 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:29:55.051162 kubelet[2191]: E1213 01:29:55.051134 2191 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:55.097451 kubelet[2191]: E1213 01:29:55.097390 2191 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:29:55.198148 kubelet[2191]: E1213 01:29:55.198010 2191 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:29:55.256393 kubelet[2191]: I1213 01:29:55.256352 2191 apiserver.go:52] "Watching apiserver"
Dec 13 01:29:55.264773 kubelet[2191]: I1213 01:29:55.264750 2191 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:29:56.751659 systemd[1]: Reloading requested from client PID 2469 ('systemctl') (unit session-7.scope)...
Dec 13 01:29:56.751678 systemd[1]: Reloading...
Dec 13 01:29:56.829479 zram_generator::config[2511]: No configuration found.
Dec 13 01:29:56.914432 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:29:56.979173 systemd[1]: Reloading finished in 227 ms.
Dec 13 01:29:57.013324 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:29:57.022383 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:29:57.023516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:29:57.023680 systemd[1]: kubelet.service: Consumed 1.441s CPU time, 117.9M memory peak, 0B memory swap peak.
Dec 13 01:29:57.035982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:29:57.127117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:29:57.131061 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:29:57.172008 kubelet[2550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:29:57.172008 kubelet[2550]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:29:57.172008 kubelet[2550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:29:57.172008 kubelet[2550]: I1213 01:29:57.171589 2550 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:29:57.175771 kubelet[2550]: I1213 01:29:57.175710 2550 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:29:57.175771 kubelet[2550]: I1213 01:29:57.175741 2550 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:29:57.176175 kubelet[2550]: I1213 01:29:57.175895 2550 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:29:57.177369 kubelet[2550]: I1213 01:29:57.177343 2550 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 01:29:57.179556 kubelet[2550]: I1213 01:29:57.179248 2550 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:29:57.186862 kubelet[2550]: I1213 01:29:57.186842 2550 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:29:57.187055 kubelet[2550]: I1213 01:29:57.187042 2550 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:29:57.187215 kubelet[2550]: I1213 01:29:57.187202 2550 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:29:57.187295 kubelet[2550]: I1213 01:29:57.187218 2550 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:29:57.187295 kubelet[2550]: I1213 01:29:57.187227 2550 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:29:57.187295 kubelet[2550]: I1213 01:29:57.187256 2550 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:29:57.187373 kubelet[2550]: I1213 01:29:57.187346 2550 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:29:57.187373 kubelet[2550]: I1213 01:29:57.187359 2550 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:29:57.187425 kubelet[2550]: I1213 01:29:57.187382 2550 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:29:57.187425 kubelet[2550]: I1213 01:29:57.187396 2550 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:29:57.190433 kubelet[2550]: I1213 01:29:57.190238 2550 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:29:57.191099 kubelet[2550]: I1213 01:29:57.191077 2550 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:29:57.192430 kubelet[2550]: I1213 01:29:57.192359 2550 server.go:1256] "Started kubelet"
Dec 13 01:29:57.192589 kubelet[2550]: I1213 01:29:57.192465 2550 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:29:57.193219 kubelet[2550]: I1213 01:29:57.193169 2550 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:29:57.193488 kubelet[2550]: I1213 01:29:57.193453 2550 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:29:57.194122 kubelet[2550]: I1213 01:29:57.194105 2550 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:29:57.197823 kubelet[2550]: I1213 01:29:57.197705 2550 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:29:57.199514 kubelet[2550]: E1213 01:29:57.199488 2550 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:29:57.200284 kubelet[2550]: I1213 01:29:57.199533 2550 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:29:57.200284 kubelet[2550]: I1213 01:29:57.199628 2550 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:29:57.200284 kubelet[2550]: I1213 01:29:57.199751 2550 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:29:57.205417 kubelet[2550]: E1213 01:29:57.203548 2550 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:29:57.205980 kubelet[2550]: I1213 01:29:57.205943 2550 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:29:57.206251 kubelet[2550]: I1213 01:29:57.206230 2550 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:29:57.209254 kubelet[2550]: I1213 01:29:57.209237 2550 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:29:57.211845 kubelet[2550]: I1213 01:29:57.211795 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:29:57.217959 sudo[2574]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 01:29:57.218260 sudo[2574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 13 01:29:57.221437 kubelet[2550]: I1213 01:29:57.221175 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:29:57.221437 kubelet[2550]: I1213 01:29:57.221199 2550 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:29:57.221437 kubelet[2550]: I1213 01:29:57.221215 2550 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:29:57.222898 kubelet[2550]: E1213 01:29:57.221258 2550 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:29:57.256080 kubelet[2550]: I1213 01:29:57.256020 2550 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:29:57.256080 kubelet[2550]: I1213 01:29:57.256064 2550 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:29:57.256080 kubelet[2550]: I1213 01:29:57.256085 2550 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:29:57.256234 kubelet[2550]: I1213 01:29:57.256224 2550 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 01:29:57.256258 kubelet[2550]: I1213 01:29:57.256243 2550 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 01:29:57.256258 kubelet[2550]: I1213 01:29:57.256250 2550 policy_none.go:49] "None policy: Start"
Dec 13 01:29:57.256727 kubelet[2550]: I1213 01:29:57.256708 2550 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:29:57.256727 kubelet[2550]: I1213 01:29:57.256730 2550 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:29:57.256896 kubelet[2550]: I1213 01:29:57.256851 2550 state_mem.go:75] "Updated machine memory state"
Dec 13 01:29:57.261992 kubelet[2550]: I1213 01:29:57.261969 2550 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:29:57.262186 kubelet[2550]: I1213 01:29:57.262168 2550 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:29:57.303280 kubelet[2550]: I1213 01:29:57.303186 2550 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:29:57.312119 kubelet[2550]: I1213 01:29:57.312036 2550 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Dec 13 01:29:57.312756 kubelet[2550]: I1213 01:29:57.312732 2550 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:29:57.324572 kubelet[2550]: I1213 01:29:57.323445 2550 topology_manager.go:215] "Topology Admit Handler" podUID="8e4e8ff7115a5f15d40b054d456067ed" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 01:29:57.324572 kubelet[2550]: I1213 01:29:57.323532 2550 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 01:29:57.324572 kubelet[2550]: I1213 01:29:57.323583 2550 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 01:29:57.501296 kubelet[2550]: I1213 01:29:57.501087 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:29:57.501296 kubelet[2550]: I1213 01:29:57.501173 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:29:57.501296 kubelet[2550]: I1213 01:29:57.501196 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:29:57.501296 kubelet[2550]: I1213 01:29:57.501238 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:29:57.501296 kubelet[2550]: I1213 01:29:57.501262 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:29:57.501716 kubelet[2550]: I1213 01:29:57.501572 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e4e8ff7115a5f15d40b054d456067ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e4e8ff7115a5f15d40b054d456067ed\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:29:57.501716 kubelet[2550]: I1213 01:29:57.501619 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e4e8ff7115a5f15d40b054d456067ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e4e8ff7115a5f15d40b054d456067ed\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:29:57.501716 kubelet[2550]: I1213 01:29:57.501639 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:29:57.501716 kubelet[2550]: I1213 01:29:57.501695 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e4e8ff7115a5f15d40b054d456067ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8e4e8ff7115a5f15d40b054d456067ed\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:29:57.640915 kubelet[2550]: E1213 01:29:57.640828 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:57.640915 kubelet[2550]: E1213 01:29:57.640868 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:57.641173 kubelet[2550]: E1213 01:29:57.641142 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:57.694849 sudo[2574]: pam_unix(sudo:session): session closed for user root
Dec 13 01:29:58.188436 kubelet[2550]: I1213 01:29:58.188387 2550 apiserver.go:52] "Watching apiserver"
Dec 13 01:29:58.200129 kubelet[2550]: I1213 01:29:58.200095 2550 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:29:58.236807 kubelet[2550]: E1213 01:29:58.236177 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:58.237722 kubelet[2550]: E1213 01:29:58.237704 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:58.243421 kubelet[2550]: E1213 01:29:58.243347 2550 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 13 01:29:58.245338 kubelet[2550]: E1213 01:29:58.245317 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:58.276312 kubelet[2550]: I1213 01:29:58.276204 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.276149675 podStartE2EDuration="1.276149675s" podCreationTimestamp="2024-12-13 01:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:58.276130551 +0000 UTC m=+1.139641579" watchObservedRunningTime="2024-12-13 01:29:58.276149675 +0000 UTC m=+1.139660743"
Dec 13 01:29:58.276312 kubelet[2550]: I1213 01:29:58.276303 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.276285703 podStartE2EDuration="1.276285703s" podCreationTimestamp="2024-12-13 01:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:58.267814683 +0000 UTC m=+1.131325791" watchObservedRunningTime="2024-12-13 01:29:58.276285703 +0000 UTC m=+1.139796771"
Dec 13 01:29:58.283446 kubelet[2550]: I1213 01:29:58.283401 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2833719129999999 podStartE2EDuration="1.283371913s" podCreationTimestamp="2024-12-13 01:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:58.282728377 +0000 UTC m=+1.146239486" watchObservedRunningTime="2024-12-13 01:29:58.283371913 +0000 UTC m=+1.146882981"
Dec 13 01:29:59.237738 kubelet[2550]: E1213 01:29:59.237707 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:59.237738 kubelet[2550]: E1213 01:29:59.237773 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:29:59.396937 sudo[1620]: pam_unix(sudo:session): session closed for user root
Dec 13 01:29:59.399119 sshd[1617]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:59.402258 systemd[1]: sshd@6-10.0.0.66:22-10.0.0.1:44088.service: Deactivated successfully.
Dec 13 01:29:59.404006 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:29:59.404177 systemd[1]: session-7.scope: Consumed 7.108s CPU time, 185.7M memory peak, 0B memory swap peak.
Dec 13 01:29:59.405502 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:29:59.406826 systemd-logind[1422]: Removed session 7.
Dec 13 01:30:00.239805 kubelet[2550]: E1213 01:30:00.239755 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:30:07.449230 kubelet[2550]: E1213 01:30:07.448560 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:30:08.249712 kubelet[2550]: E1213 01:30:08.249682 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:30:09.119497 kubelet[2550]: E1213 01:30:09.119455 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:30:10.241741 kubelet[2550]: E1213 01:30:10.241368 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:30:10.507194 kubelet[2550]: I1213 01:30:10.507092 2550 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 01:30:10.508660 containerd[1440]: time="2024-12-13T01:30:10.508622663Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:30:10.509605 kubelet[2550]: I1213 01:30:10.509082 2550 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 01:30:10.522843 kubelet[2550]: I1213 01:30:10.522034 2550 topology_manager.go:215] "Topology Admit Handler" podUID="275b4b52-73eb-406e-929b-b87efb590241" podNamespace="kube-system" podName="kube-proxy-f27zc"
Dec 13 01:30:10.539721 systemd[1]: Created slice kubepods-besteffort-pod275b4b52_73eb_406e_929b_b87efb590241.slice - libcontainer container kubepods-besteffort-pod275b4b52_73eb_406e_929b_b87efb590241.slice.
Dec 13 01:30:10.544168 kubelet[2550]: I1213 01:30:10.544122 2550 topology_manager.go:215] "Topology Admit Handler" podUID="eb6672be-00ec-4007-9f40-aeadb88f6836" podNamespace="kube-system" podName="cilium-2wjw9"
Dec 13 01:30:10.558332 systemd[1]: Created slice kubepods-burstable-podeb6672be_00ec_4007_9f40_aeadb88f6836.slice - libcontainer container kubepods-burstable-podeb6672be_00ec_4007_9f40_aeadb88f6836.slice.
Dec 13 01:30:10.598354 kubelet[2550]: I1213 01:30:10.596524 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-bpf-maps\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598354 kubelet[2550]: I1213 01:30:10.596576 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-cgroup\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598354 kubelet[2550]: I1213 01:30:10.596609 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-xtables-lock\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598354 kubelet[2550]: I1213 01:30:10.596631 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-config-path\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598354 kubelet[2550]: I1213 01:30:10.596654 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-host-proc-sys-net\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598354 kubelet[2550]: I1213 01:30:10.596685 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-host-proc-sys-kernel\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598622 kubelet[2550]: I1213 01:30:10.596707 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/275b4b52-73eb-406e-929b-b87efb590241-kube-proxy\") pod \"kube-proxy-f27zc\" (UID: \"275b4b52-73eb-406e-929b-b87efb590241\") " pod="kube-system/kube-proxy-f27zc" Dec 13 01:30:10.598622 kubelet[2550]: I1213 01:30:10.596756 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/275b4b52-73eb-406e-929b-b87efb590241-xtables-lock\") pod \"kube-proxy-f27zc\" (UID: \"275b4b52-73eb-406e-929b-b87efb590241\") " pod="kube-system/kube-proxy-f27zc" Dec 13 01:30:10.598622 kubelet[2550]: I1213 01:30:10.596807 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/275b4b52-73eb-406e-929b-b87efb590241-lib-modules\") pod \"kube-proxy-f27zc\" (UID: \"275b4b52-73eb-406e-929b-b87efb590241\") " pod="kube-system/kube-proxy-f27zc" Dec 13 01:30:10.598622 kubelet[2550]: I1213 01:30:10.596865 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5g95f\" (UniqueName: \"kubernetes.io/projected/275b4b52-73eb-406e-929b-b87efb590241-kube-api-access-5g95f\") pod \"kube-proxy-f27zc\" (UID: \"275b4b52-73eb-406e-929b-b87efb590241\") " pod="kube-system/kube-proxy-f27zc" Dec 13 01:30:10.598622 kubelet[2550]: I1213 01:30:10.596899 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ntwt\" (UniqueName: \"kubernetes.io/projected/eb6672be-00ec-4007-9f40-aeadb88f6836-kube-api-access-5ntwt\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598729 kubelet[2550]: I1213 01:30:10.596949 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-hostproc\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598729 kubelet[2550]: I1213 01:30:10.596981 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-etc-cni-netd\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598729 kubelet[2550]: I1213 01:30:10.597004 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb6672be-00ec-4007-9f40-aeadb88f6836-clustermesh-secrets\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598729 kubelet[2550]: I1213 01:30:10.597022 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cni-path\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598729 kubelet[2550]: I1213 01:30:10.597056 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-lib-modules\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598729 kubelet[2550]: I1213 01:30:10.597073 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb6672be-00ec-4007-9f40-aeadb88f6836-hubble-tls\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.598851 kubelet[2550]: I1213 01:30:10.597114 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-run\") pod \"cilium-2wjw9\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " pod="kube-system/cilium-2wjw9" Dec 13 01:30:10.651453 kubelet[2550]: I1213 01:30:10.651393 2550 topology_manager.go:215] "Topology Admit Handler" podUID="5006f994-9115-4f8a-b832-3f266baa2c01" podNamespace="kube-system" podName="cilium-operator-5cc964979-cqvs4" Dec 13 01:30:10.660944 systemd[1]: Created slice kubepods-besteffort-pod5006f994_9115_4f8a_b832_3f266baa2c01.slice - 
libcontainer container kubepods-besteffort-pod5006f994_9115_4f8a_b832_3f266baa2c01.slice. Dec 13 01:30:10.698074 kubelet[2550]: I1213 01:30:10.697846 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxkk5\" (UniqueName: \"kubernetes.io/projected/5006f994-9115-4f8a-b832-3f266baa2c01-kube-api-access-pxkk5\") pod \"cilium-operator-5cc964979-cqvs4\" (UID: \"5006f994-9115-4f8a-b832-3f266baa2c01\") " pod="kube-system/cilium-operator-5cc964979-cqvs4" Dec 13 01:30:10.698074 kubelet[2550]: I1213 01:30:10.698002 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5006f994-9115-4f8a-b832-3f266baa2c01-cilium-config-path\") pod \"cilium-operator-5cc964979-cqvs4\" (UID: \"5006f994-9115-4f8a-b832-3f266baa2c01\") " pod="kube-system/cilium-operator-5cc964979-cqvs4" Dec 13 01:30:10.710987 kubelet[2550]: E1213 01:30:10.710944 2550 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:30:10.711782 kubelet[2550]: E1213 01:30:10.711755 2550 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:30:10.711782 kubelet[2550]: E1213 01:30:10.711783 2550 projected.go:200] Error preparing data for projected volume kube-api-access-5g95f for pod kube-system/kube-proxy-f27zc: configmap "kube-root-ca.crt" not found Dec 13 01:30:10.711875 kubelet[2550]: E1213 01:30:10.711844 2550 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/275b4b52-73eb-406e-929b-b87efb590241-kube-api-access-5g95f podName:275b4b52-73eb-406e-929b-b87efb590241 nodeName:}" failed. No retries permitted until 2024-12-13 01:30:11.211826952 +0000 UTC m=+14.075338020 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5g95f" (UniqueName: "kubernetes.io/projected/275b4b52-73eb-406e-929b-b87efb590241-kube-api-access-5g95f") pod "kube-proxy-f27zc" (UID: "275b4b52-73eb-406e-929b-b87efb590241") : configmap "kube-root-ca.crt" not found Dec 13 01:30:10.712476 kubelet[2550]: E1213 01:30:10.712417 2550 projected.go:200] Error preparing data for projected volume kube-api-access-5ntwt for pod kube-system/cilium-2wjw9: configmap "kube-root-ca.crt" not found Dec 13 01:30:10.712643 kubelet[2550]: E1213 01:30:10.712625 2550 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb6672be-00ec-4007-9f40-aeadb88f6836-kube-api-access-5ntwt podName:eb6672be-00ec-4007-9f40-aeadb88f6836 nodeName:}" failed. No retries permitted until 2024-12-13 01:30:11.212609117 +0000 UTC m=+14.076120185 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5ntwt" (UniqueName: "kubernetes.io/projected/eb6672be-00ec-4007-9f40-aeadb88f6836-kube-api-access-5ntwt") pod "cilium-2wjw9" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836") : configmap "kube-root-ca.crt" not found Dec 13 01:30:10.811336 kubelet[2550]: E1213 01:30:10.811228 2550 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:30:10.811336 kubelet[2550]: E1213 01:30:10.811262 2550 projected.go:200] Error preparing data for projected volume kube-api-access-pxkk5 for pod kube-system/cilium-operator-5cc964979-cqvs4: configmap "kube-root-ca.crt" not found Dec 13 01:30:10.811336 kubelet[2550]: E1213 01:30:10.811318 2550 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5006f994-9115-4f8a-b832-3f266baa2c01-kube-api-access-pxkk5 podName:5006f994-9115-4f8a-b832-3f266baa2c01 nodeName:}" failed. No retries permitted until 2024-12-13 01:30:11.311300323 +0000 UTC m=+14.174811351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pxkk5" (UniqueName: "kubernetes.io/projected/5006f994-9115-4f8a-b832-3f266baa2c01-kube-api-access-pxkk5") pod "cilium-operator-5cc964979-cqvs4" (UID: "5006f994-9115-4f8a-b832-3f266baa2c01") : configmap "kube-root-ca.crt" not found Dec 13 01:30:11.125495 update_engine[1425]: I20241213 01:30:11.125438 1425 update_attempter.cc:509] Updating boot flags... Dec 13 01:30:11.164458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2637) Dec 13 01:30:11.190507 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2639) Dec 13 01:30:11.218454 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2639) Dec 13 01:30:11.450778 kubelet[2550]: E1213 01:30:11.450554 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:11.454114 containerd[1440]: time="2024-12-13T01:30:11.454076700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f27zc,Uid:275b4b52-73eb-406e-929b-b87efb590241,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:11.464013 kubelet[2550]: E1213 01:30:11.463790 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:11.464517 containerd[1440]: time="2024-12-13T01:30:11.464481904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wjw9,Uid:eb6672be-00ec-4007-9f40-aeadb88f6836,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:11.476954 containerd[1440]: time="2024-12-13T01:30:11.476803106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:11.476954 containerd[1440]: time="2024-12-13T01:30:11.476890956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:11.477108 containerd[1440]: time="2024-12-13T01:30:11.476931320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:11.477296 containerd[1440]: time="2024-12-13T01:30:11.477245272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:11.482807 containerd[1440]: time="2024-12-13T01:30:11.482707401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:11.482807 containerd[1440]: time="2024-12-13T01:30:11.482771968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:11.484614 containerd[1440]: time="2024-12-13T01:30:11.482949186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:11.484614 containerd[1440]: time="2024-12-13T01:30:11.483528286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:11.494575 systemd[1]: Started cri-containerd-fdcd3776ecd67f78afec889083ef33aad867b27ff0248b5ddf8b75088dde5b35.scope - libcontainer container fdcd3776ecd67f78afec889083ef33aad867b27ff0248b5ddf8b75088dde5b35. Dec 13 01:30:11.497796 systemd[1]: Started cri-containerd-0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404.scope - libcontainer container 0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404. Dec 13 01:30:11.517984 containerd[1440]: time="2024-12-13T01:30:11.517887463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f27zc,Uid:275b4b52-73eb-406e-929b-b87efb590241,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdcd3776ecd67f78afec889083ef33aad867b27ff0248b5ddf8b75088dde5b35\"" Dec 13 01:30:11.521585 kubelet[2550]: E1213 01:30:11.521561 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:11.525646 containerd[1440]: time="2024-12-13T01:30:11.525535020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wjw9,Uid:eb6672be-00ec-4007-9f40-aeadb88f6836,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\"" Dec 13 01:30:11.526439 kubelet[2550]: E1213 01:30:11.526332 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:11.526896 containerd[1440]: time="2024-12-13T01:30:11.526591890Z" level=info msg="CreateContainer within sandbox \"fdcd3776ecd67f78afec889083ef33aad867b27ff0248b5ddf8b75088dde5b35\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:30:11.528666 containerd[1440]: time="2024-12-13T01:30:11.528636302Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:30:11.565635 kubelet[2550]: E1213 01:30:11.565601 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:11.566672 containerd[1440]: time="2024-12-13T01:30:11.566634498Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-5cc964979-cqvs4,Uid:5006f994-9115-4f8a-b832-3f266baa2c01,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:11.567318 containerd[1440]: time="2024-12-13T01:30:11.566903446Z" level=info msg="CreateContainer within sandbox \"fdcd3776ecd67f78afec889083ef33aad867b27ff0248b5ddf8b75088dde5b35\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c122d693d109b5aa72fd730edefaccd8658aaedee364c64fe20db71b1e92435\"" Dec 13 01:30:11.567691 containerd[1440]: time="2024-12-13T01:30:11.567668646Z" level=info msg="StartContainer for \"5c122d693d109b5aa72fd730edefaccd8658aaedee364c64fe20db71b1e92435\"" Dec 13 01:30:11.589004 containerd[1440]: time="2024-12-13T01:30:11.588289873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:11.589004 containerd[1440]: time="2024-12-13T01:30:11.588833569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:11.589004 containerd[1440]: time="2024-12-13T01:30:11.588846931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:11.589004 containerd[1440]: time="2024-12-13T01:30:11.588984985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:11.600579 systemd[1]: Started cri-containerd-5c122d693d109b5aa72fd730edefaccd8658aaedee364c64fe20db71b1e92435.scope - libcontainer container 5c122d693d109b5aa72fd730edefaccd8658aaedee364c64fe20db71b1e92435. Dec 13 01:30:11.603317 systemd[1]: Started cri-containerd-de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183.scope - libcontainer container de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183. 
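The MountVolume.SetUp failures above clear up on their own once kube-controller-manager publishes the kube-root-ca.crt ConfigMap into each namespace; until then the kubelet simply retries the projected-volume mount with exponential backoff, which is what the "No retries permitted until ... (durationBeforeRetry 500ms)" message records. A minimal sketch of that schedule, assuming the first retry is deferred 500ms and each failure doubles the delay (the 2m2s cap is an assumption from kubelet defaults, not something this log shows):

    package main

    import (
        "fmt"
        "time"
    )

    // Sketch of the kubelet-style backoff behind the nestedpendingoperations
    // message above: 500ms initial delay, doubled per failure. The 2m2s cap
    // is an assumed default, not taken from this log.
    func main() {
        delay := 500 * time.Millisecond
        const maxDelay = 2*time.Minute + 2*time.Second
        for i := 1; i <= 8; i++ {
            fmt.Printf("retry %d deferred %v\n", i, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

By the time the deferred retry fires at m=+14.17, the ConfigMap exists and both sandboxes start, as the RunPodSandbox lines that follow show.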
Dec 13 01:30:11.630261 containerd[1440]: time="2024-12-13T01:30:11.630134629Z" level=info msg="StartContainer for \"5c122d693d109b5aa72fd730edefaccd8658aaedee364c64fe20db71b1e92435\" returns successfully" Dec 13 01:30:11.636963 containerd[1440]: time="2024-12-13T01:30:11.636925536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-cqvs4,Uid:5006f994-9115-4f8a-b832-3f266baa2c01,Namespace:kube-system,Attempt:0,} returns sandbox id \"de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183\"" Dec 13 01:30:11.639156 kubelet[2550]: E1213 01:30:11.639133 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:12.256881 kubelet[2550]: E1213 01:30:12.256835 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:12.270334 kubelet[2550]: I1213 01:30:12.270072 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f27zc" podStartSLOduration=2.270035209 podStartE2EDuration="2.270035209s" podCreationTimestamp="2024-12-13 01:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:12.269918038 +0000 UTC m=+15.133429106" watchObservedRunningTime="2024-12-13 01:30:12.270035209 +0000 UTC m=+15.133546317" Dec 13 01:30:18.267303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440189590.mount: Deactivated successfully. Dec 13 01:30:19.653887 containerd[1440]: time="2024-12-13T01:30:19.653826579Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:19.654329 containerd[1440]: time="2024-12-13T01:30:19.654284691Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651470" Dec 13 01:30:19.655074 containerd[1440]: time="2024-12-13T01:30:19.655036265Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:19.656713 containerd[1440]: time="2024-12-13T01:30:19.656678703Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.127919028s" Dec 13 01:30:19.657444 containerd[1440]: time="2024-12-13T01:30:19.656714545Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 01:30:19.663944 containerd[1440]: time="2024-12-13T01:30:19.663905220Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:30:19.665585 containerd[1440]: time="2024-12-13T01:30:19.665363684Z" 
level=info msg="CreateContainer within sandbox \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:30:19.690558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1406019737.mount: Deactivated successfully. Dec 13 01:30:19.692827 containerd[1440]: time="2024-12-13T01:30:19.692780086Z" level=info msg="CreateContainer within sandbox \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3\"" Dec 13 01:30:19.693275 containerd[1440]: time="2024-12-13T01:30:19.693248879Z" level=info msg="StartContainer for \"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3\"" Dec 13 01:30:19.720610 systemd[1]: Started cri-containerd-f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3.scope - libcontainer container f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3. Dec 13 01:30:19.790848 systemd[1]: cri-containerd-f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3.scope: Deactivated successfully. Dec 13 01:30:19.810020 containerd[1440]: time="2024-12-13T01:30:19.809868462Z" level=info msg="StartContainer for \"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3\" returns successfully" Dec 13 01:30:19.984455 containerd[1440]: time="2024-12-13T01:30:19.984257059Z" level=info msg="shim disconnected" id=f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3 namespace=k8s.io Dec 13 01:30:19.984455 containerd[1440]: time="2024-12-13T01:30:19.984306542Z" level=warning msg="cleaning up after shim disconnected" id=f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3 namespace=k8s.io Dec 13 01:30:19.984455 containerd[1440]: time="2024-12-13T01:30:19.984317743Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:20.292646 kubelet[2550]: E1213 01:30:20.292528 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:20.295027 containerd[1440]: time="2024-12-13T01:30:20.294994646Z" level=info msg="CreateContainer within sandbox \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:30:20.304237 containerd[1440]: time="2024-12-13T01:30:20.304196397Z" level=info msg="CreateContainer within sandbox \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f\"" Dec 13 01:30:20.304729 containerd[1440]: time="2024-12-13T01:30:20.304698551Z" level=info msg="StartContainer for \"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f\"" Dec 13 01:30:20.331563 systemd[1]: Started cri-containerd-9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f.scope - libcontainer container 9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f. Dec 13 01:30:20.353613 containerd[1440]: time="2024-12-13T01:30:20.353565940Z" level=info msg="StartContainer for \"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f\" returns successfully" Dec 13 01:30:20.365883 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Dec 13 01:30:20.366423 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:30:20.366493 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:30:20.373644 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:30:20.373880 systemd[1]: cri-containerd-9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f.scope: Deactivated successfully. Dec 13 01:30:20.395004 containerd[1440]: time="2024-12-13T01:30:20.394875971Z" level=info msg="shim disconnected" id=9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f namespace=k8s.io Dec 13 01:30:20.395004 containerd[1440]: time="2024-12-13T01:30:20.395001300Z" level=warning msg="cleaning up after shim disconnected" id=9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f namespace=k8s.io Dec 13 01:30:20.395161 containerd[1440]: time="2024-12-13T01:30:20.395017181Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:20.403067 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:30:20.686959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3-rootfs.mount: Deactivated successfully. Dec 13 01:30:20.765533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460785149.mount: Deactivated successfully. Dec 13 01:30:21.220437 containerd[1440]: time="2024-12-13T01:30:21.220296521Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:21.220821 containerd[1440]: time="2024-12-13T01:30:21.220786313Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138314" Dec 13 01:30:21.221528 containerd[1440]: time="2024-12-13T01:30:21.221474398Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:21.223802 containerd[1440]: time="2024-12-13T01:30:21.223766229Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.559772163s" Dec 13 01:30:21.223840 containerd[1440]: time="2024-12-13T01:30:21.223817672Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 01:30:21.225919 containerd[1440]: time="2024-12-13T01:30:21.225886928Z" level=info msg="CreateContainer within sandbox \"de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:30:21.240516 containerd[1440]: time="2024-12-13T01:30:21.240448925Z" level=info msg="CreateContainer within sandbox \"de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\"" Dec 13 01:30:21.241020 containerd[1440]: time="2024-12-13T01:30:21.240814589Z" level=info msg="StartContainer for \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\"" Dec 13 01:30:21.270566 systemd[1]: Started cri-containerd-44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318.scope - libcontainer container 44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318. Dec 13 01:30:21.288668 containerd[1440]: time="2024-12-13T01:30:21.288550766Z" level=info msg="StartContainer for \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\" returns successfully" Dec 13 01:30:21.298594 kubelet[2550]: E1213 01:30:21.297955 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:21.301067 kubelet[2550]: E1213 01:30:21.300911 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:21.302662 containerd[1440]: time="2024-12-13T01:30:21.302631691Z" level=info msg="CreateContainer within sandbox \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:30:21.310446 kubelet[2550]: I1213 01:30:21.308975 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-cqvs4" podStartSLOduration=1.7259912750000002 podStartE2EDuration="11.308931785s" podCreationTimestamp="2024-12-13 01:30:10 +0000 UTC" firstStartedPulling="2024-12-13 01:30:11.641263708 +0000 UTC m=+14.504774736" lastFinishedPulling="2024-12-13 01:30:21.224204178 +0000 UTC m=+24.087715246" observedRunningTime="2024-12-13 01:30:21.308647166 +0000 UTC m=+24.172158234" watchObservedRunningTime="2024-12-13 01:30:21.308931785 +0000 UTC m=+24.172442853" Dec 13 01:30:21.321811 containerd[1440]: time="2024-12-13T01:30:21.321700224Z" level=info msg="CreateContainer within sandbox \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6\"" Dec 13 01:30:21.322359 containerd[1440]: time="2024-12-13T01:30:21.322331625Z" level=info msg="StartContainer for \"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6\"" Dec 13 01:30:21.355564 systemd[1]: Started cri-containerd-6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6.scope - libcontainer container 6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6. Dec 13 01:30:21.382144 containerd[1440]: time="2024-12-13T01:30:21.382035068Z" level=info msg="StartContainer for \"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6\" returns successfully" Dec 13 01:30:21.396186 systemd[1]: cri-containerd-6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6.scope: Deactivated successfully. 
Dec 13 01:30:21.485825 containerd[1440]: time="2024-12-13T01:30:21.485647997Z" level=info msg="shim disconnected" id=6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6 namespace=k8s.io Dec 13 01:30:21.485825 containerd[1440]: time="2024-12-13T01:30:21.485719601Z" level=warning msg="cleaning up after shim disconnected" id=6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6 namespace=k8s.io Dec 13 01:30:21.486622 containerd[1440]: time="2024-12-13T01:30:21.485842249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:21.499686 containerd[1440]: time="2024-12-13T01:30:21.499638116Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:30:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:30:22.308521 kubelet[2550]: E1213 01:30:22.308494 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:22.309341 kubelet[2550]: E1213 01:30:22.308591 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:22.312690 containerd[1440]: time="2024-12-13T01:30:22.312422500Z" level=info msg="CreateContainer within sandbox \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:30:22.336235 containerd[1440]: time="2024-12-13T01:30:22.336186679Z" level=info msg="CreateContainer within sandbox \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538\"" Dec 13 01:30:22.337326 containerd[1440]: time="2024-12-13T01:30:22.336638947Z" level=info msg="StartContainer for \"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538\"" Dec 13 01:30:22.365555 systemd[1]: Started cri-containerd-004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538.scope - libcontainer container 004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538. Dec 13 01:30:22.382522 systemd[1]: cri-containerd-004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538.scope: Deactivated successfully. Dec 13 01:30:22.385726 containerd[1440]: time="2024-12-13T01:30:22.385611915Z" level=info msg="StartContainer for \"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538\" returns successfully" Dec 13 01:30:22.402704 containerd[1440]: time="2024-12-13T01:30:22.402623308Z" level=info msg="shim disconnected" id=004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538 namespace=k8s.io Dec 13 01:30:22.402704 containerd[1440]: time="2024-12-13T01:30:22.402669671Z" level=warning msg="cleaning up after shim disconnected" id=004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538 namespace=k8s.io Dec 13 01:30:22.402704 containerd[1440]: time="2024-12-13T01:30:22.402678192Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:22.688568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538-rootfs.mount: Deactivated successfully. 
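The dns.go:153 error that recurs throughout this log is a cap rather than a failure: the kubelet applies at most three nameservers from the node's resolv.conf to pod DNS and drops the rest, so "1.1.1.1 1.0.0.1 8.8.8.8" is simply what survives the cut. A rough sketch of that trimming, with a hypothetical four-entry resolv.conf and the upstream limit of three taken as given:

    package main

    import (
        "fmt"
        "strings"
    )

    // Sketch of the nameserver trimming behind the repeated dns.go error:
    // only the first three nameservers are applied, the rest are dropped
    // and reported as omitted.
    func main() {
        resolvConf := `nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4` // hypothetical fourth entry that triggers the warning

        var servers []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) == 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        const maxNameservers = 3
        if len(servers) > maxNameservers {
            fmt.Println("omitted:", servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }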
Dec 13 01:30:23.315785 kubelet[2550]: E1213 01:30:23.315620 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:23.317932 containerd[1440]: time="2024-12-13T01:30:23.317894799Z" level=info msg="CreateContainer within sandbox \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:30:23.337872 containerd[1440]: time="2024-12-13T01:30:23.337834847Z" level=info msg="CreateContainer within sandbox \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\"" Dec 13 01:30:23.339629 containerd[1440]: time="2024-12-13T01:30:23.338972756Z" level=info msg="StartContainer for \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\"" Dec 13 01:30:23.362557 systemd[1]: Started cri-containerd-a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685.scope - libcontainer container a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685. Dec 13 01:30:23.385905 containerd[1440]: time="2024-12-13T01:30:23.385858077Z" level=info msg="StartContainer for \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\" returns successfully" Dec 13 01:30:23.518453 kubelet[2550]: I1213 01:30:23.518375 2550 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:30:23.551534 kubelet[2550]: I1213 01:30:23.551491 2550 topology_manager.go:215] "Topology Admit Handler" podUID="906ac978-ab8b-46f4-afbe-cfd3b59c6a1a" podNamespace="kube-system" podName="coredns-76f75df574-7t4c9" Dec 13 01:30:23.551712 kubelet[2550]: I1213 01:30:23.551691 2550 topology_manager.go:215] "Topology Admit Handler" podUID="55310cc2-dc4d-4abb-9d0a-5f15356e6466" podNamespace="kube-system" podName="coredns-76f75df574-nwpv8" Dec 13 01:30:23.561712 systemd[1]: Created slice kubepods-burstable-pod906ac978_ab8b_46f4_afbe_cfd3b59c6a1a.slice - libcontainer container kubepods-burstable-pod906ac978_ab8b_46f4_afbe_cfd3b59c6a1a.slice. Dec 13 01:30:23.570390 systemd[1]: Created slice kubepods-burstable-pod55310cc2_dc4d_4abb_9d0a_5f15356e6466.slice - libcontainer container kubepods-burstable-pod55310cc2_dc4d_4abb_9d0a_5f15356e6466.slice. 
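The kubepods-burstable-pod… slice names above are a mechanical transform of the pod UID: the kubelet's systemd cgroup driver swaps the dashes in the UID for underscores and nests the unit under the QoS-class slice. A small illustration (the helper name is mine, not the kubelet's):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the slice names visible above, e.g.
    // UID 906ac978-ab8b-46f4-afbe-cfd3b59c6a1a ->
    // kubepods-burstable-pod906ac978_ab8b_46f4_afbe_cfd3b59c6a1a.slice
    // (illustrative helper; the real logic lives in the kubelet's
    // systemd cgroup driver).
    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("burstable", "906ac978-ab8b-46f4-afbe-cfd3b59c6a1a"))
    }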
Dec 13 01:30:23.589356 kubelet[2550]: I1213 01:30:23.589324 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/906ac978-ab8b-46f4-afbe-cfd3b59c6a1a-config-volume\") pod \"coredns-76f75df574-7t4c9\" (UID: \"906ac978-ab8b-46f4-afbe-cfd3b59c6a1a\") " pod="kube-system/coredns-76f75df574-7t4c9" Dec 13 01:30:23.589593 kubelet[2550]: I1213 01:30:23.589374 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55310cc2-dc4d-4abb-9d0a-5f15356e6466-config-volume\") pod \"coredns-76f75df574-nwpv8\" (UID: \"55310cc2-dc4d-4abb-9d0a-5f15356e6466\") " pod="kube-system/coredns-76f75df574-nwpv8" Dec 13 01:30:23.589593 kubelet[2550]: I1213 01:30:23.589398 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp6sq\" (UniqueName: \"kubernetes.io/projected/55310cc2-dc4d-4abb-9d0a-5f15356e6466-kube-api-access-tp6sq\") pod \"coredns-76f75df574-nwpv8\" (UID: \"55310cc2-dc4d-4abb-9d0a-5f15356e6466\") " pod="kube-system/coredns-76f75df574-nwpv8" Dec 13 01:30:23.589593 kubelet[2550]: I1213 01:30:23.589430 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npw88\" (UniqueName: \"kubernetes.io/projected/906ac978-ab8b-46f4-afbe-cfd3b59c6a1a-kube-api-access-npw88\") pod \"coredns-76f75df574-7t4c9\" (UID: \"906ac978-ab8b-46f4-afbe-cfd3b59c6a1a\") " pod="kube-system/coredns-76f75df574-7t4c9" Dec 13 01:30:23.865551 kubelet[2550]: E1213 01:30:23.865398 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:23.867620 containerd[1440]: time="2024-12-13T01:30:23.867575339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7t4c9,Uid:906ac978-ab8b-46f4-afbe-cfd3b59c6a1a,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:23.875226 kubelet[2550]: E1213 01:30:23.873004 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:23.875326 containerd[1440]: time="2024-12-13T01:30:23.874232223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nwpv8,Uid:55310cc2-dc4d-4abb-9d0a-5f15356e6466,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:24.321099 kubelet[2550]: E1213 01:30:24.321059 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:25.322974 kubelet[2550]: E1213 01:30:25.322904 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:25.632395 systemd-networkd[1382]: cilium_host: Link UP Dec 13 01:30:25.632529 systemd-networkd[1382]: cilium_net: Link UP Dec 13 01:30:25.632532 systemd-networkd[1382]: cilium_net: Gained carrier Dec 13 01:30:25.632665 systemd-networkd[1382]: cilium_host: Gained carrier Dec 13 01:30:25.711720 systemd-networkd[1382]: cilium_vxlan: Link UP Dec 13 01:30:25.711728 systemd-networkd[1382]: cilium_vxlan: Gained carrier Dec 13 01:30:25.891535 systemd-networkd[1382]: cilium_net: Gained IPv6LL Dec 13 
01:30:26.007435 kernel: NET: Registered PF_ALG protocol family Dec 13 01:30:26.154973 systemd[1]: Started sshd@7-10.0.0.66:22-10.0.0.1:41416.service - OpenSSH per-connection server daemon (10.0.0.1:41416). Dec 13 01:30:26.190948 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 41416 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:26.192200 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:26.197500 systemd-logind[1422]: New session 8 of user core. Dec 13 01:30:26.206594 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:30:26.325459 kubelet[2550]: E1213 01:30:26.325317 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:26.344171 sshd[3511]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:26.347640 systemd[1]: sshd@7-10.0.0.66:22-10.0.0.1:41416.service: Deactivated successfully. Dec 13 01:30:26.349479 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:30:26.350036 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:30:26.350796 systemd-logind[1422]: Removed session 8. Dec 13 01:30:26.498753 systemd-networkd[1382]: cilium_host: Gained IPv6LL Dec 13 01:30:26.599289 systemd-networkd[1382]: lxc_health: Link UP Dec 13 01:30:26.608173 systemd-networkd[1382]: lxc_health: Gained carrier Dec 13 01:30:26.885369 systemd-networkd[1382]: cilium_vxlan: Gained IPv6LL Dec 13 01:30:27.020518 systemd-networkd[1382]: lxc53b5b8334726: Link UP Dec 13 01:30:27.025465 kernel: eth0: renamed from tmp90982 Dec 13 01:30:27.036212 systemd-networkd[1382]: lxc574b16b1c44a: Link UP Dec 13 01:30:27.043653 kernel: eth0: renamed from tmpa91a8 Dec 13 01:30:27.051874 systemd-networkd[1382]: lxc53b5b8334726: Gained carrier Dec 13 01:30:27.052037 systemd-networkd[1382]: lxc574b16b1c44a: Gained carrier Dec 13 01:30:27.326648 kubelet[2550]: E1213 01:30:27.326608 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:27.488932 kubelet[2550]: I1213 01:30:27.488892 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2wjw9" podStartSLOduration=9.359261987 podStartE2EDuration="17.488855159s" podCreationTimestamp="2024-12-13 01:30:10 +0000 UTC" firstStartedPulling="2024-12-13 01:30:11.527974514 +0000 UTC m=+14.391485582" lastFinishedPulling="2024-12-13 01:30:19.657567686 +0000 UTC m=+22.521078754" observedRunningTime="2024-12-13 01:30:24.34928479 +0000 UTC m=+27.212795858" watchObservedRunningTime="2024-12-13 01:30:27.488855159 +0000 UTC m=+30.352366227" Dec 13 01:30:28.162543 systemd-networkd[1382]: lxc_health: Gained IPv6LL Dec 13 01:30:28.329383 kubelet[2550]: E1213 01:30:28.329352 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:28.546776 systemd-networkd[1382]: lxc53b5b8334726: Gained IPv6LL Dec 13 01:30:28.547045 systemd-networkd[1382]: lxc574b16b1c44a: Gained IPv6LL Dec 13 01:30:29.330200 kubelet[2550]: E1213 01:30:29.330151 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
01:30:30.460750 containerd[1440]: time="2024-12-13T01:30:30.460600377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:30.460750 containerd[1440]: time="2024-12-13T01:30:30.460696381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:30.460750 containerd[1440]: time="2024-12-13T01:30:30.460717542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:30.461165 containerd[1440]: time="2024-12-13T01:30:30.460809706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:30.465732 containerd[1440]: time="2024-12-13T01:30:30.465267436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:30.465732 containerd[1440]: time="2024-12-13T01:30:30.465313318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:30.465732 containerd[1440]: time="2024-12-13T01:30:30.465323839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:30.465732 containerd[1440]: time="2024-12-13T01:30:30.465385682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:30.482201 systemd[1]: Started cri-containerd-a91a8506bf50eb40abfcf2d8092d401b94edf4533a377fcb6250679d2bc4f667.scope - libcontainer container a91a8506bf50eb40abfcf2d8092d401b94edf4533a377fcb6250679d2bc4f667. Dec 13 01:30:30.486677 systemd[1]: Started cri-containerd-909828845bd40b66c0a0c7bb510b7bc0c824c6f5e71aa14d7694c0023cacbd53.scope - libcontainer container 909828845bd40b66c0a0c7bb510b7bc0c824c6f5e71aa14d7694c0023cacbd53. 
Dec 13 01:30:30.492919 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:30:30.498664 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:30:30.513239 containerd[1440]: time="2024-12-13T01:30:30.512804033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nwpv8,Uid:55310cc2-dc4d-4abb-9d0a-5f15356e6466,Namespace:kube-system,Attempt:0,} returns sandbox id \"a91a8506bf50eb40abfcf2d8092d401b94edf4533a377fcb6250679d2bc4f667\"" Dec 13 01:30:30.514495 kubelet[2550]: E1213 01:30:30.514471 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:30.517999 containerd[1440]: time="2024-12-13T01:30:30.517963075Z" level=info msg="CreateContainer within sandbox \"a91a8506bf50eb40abfcf2d8092d401b94edf4533a377fcb6250679d2bc4f667\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:30:30.524046 containerd[1440]: time="2024-12-13T01:30:30.524004120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7t4c9,Uid:906ac978-ab8b-46f4-afbe-cfd3b59c6a1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"909828845bd40b66c0a0c7bb510b7bc0c824c6f5e71aa14d7694c0023cacbd53\"" Dec 13 01:30:30.525081 kubelet[2550]: E1213 01:30:30.524773 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:30.527804 containerd[1440]: time="2024-12-13T01:30:30.527759656Z" level=info msg="CreateContainer within sandbox \"909828845bd40b66c0a0c7bb510b7bc0c824c6f5e71aa14d7694c0023cacbd53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:30:30.537742 containerd[1440]: time="2024-12-13T01:30:30.537699484Z" level=info msg="CreateContainer within sandbox \"a91a8506bf50eb40abfcf2d8092d401b94edf4533a377fcb6250679d2bc4f667\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c88dbd7dbf168a2abdb3dc3098623f42629d7c5d8f65a2b1ead1ccac5292cfe8\"" Dec 13 01:30:30.538533 containerd[1440]: time="2024-12-13T01:30:30.538430158Z" level=info msg="StartContainer for \"c88dbd7dbf168a2abdb3dc3098623f42629d7c5d8f65a2b1ead1ccac5292cfe8\"" Dec 13 01:30:30.543257 containerd[1440]: time="2024-12-13T01:30:30.543211863Z" level=info msg="CreateContainer within sandbox \"909828845bd40b66c0a0c7bb510b7bc0c824c6f5e71aa14d7694c0023cacbd53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bda5a61ca3916d7275c3674439dbdf3aafd8bbbe4ec99d2ca0d9af758ca2ee2c\"" Dec 13 01:30:30.543738 containerd[1440]: time="2024-12-13T01:30:30.543673685Z" level=info msg="StartContainer for \"bda5a61ca3916d7275c3674439dbdf3aafd8bbbe4ec99d2ca0d9af758ca2ee2c\"" Dec 13 01:30:30.567661 systemd[1]: Started cri-containerd-c88dbd7dbf168a2abdb3dc3098623f42629d7c5d8f65a2b1ead1ccac5292cfe8.scope - libcontainer container c88dbd7dbf168a2abdb3dc3098623f42629d7c5d8f65a2b1ead1ccac5292cfe8. Dec 13 01:30:30.569945 systemd[1]: Started cri-containerd-bda5a61ca3916d7275c3674439dbdf3aafd8bbbe4ec99d2ca0d9af758ca2ee2c.scope - libcontainer container bda5a61ca3916d7275c3674439dbdf3aafd8bbbe4ec99d2ca0d9af758ca2ee2c. 
Dec 13 01:30:30.592610 containerd[1440]: time="2024-12-13T01:30:30.592439780Z" level=info msg="StartContainer for \"c88dbd7dbf168a2abdb3dc3098623f42629d7c5d8f65a2b1ead1ccac5292cfe8\" returns successfully" Dec 13 01:30:30.600952 containerd[1440]: time="2024-12-13T01:30:30.600809533Z" level=info msg="StartContainer for \"bda5a61ca3916d7275c3674439dbdf3aafd8bbbe4ec99d2ca0d9af758ca2ee2c\" returns successfully" Dec 13 01:30:31.339136 kubelet[2550]: E1213 01:30:31.339091 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:31.344847 kubelet[2550]: E1213 01:30:31.344808 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:31.350157 kubelet[2550]: I1213 01:30:31.350106 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nwpv8" podStartSLOduration=21.350071391 podStartE2EDuration="21.350071391s" podCreationTimestamp="2024-12-13 01:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:31.349213912 +0000 UTC m=+34.212725020" watchObservedRunningTime="2024-12-13 01:30:31.350071391 +0000 UTC m=+34.213582459" Dec 13 01:30:31.358397 systemd[1]: Started sshd@8-10.0.0.66:22-10.0.0.1:41428.service - OpenSSH per-connection server daemon (10.0.0.1:41428). Dec 13 01:30:31.361187 kubelet[2550]: I1213 01:30:31.361127 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7t4c9" podStartSLOduration=21.361088253 podStartE2EDuration="21.361088253s" podCreationTimestamp="2024-12-13 01:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:31.360865403 +0000 UTC m=+34.224376431" watchObservedRunningTime="2024-12-13 01:30:31.361088253 +0000 UTC m=+34.224599321" Dec 13 01:30:31.403948 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 41428 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:31.405371 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:31.409762 systemd-logind[1422]: New session 9 of user core. Dec 13 01:30:31.418624 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:30:31.527731 sshd[3960]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:31.531167 systemd[1]: sshd@8-10.0.0.66:22-10.0.0.1:41428.service: Deactivated successfully. Dec 13 01:30:31.532754 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:30:31.533524 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:30:31.534308 systemd-logind[1422]: Removed session 9. 
Dec 13 01:30:32.346705 kubelet[2550]: E1213 01:30:32.346656 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:32.346705 kubelet[2550]: E1213 01:30:32.346687 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:33.346301 kubelet[2550]: E1213 01:30:33.346258 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:33.346974 kubelet[2550]: E1213 01:30:33.346950 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:36.544139 systemd[1]: Started sshd@9-10.0.0.66:22-10.0.0.1:60768.service - OpenSSH per-connection server daemon (10.0.0.1:60768). Dec 13 01:30:36.579280 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 60768 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:36.580764 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:36.584490 systemd-logind[1422]: New session 10 of user core. Dec 13 01:30:36.595572 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:30:36.705543 sshd[3982]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:36.708936 systemd[1]: sshd@9-10.0.0.66:22-10.0.0.1:60768.service: Deactivated successfully. Dec 13 01:30:36.710869 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:30:36.711558 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:30:36.712447 systemd-logind[1422]: Removed session 10. Dec 13 01:30:41.717012 systemd[1]: Started sshd@10-10.0.0.66:22-10.0.0.1:60778.service - OpenSSH per-connection server daemon (10.0.0.1:60778). Dec 13 01:30:41.750744 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 60778 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:41.752018 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:41.755898 systemd-logind[1422]: New session 11 of user core. Dec 13 01:30:41.765599 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:30:41.875084 sshd[3999]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:41.878209 systemd[1]: sshd@10-10.0.0.66:22-10.0.0.1:60778.service: Deactivated successfully. Dec 13 01:30:41.879799 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:30:41.880366 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:30:41.881369 systemd-logind[1422]: Removed session 11. Dec 13 01:30:46.886001 systemd[1]: Started sshd@11-10.0.0.66:22-10.0.0.1:51302.service - OpenSSH per-connection server daemon (10.0.0.1:51302). Dec 13 01:30:46.919378 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 51302 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:46.920654 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:46.924453 systemd-logind[1422]: New session 12 of user core. Dec 13 01:30:46.930555 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 13 01:30:47.034551 sshd[4016]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:47.038147 systemd[1]: sshd@11-10.0.0.66:22-10.0.0.1:51302.service: Deactivated successfully. Dec 13 01:30:47.040713 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:30:47.041366 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:30:47.042429 systemd-logind[1422]: Removed session 12. Dec 13 01:30:52.047027 systemd[1]: Started sshd@12-10.0.0.66:22-10.0.0.1:51314.service - OpenSSH per-connection server daemon (10.0.0.1:51314). Dec 13 01:30:52.080064 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 51314 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:52.081193 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:52.084551 systemd-logind[1422]: New session 13 of user core. Dec 13 01:30:52.091556 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:30:52.194834 sshd[4032]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:52.207815 systemd[1]: sshd@12-10.0.0.66:22-10.0.0.1:51314.service: Deactivated successfully. Dec 13 01:30:52.209294 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:30:52.210514 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:30:52.220642 systemd[1]: Started sshd@13-10.0.0.66:22-10.0.0.1:51324.service - OpenSSH per-connection server daemon (10.0.0.1:51324). Dec 13 01:30:52.221917 systemd-logind[1422]: Removed session 13. Dec 13 01:30:52.251723 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 51324 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:52.252916 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:52.256397 systemd-logind[1422]: New session 14 of user core. Dec 13 01:30:52.267543 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:30:52.404183 sshd[4047]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:52.413152 systemd[1]: sshd@13-10.0.0.66:22-10.0.0.1:51324.service: Deactivated successfully. Dec 13 01:30:52.417800 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:30:52.422037 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:30:52.430820 systemd[1]: Started sshd@14-10.0.0.66:22-10.0.0.1:51326.service - OpenSSH per-connection server daemon (10.0.0.1:51326). Dec 13 01:30:52.431975 systemd-logind[1422]: Removed session 14. Dec 13 01:30:52.462929 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 51326 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:52.464121 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:52.468428 systemd-logind[1422]: New session 15 of user core. Dec 13 01:30:52.478562 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:30:52.587392 sshd[4059]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:52.590533 systemd[1]: sshd@14-10.0.0.66:22-10.0.0.1:51326.service: Deactivated successfully. Dec 13 01:30:52.593214 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:30:52.594296 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:30:52.595124 systemd-logind[1422]: Removed session 15. 
Dec 13 01:30:57.597955 systemd[1]: Started sshd@15-10.0.0.66:22-10.0.0.1:40630.service - OpenSSH per-connection server daemon (10.0.0.1:40630). Dec 13 01:30:57.631299 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 40630 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:57.632479 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:57.635705 systemd-logind[1422]: New session 16 of user core. Dec 13 01:30:57.641581 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:30:57.746601 sshd[4076]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:57.758844 systemd[1]: sshd@15-10.0.0.66:22-10.0.0.1:40630.service: Deactivated successfully. Dec 13 01:30:57.760375 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:30:57.762449 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:30:57.774776 systemd[1]: Started sshd@16-10.0.0.66:22-10.0.0.1:40640.service - OpenSSH per-connection server daemon (10.0.0.1:40640). Dec 13 01:30:57.776201 systemd-logind[1422]: Removed session 16. Dec 13 01:30:57.804273 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 40640 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:57.805555 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:57.809243 systemd-logind[1422]: New session 17 of user core. Dec 13 01:30:57.816561 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:30:58.036134 sshd[4090]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:58.048905 systemd[1]: sshd@16-10.0.0.66:22-10.0.0.1:40640.service: Deactivated successfully. Dec 13 01:30:58.050672 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:30:58.052025 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:30:58.053177 systemd[1]: Started sshd@17-10.0.0.66:22-10.0.0.1:40656.service - OpenSSH per-connection server daemon (10.0.0.1:40656). Dec 13 01:30:58.054163 systemd-logind[1422]: Removed session 17. Dec 13 01:30:58.091824 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:58.093090 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:58.097112 systemd-logind[1422]: New session 18 of user core. Dec 13 01:30:58.108607 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:30:59.293114 sshd[4102]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:59.310110 systemd[1]: sshd@17-10.0.0.66:22-10.0.0.1:40656.service: Deactivated successfully. Dec 13 01:30:59.313773 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:30:59.316681 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:30:59.321714 systemd[1]: Started sshd@18-10.0.0.66:22-10.0.0.1:40668.service - OpenSSH per-connection server daemon (10.0.0.1:40668). Dec 13 01:30:59.323109 systemd-logind[1422]: Removed session 18. Dec 13 01:30:59.353665 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 40668 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:59.355188 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:59.359080 systemd-logind[1422]: New session 19 of user core. 
Dec 13 01:30:59.370601 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:30:59.580533 sshd[4125]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:59.591881 systemd[1]: sshd@18-10.0.0.66:22-10.0.0.1:40668.service: Deactivated successfully. Dec 13 01:30:59.594259 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:30:59.596449 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:30:59.605733 systemd[1]: Started sshd@19-10.0.0.66:22-10.0.0.1:40674.service - OpenSSH per-connection server daemon (10.0.0.1:40674). Dec 13 01:30:59.607858 systemd-logind[1422]: Removed session 19. Dec 13 01:30:59.636477 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 40674 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:59.637904 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:59.642465 systemd-logind[1422]: New session 20 of user core. Dec 13 01:30:59.652580 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:30:59.764319 sshd[4137]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:59.766872 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:30:59.768271 systemd[1]: sshd@19-10.0.0.66:22-10.0.0.1:40674.service: Deactivated successfully. Dec 13 01:30:59.771063 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:30:59.771911 systemd-logind[1422]: Removed session 20. Dec 13 01:31:04.776121 systemd[1]: Started sshd@20-10.0.0.66:22-10.0.0.1:38490.service - OpenSSH per-connection server daemon (10.0.0.1:38490). Dec 13 01:31:04.809588 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 38490 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:31:04.810866 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:04.815161 systemd-logind[1422]: New session 21 of user core. Dec 13 01:31:04.823596 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:31:04.927646 sshd[4155]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:04.930999 systemd[1]: sshd@20-10.0.0.66:22-10.0.0.1:38490.service: Deactivated successfully. Dec 13 01:31:04.932691 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:31:04.933270 systemd-logind[1422]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:31:04.934361 systemd-logind[1422]: Removed session 21. Dec 13 01:31:09.943066 systemd[1]: Started sshd@21-10.0.0.66:22-10.0.0.1:38500.service - OpenSSH per-connection server daemon (10.0.0.1:38500). Dec 13 01:31:09.977199 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 38500 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:31:09.978536 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:09.983095 systemd-logind[1422]: New session 22 of user core. Dec 13 01:31:09.992585 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:31:10.095471 sshd[4170]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:10.098661 systemd[1]: sshd@21-10.0.0.66:22-10.0.0.1:38500.service: Deactivated successfully. Dec 13 01:31:10.100224 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:31:10.100797 systemd-logind[1422]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:31:10.101564 systemd-logind[1422]: Removed session 22. 
Dec 13 01:31:11.222555 kubelet[2550]: E1213 01:31:11.222527 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:15.109799 systemd[1]: Started sshd@22-10.0.0.66:22-10.0.0.1:45974.service - OpenSSH per-connection server daemon (10.0.0.1:45974). Dec 13 01:31:15.143087 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 45974 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:31:15.144217 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:15.148145 systemd-logind[1422]: New session 23 of user core. Dec 13 01:31:15.157553 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:31:15.262133 sshd[4186]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:15.274787 systemd[1]: sshd@22-10.0.0.66:22-10.0.0.1:45974.service: Deactivated successfully. Dec 13 01:31:15.276239 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:31:15.277520 systemd-logind[1422]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:31:15.278821 systemd[1]: Started sshd@23-10.0.0.66:22-10.0.0.1:45988.service - OpenSSH per-connection server daemon (10.0.0.1:45988). Dec 13 01:31:15.280066 systemd-logind[1422]: Removed session 23. Dec 13 01:31:15.312006 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 45988 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:31:15.313163 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:15.317182 systemd-logind[1422]: New session 24 of user core. Dec 13 01:31:15.328555 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:31:17.343231 containerd[1440]: time="2024-12-13T01:31:17.343192715Z" level=info msg="StopContainer for \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\" with timeout 30 (s)" Dec 13 01:31:17.347649 containerd[1440]: time="2024-12-13T01:31:17.343551516Z" level=info msg="Stop container \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\" with signal terminated" Dec 13 01:31:17.352770 systemd[1]: cri-containerd-44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318.scope: Deactivated successfully. Dec 13 01:31:17.361588 containerd[1440]: time="2024-12-13T01:31:17.361557915Z" level=info msg="StopContainer for \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\" with timeout 2 (s)" Dec 13 01:31:17.362000 containerd[1440]: time="2024-12-13T01:31:17.361971276Z" level=info msg="Stop container \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\" with signal terminated" Dec 13 01:31:17.363347 containerd[1440]: time="2024-12-13T01:31:17.363282439Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:31:17.368500 systemd-networkd[1382]: lxc_health: Link DOWN Dec 13 01:31:17.368506 systemd-networkd[1382]: lxc_health: Lost carrier Dec 13 01:31:17.372963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318-rootfs.mount: Deactivated successfully. 
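The teardown starting here follows the usual CRI stop contract: StopContainer carries a grace period (30s for the operator, 2s for the cilium agent), the runtime delivers SIGTERM first (the "with signal terminated" lines), and escalates to SIGKILL only if the process outlives the timeout. A minimal sketch of that two-phase stop against an ordinary Linux process, standing in for what the shim does:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // Two-phase stop in the spirit of CRI StopContainer: SIGTERM, wait up
    // to the grace period, then SIGKILL. Shown on a plain process rather
    // than a real container runtime.
    func stop(cmd *exec.Cmd, grace time.Duration) {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        cmd.Process.Signal(syscall.SIGTERM)
        select {
        case <-done:
            fmt.Println("exited within grace period")
        case <-time.After(grace):
            cmd.Process.Kill() // escalate, as the runtime would on timeout
            <-done
            fmt.Println("killed after grace period")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        cmd.Start()
        stop(cmd, 2*time.Second)
    }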
Dec 13 01:31:17.383123 containerd[1440]: time="2024-12-13T01:31:17.382931722Z" level=info msg="shim disconnected" id=44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318 namespace=k8s.io Dec 13 01:31:17.383479 containerd[1440]: time="2024-12-13T01:31:17.383306443Z" level=warning msg="cleaning up after shim disconnected" id=44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318 namespace=k8s.io Dec 13 01:31:17.383479 containerd[1440]: time="2024-12-13T01:31:17.383328403Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:17.386571 systemd[1]: cri-containerd-a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685.scope: Deactivated successfully. Dec 13 01:31:17.386830 systemd[1]: cri-containerd-a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685.scope: Consumed 6.330s CPU time. Dec 13 01:31:17.405923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685-rootfs.mount: Deactivated successfully. Dec 13 01:31:17.411196 containerd[1440]: time="2024-12-13T01:31:17.411129144Z" level=info msg="shim disconnected" id=a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685 namespace=k8s.io Dec 13 01:31:17.411196 containerd[1440]: time="2024-12-13T01:31:17.411184104Z" level=warning msg="cleaning up after shim disconnected" id=a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685 namespace=k8s.io Dec 13 01:31:17.411196 containerd[1440]: time="2024-12-13T01:31:17.411192544Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:17.426767 containerd[1440]: time="2024-12-13T01:31:17.426723778Z" level=info msg="StopContainer for \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\" returns successfully" Dec 13 01:31:17.427665 containerd[1440]: time="2024-12-13T01:31:17.427630300Z" level=info msg="StopPodSandbox for \"de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183\"" Dec 13 01:31:17.427729 containerd[1440]: time="2024-12-13T01:31:17.427676581Z" level=info msg="Container to stop \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:17.429751 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183-shm.mount: Deactivated successfully. 
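The "Consumed 6.330s CPU time" line comes from systemd's cgroup CPU accounting for the scope; assuming a cgroup v2 host, it corresponds to the usage_usec counter in the unit's cpu.stat, converted to seconds. A sketch of reading that counter (the cgroup path is illustrative, not this scope's real path):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // Read usage_usec from a cgroup v2 cpu.stat file and report it the way
    // systemd does ("Consumed N CPU time"). Path is illustrative.
    func main() {
        data, err := os.ReadFile("/sys/fs/cgroup/system.slice/cpu.stat")
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, line := range strings.Split(string(data), "\n") {
            if usec, ok := strings.CutPrefix(line, "usage_usec "); ok {
                n, _ := strconv.ParseFloat(usec, 64)
                fmt.Printf("Consumed %.3fs CPU time\n", n/1e6)
            }
        }
    }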
Dec 13 01:31:17.431162 containerd[1440]: time="2024-12-13T01:31:17.431132868Z" level=info msg="StopContainer for \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\" returns successfully" Dec 13 01:31:17.431580 containerd[1440]: time="2024-12-13T01:31:17.431556789Z" level=info msg="StopPodSandbox for \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\"" Dec 13 01:31:17.431618 containerd[1440]: time="2024-12-13T01:31:17.431596469Z" level=info msg="Container to stop \"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:17.431618 containerd[1440]: time="2024-12-13T01:31:17.431609909Z" level=info msg="Container to stop \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:17.431662 containerd[1440]: time="2024-12-13T01:31:17.431619269Z" level=info msg="Container to stop \"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:17.431662 containerd[1440]: time="2024-12-13T01:31:17.431628589Z" level=info msg="Container to stop \"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:17.431662 containerd[1440]: time="2024-12-13T01:31:17.431637869Z" level=info msg="Container to stop \"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:31:17.433621 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404-shm.mount: Deactivated successfully. Dec 13 01:31:17.434550 systemd[1]: cri-containerd-de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183.scope: Deactivated successfully. Dec 13 01:31:17.437584 systemd[1]: cri-containerd-0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404.scope: Deactivated successfully. 
Dec 13 01:31:17.470179 containerd[1440]: time="2024-12-13T01:31:17.469998474Z" level=info msg="shim disconnected" id=0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404 namespace=k8s.io Dec 13 01:31:17.470179 containerd[1440]: time="2024-12-13T01:31:17.470050914Z" level=warning msg="cleaning up after shim disconnected" id=0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404 namespace=k8s.io Dec 13 01:31:17.470179 containerd[1440]: time="2024-12-13T01:31:17.470059434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:17.477365 containerd[1440]: time="2024-12-13T01:31:17.477285290Z" level=info msg="shim disconnected" id=de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183 namespace=k8s.io Dec 13 01:31:17.477365 containerd[1440]: time="2024-12-13T01:31:17.477353850Z" level=warning msg="cleaning up after shim disconnected" id=de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183 namespace=k8s.io Dec 13 01:31:17.477365 containerd[1440]: time="2024-12-13T01:31:17.477363050Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:17.489003 containerd[1440]: time="2024-12-13T01:31:17.486961271Z" level=info msg="TearDown network for sandbox \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" successfully" Dec 13 01:31:17.489003 containerd[1440]: time="2024-12-13T01:31:17.487007271Z" level=info msg="StopPodSandbox for \"0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404\" returns successfully" Dec 13 01:31:17.494140 containerd[1440]: time="2024-12-13T01:31:17.494108926Z" level=info msg="TearDown network for sandbox \"de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183\" successfully" Dec 13 01:31:17.494140 containerd[1440]: time="2024-12-13T01:31:17.494137087Z" level=info msg="StopPodSandbox for \"de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183\" returns successfully" Dec 13 01:31:17.685971 kubelet[2550]: I1213 01:31:17.685852 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-cgroup\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.685971 kubelet[2550]: I1213 01:31:17.685901 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb6672be-00ec-4007-9f40-aeadb88f6836-clustermesh-secrets\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.685971 kubelet[2550]: I1213 01:31:17.685927 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxkk5\" (UniqueName: \"kubernetes.io/projected/5006f994-9115-4f8a-b832-3f266baa2c01-kube-api-access-pxkk5\") pod \"5006f994-9115-4f8a-b832-3f266baa2c01\" (UID: \"5006f994-9115-4f8a-b832-3f266baa2c01\") " Dec 13 01:31:17.685971 kubelet[2550]: I1213 01:31:17.685949 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-config-path\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.687503 kubelet[2550]: I1213 01:31:17.685968 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-host-proc-sys-kernel\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.687503 kubelet[2550]: I1213 01:31:17.686010 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-run\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.687503 kubelet[2550]: I1213 01:31:17.686032 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ntwt\" (UniqueName: \"kubernetes.io/projected/eb6672be-00ec-4007-9f40-aeadb88f6836-kube-api-access-5ntwt\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.687503 kubelet[2550]: I1213 01:31:17.686050 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-etc-cni-netd\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.687503 kubelet[2550]: I1213 01:31:17.686066 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-lib-modules\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.687503 kubelet[2550]: I1213 01:31:17.686085 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-host-proc-sys-net\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.687868 kubelet[2550]: I1213 01:31:17.686103 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-hostproc\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.687868 kubelet[2550]: I1213 01:31:17.686121 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-xtables-lock\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.687868 kubelet[2550]: I1213 01:31:17.686140 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5006f994-9115-4f8a-b832-3f266baa2c01-cilium-config-path\") pod \"5006f994-9115-4f8a-b832-3f266baa2c01\" (UID: \"5006f994-9115-4f8a-b832-3f266baa2c01\") " Dec 13 01:31:17.687868 kubelet[2550]: I1213 01:31:17.686158 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cni-path\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.687868 kubelet[2550]: I1213 01:31:17.686176 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-bpf-maps\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.687868 kubelet[2550]: I1213 01:31:17.686197 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb6672be-00ec-4007-9f40-aeadb88f6836-hubble-tls\") pod \"eb6672be-00ec-4007-9f40-aeadb88f6836\" (UID: \"eb6672be-00ec-4007-9f40-aeadb88f6836\") " Dec 13 01:31:17.689622 kubelet[2550]: I1213 01:31:17.689580 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:17.690652 kubelet[2550]: I1213 01:31:17.689765 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:17.690652 kubelet[2550]: I1213 01:31:17.689835 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:17.690652 kubelet[2550]: I1213 01:31:17.689856 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:17.690652 kubelet[2550]: I1213 01:31:17.689873 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-hostproc" (OuterVolumeSpecName: "hostproc") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:17.690652 kubelet[2550]: I1213 01:31:17.689891 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:17.690815 kubelet[2550]: I1213 01:31:17.690678 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:17.690815 kubelet[2550]: I1213 01:31:17.690729 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:17.697362 kubelet[2550]: I1213 01:31:17.697192 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6672be-00ec-4007-9f40-aeadb88f6836-kube-api-access-5ntwt" (OuterVolumeSpecName: "kube-api-access-5ntwt") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "kube-api-access-5ntwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:31:17.697362 kubelet[2550]: I1213 01:31:17.697233 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6672be-00ec-4007-9f40-aeadb88f6836-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:31:17.697362 kubelet[2550]: I1213 01:31:17.697266 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6672be-00ec-4007-9f40-aeadb88f6836-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:31:17.697362 kubelet[2550]: I1213 01:31:17.697281 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cni-path" (OuterVolumeSpecName: "cni-path") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:17.697362 kubelet[2550]: I1213 01:31:17.697313 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:31:17.697906 kubelet[2550]: I1213 01:31:17.697870 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5006f994-9115-4f8a-b832-3f266baa2c01-kube-api-access-pxkk5" (OuterVolumeSpecName: "kube-api-access-pxkk5") pod "5006f994-9115-4f8a-b832-3f266baa2c01" (UID: "5006f994-9115-4f8a-b832-3f266baa2c01"). InnerVolumeSpecName "kube-api-access-pxkk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:31:17.699426 kubelet[2550]: I1213 01:31:17.699385 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eb6672be-00ec-4007-9f40-aeadb88f6836" (UID: "eb6672be-00ec-4007-9f40-aeadb88f6836"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:31:17.699488 kubelet[2550]: I1213 01:31:17.699432 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5006f994-9115-4f8a-b832-3f266baa2c01-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5006f994-9115-4f8a-b832-3f266baa2c01" (UID: "5006f994-9115-4f8a-b832-3f266baa2c01"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:31:17.786836 kubelet[2550]: I1213 01:31:17.786803 2550 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.786836 kubelet[2550]: I1213 01:31:17.786833 2550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5ntwt\" (UniqueName: \"kubernetes.io/projected/eb6672be-00ec-4007-9f40-aeadb88f6836-kube-api-access-5ntwt\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.786836 kubelet[2550]: I1213 01:31:17.786845 2550 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.786991 kubelet[2550]: I1213 01:31:17.786857 2550 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.786991 kubelet[2550]: I1213 01:31:17.786867 2550 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.786991 kubelet[2550]: I1213 01:31:17.786875 2550 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.786991 kubelet[2550]: I1213 01:31:17.786884 2550 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.786991 kubelet[2550]: I1213 01:31:17.786897 2550 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.786991 kubelet[2550]: I1213 01:31:17.786907 2550 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5006f994-9115-4f8a-b832-3f266baa2c01-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.786991 kubelet[2550]: I1213 01:31:17.786916 2550 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.786991 kubelet[2550]: I1213 01:31:17.786925 2550 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.787179 kubelet[2550]: I1213 01:31:17.786934 2550 
reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.787179 kubelet[2550]: I1213 01:31:17.786942 2550 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb6672be-00ec-4007-9f40-aeadb88f6836-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.787179 kubelet[2550]: I1213 01:31:17.786953 2550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pxkk5\" (UniqueName: \"kubernetes.io/projected/5006f994-9115-4f8a-b832-3f266baa2c01-kube-api-access-pxkk5\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.787179 kubelet[2550]: I1213 01:31:17.786962 2550 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb6672be-00ec-4007-9f40-aeadb88f6836-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:17.787179 kubelet[2550]: I1213 01:31:17.786971 2550 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb6672be-00ec-4007-9f40-aeadb88f6836-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 01:31:18.222753 kubelet[2550]: E1213 01:31:18.222345 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:18.338865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de43f047690bc1f3732ae3f9a2c9d557767759cdd4d8a45a3753d492e9ed2183-rootfs.mount: Deactivated successfully. Dec 13 01:31:18.338964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ae9cbe59e63e4eda7ca9abbd0d27ee1a7308f9630d2f9b1a630a9277bc98404-rootfs.mount: Deactivated successfully. Dec 13 01:31:18.339012 systemd[1]: var-lib-kubelet-pods-5006f994\x2d9115\x2d4f8a\x2db832\x2d3f266baa2c01-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpxkk5.mount: Deactivated successfully. Dec 13 01:31:18.339060 systemd[1]: var-lib-kubelet-pods-eb6672be\x2d00ec\x2d4007\x2d9f40\x2daeadb88f6836-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5ntwt.mount: Deactivated successfully. Dec 13 01:31:18.339126 systemd[1]: var-lib-kubelet-pods-eb6672be\x2d00ec\x2d4007\x2d9f40\x2daeadb88f6836-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:31:18.339176 systemd[1]: var-lib-kubelet-pods-eb6672be\x2d00ec\x2d4007\x2d9f40\x2daeadb88f6836-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:31:18.432552 systemd[1]: Removed slice kubepods-burstable-podeb6672be_00ec_4007_9f40_aeadb88f6836.slice - libcontainer container kubepods-burstable-podeb6672be_00ec_4007_9f40_aeadb88f6836.slice. Dec 13 01:31:18.432954 systemd[1]: kubepods-burstable-podeb6672be_00ec_4007_9f40_aeadb88f6836.slice: Consumed 6.450s CPU time. Dec 13 01:31:18.434210 kubelet[2550]: I1213 01:31:18.434178 2550 scope.go:117] "RemoveContainer" containerID="a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685" Dec 13 01:31:18.434823 systemd[1]: Removed slice kubepods-besteffort-pod5006f994_9115_4f8a_b832_3f266baa2c01.slice - libcontainer container kubepods-besteffort-pod5006f994_9115_4f8a_b832_3f266baa2c01.slice. 
Dec 13 01:31:18.436460 containerd[1440]: time="2024-12-13T01:31:18.436270835Z" level=info msg="RemoveContainer for \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\"" Dec 13 01:31:18.443224 containerd[1440]: time="2024-12-13T01:31:18.443178854Z" level=info msg="RemoveContainer for \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\" returns successfully" Dec 13 01:31:18.443506 kubelet[2550]: I1213 01:31:18.443475 2550 scope.go:117] "RemoveContainer" containerID="004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538" Dec 13 01:31:18.444520 containerd[1440]: time="2024-12-13T01:31:18.444481658Z" level=info msg="RemoveContainer for \"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538\"" Dec 13 01:31:18.451838 containerd[1440]: time="2024-12-13T01:31:18.451434357Z" level=info msg="RemoveContainer for \"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538\" returns successfully" Dec 13 01:31:18.451932 kubelet[2550]: I1213 01:31:18.451585 2550 scope.go:117] "RemoveContainer" containerID="6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6" Dec 13 01:31:18.453000 containerd[1440]: time="2024-12-13T01:31:18.452969761Z" level=info msg="RemoveContainer for \"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6\"" Dec 13 01:31:18.456696 containerd[1440]: time="2024-12-13T01:31:18.456634611Z" level=info msg="RemoveContainer for \"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6\" returns successfully" Dec 13 01:31:18.456918 kubelet[2550]: I1213 01:31:18.456880 2550 scope.go:117] "RemoveContainer" containerID="9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f" Dec 13 01:31:18.458217 containerd[1440]: time="2024-12-13T01:31:18.458152896Z" level=info msg="RemoveContainer for \"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f\"" Dec 13 01:31:18.461890 containerd[1440]: time="2024-12-13T01:31:18.461855866Z" level=info msg="RemoveContainer for \"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f\" returns successfully" Dec 13 01:31:18.462115 kubelet[2550]: I1213 01:31:18.462027 2550 scope.go:117] "RemoveContainer" containerID="f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3" Dec 13 01:31:18.464179 containerd[1440]: time="2024-12-13T01:31:18.464105632Z" level=info msg="RemoveContainer for \"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3\"" Dec 13 01:31:18.468112 containerd[1440]: time="2024-12-13T01:31:18.468068043Z" level=info msg="RemoveContainer for \"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3\" returns successfully" Dec 13 01:31:18.468309 kubelet[2550]: I1213 01:31:18.468261 2550 scope.go:117] "RemoveContainer" containerID="a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685" Dec 13 01:31:18.468540 containerd[1440]: time="2024-12-13T01:31:18.468486764Z" level=error msg="ContainerStatus for \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\": not found" Dec 13 01:31:18.470926 kubelet[2550]: E1213 01:31:18.470873 2550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\": not found" 
containerID="a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685" Dec 13 01:31:18.474197 kubelet[2550]: I1213 01:31:18.474086 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685"} err="failed to get container status \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\": rpc error: code = NotFound desc = an error occurred when try to find container \"a461d740d90ab8a1bcef0ad2d7cfc0c37fbec00c202f2c8638b9cbcfed1d1685\": not found" Dec 13 01:31:18.474197 kubelet[2550]: I1213 01:31:18.474118 2550 scope.go:117] "RemoveContainer" containerID="004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538" Dec 13 01:31:18.474526 kubelet[2550]: E1213 01:31:18.474492 2550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538\": not found" containerID="004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538" Dec 13 01:31:18.474555 containerd[1440]: time="2024-12-13T01:31:18.474309540Z" level=error msg="ContainerStatus for \"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538\": not found" Dec 13 01:31:18.475061 kubelet[2550]: I1213 01:31:18.474666 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538"} err="failed to get container status \"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538\": rpc error: code = NotFound desc = an error occurred when try to find container \"004ecf03a71827807afd56fa240cb8fda346ba291c791bea2f7ea550f7171538\": not found" Dec 13 01:31:18.475061 kubelet[2550]: I1213 01:31:18.474690 2550 scope.go:117] "RemoveContainer" containerID="6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6" Dec 13 01:31:18.477277 containerd[1440]: time="2024-12-13T01:31:18.475232102Z" level=error msg="ContainerStatus for \"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6\": not found" Dec 13 01:31:18.477277 containerd[1440]: time="2024-12-13T01:31:18.475686384Z" level=error msg="ContainerStatus for \"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f\": not found" Dec 13 01:31:18.477277 containerd[1440]: time="2024-12-13T01:31:18.475987785Z" level=error msg="ContainerStatus for \"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3\": not found" Dec 13 01:31:18.477277 containerd[1440]: time="2024-12-13T01:31:18.476944547Z" level=info msg="RemoveContainer for \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\"" Dec 13 01:31:18.477439 kubelet[2550]: E1213 01:31:18.475478 2550 remote_runtime.go:432] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6\": not found" containerID="6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6" Dec 13 01:31:18.477439 kubelet[2550]: I1213 01:31:18.475511 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6"} err="failed to get container status \"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d7bf5c63fe9ed30930a657becc76f77c501350adfb09e9abd2e0863fcd0c7b6\": not found" Dec 13 01:31:18.477439 kubelet[2550]: I1213 01:31:18.475523 2550 scope.go:117] "RemoveContainer" containerID="9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f" Dec 13 01:31:18.477439 kubelet[2550]: E1213 01:31:18.475810 2550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f\": not found" containerID="9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f" Dec 13 01:31:18.477439 kubelet[2550]: I1213 01:31:18.475838 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f"} err="failed to get container status \"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d4204c0f00029e7f2ee3a0dbf17b7cc4cba8acf8c4604c62070b908743caf0f\": not found" Dec 13 01:31:18.477439 kubelet[2550]: I1213 01:31:18.475848 2550 scope.go:117] "RemoveContainer" containerID="f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3" Dec 13 01:31:18.477571 kubelet[2550]: E1213 01:31:18.476100 2550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3\": not found" containerID="f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3" Dec 13 01:31:18.477571 kubelet[2550]: I1213 01:31:18.476126 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3"} err="failed to get container status \"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f00c8251c68f3ab3ee268b07dd48861dfd9285cc8785aa2f07dcb97eae44c9b3\": not found" Dec 13 01:31:18.477571 kubelet[2550]: I1213 01:31:18.476137 2550 scope.go:117] "RemoveContainer" containerID="44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318" Dec 13 01:31:18.479175 containerd[1440]: time="2024-12-13T01:31:18.479142393Z" level=info msg="RemoveContainer for \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\" returns successfully" Dec 13 01:31:18.479359 kubelet[2550]: I1213 01:31:18.479335 2550 scope.go:117] "RemoveContainer" containerID="44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318" Dec 13 01:31:18.479648 containerd[1440]: time="2024-12-13T01:31:18.479610514Z" level=error msg="ContainerStatus for 
\"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\": not found" Dec 13 01:31:18.479757 kubelet[2550]: E1213 01:31:18.479733 2550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\": not found" containerID="44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318" Dec 13 01:31:18.479791 kubelet[2550]: I1213 01:31:18.479764 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318"} err="failed to get container status \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\": rpc error: code = NotFound desc = an error occurred when try to find container \"44c5e0a00273b1681b20d5df51979414d07918cc4b5921d19e2358e698795318\": not found" Dec 13 01:31:19.227468 kubelet[2550]: I1213 01:31:19.227071 2550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5006f994-9115-4f8a-b832-3f266baa2c01" path="/var/lib/kubelet/pods/5006f994-9115-4f8a-b832-3f266baa2c01/volumes" Dec 13 01:31:19.227770 kubelet[2550]: I1213 01:31:19.227524 2550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="eb6672be-00ec-4007-9f40-aeadb88f6836" path="/var/lib/kubelet/pods/eb6672be-00ec-4007-9f40-aeadb88f6836/volumes" Dec 13 01:31:19.298470 sshd[4201]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:19.308892 systemd[1]: sshd@23-10.0.0.66:22-10.0.0.1:45988.service: Deactivated successfully. Dec 13 01:31:19.310568 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:31:19.310869 systemd[1]: session-24.scope: Consumed 1.351s CPU time. Dec 13 01:31:19.313478 systemd-logind[1422]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:31:19.314532 systemd[1]: Started sshd@24-10.0.0.66:22-10.0.0.1:45994.service - OpenSSH per-connection server daemon (10.0.0.1:45994). Dec 13 01:31:19.316727 systemd-logind[1422]: Removed session 24. Dec 13 01:31:19.351366 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 45994 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:31:19.352878 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:19.358981 systemd-logind[1422]: New session 25 of user core. Dec 13 01:31:19.368536 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:31:19.894144 sshd[4361]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:19.902461 systemd[1]: sshd@24-10.0.0.66:22-10.0.0.1:45994.service: Deactivated successfully. Dec 13 01:31:19.907007 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:31:19.909556 systemd-logind[1422]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:31:19.919851 systemd[1]: Started sshd@25-10.0.0.66:22-10.0.0.1:46010.service - OpenSSH per-connection server daemon (10.0.0.1:46010). Dec 13 01:31:19.922498 systemd-logind[1422]: Removed session 25. 
Dec 13 01:31:19.928298 kubelet[2550]: I1213 01:31:19.928233 2550 topology_manager.go:215] "Topology Admit Handler" podUID="a7d9b352-e218-4b9e-bac8-8724a7a04881" podNamespace="kube-system" podName="cilium-f9r99" Dec 13 01:31:19.928298 kubelet[2550]: E1213 01:31:19.928301 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb6672be-00ec-4007-9f40-aeadb88f6836" containerName="mount-cgroup" Dec 13 01:31:19.928298 kubelet[2550]: E1213 01:31:19.928312 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb6672be-00ec-4007-9f40-aeadb88f6836" containerName="mount-bpf-fs" Dec 13 01:31:19.928479 kubelet[2550]: E1213 01:31:19.928320 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb6672be-00ec-4007-9f40-aeadb88f6836" containerName="clean-cilium-state" Dec 13 01:31:19.928479 kubelet[2550]: E1213 01:31:19.928327 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb6672be-00ec-4007-9f40-aeadb88f6836" containerName="cilium-agent" Dec 13 01:31:19.928479 kubelet[2550]: E1213 01:31:19.928335 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb6672be-00ec-4007-9f40-aeadb88f6836" containerName="apply-sysctl-overwrites" Dec 13 01:31:19.928479 kubelet[2550]: E1213 01:31:19.928342 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5006f994-9115-4f8a-b832-3f266baa2c01" containerName="cilium-operator" Dec 13 01:31:19.928479 kubelet[2550]: I1213 01:31:19.928369 2550 memory_manager.go:354] "RemoveStaleState removing state" podUID="5006f994-9115-4f8a-b832-3f266baa2c01" containerName="cilium-operator" Dec 13 01:31:19.928479 kubelet[2550]: I1213 01:31:19.928376 2550 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6672be-00ec-4007-9f40-aeadb88f6836" containerName="cilium-agent" Dec 13 01:31:19.938232 systemd[1]: Created slice kubepods-burstable-poda7d9b352_e218_4b9e_bac8_8724a7a04881.slice - libcontainer container kubepods-burstable-poda7d9b352_e218_4b9e_bac8_8724a7a04881.slice. Dec 13 01:31:19.961170 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 46010 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:31:19.962564 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:19.966825 systemd-logind[1422]: New session 26 of user core. Dec 13 01:31:19.979562 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 13 01:31:19.999534 kubelet[2550]: I1213 01:31:19.999500 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7d9b352-e218-4b9e-bac8-8724a7a04881-bpf-maps\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999613 kubelet[2550]: I1213 01:31:19.999557 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7d9b352-e218-4b9e-bac8-8724a7a04881-hostproc\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999613 kubelet[2550]: I1213 01:31:19.999581 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a7d9b352-e218-4b9e-bac8-8724a7a04881-cilium-ipsec-secrets\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999613 kubelet[2550]: I1213 01:31:19.999602 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7d9b352-e218-4b9e-bac8-8724a7a04881-xtables-lock\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999695 kubelet[2550]: I1213 01:31:19.999621 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd482\" (UniqueName: \"kubernetes.io/projected/a7d9b352-e218-4b9e-bac8-8724a7a04881-kube-api-access-rd482\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999695 kubelet[2550]: I1213 01:31:19.999640 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7d9b352-e218-4b9e-bac8-8724a7a04881-hubble-tls\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999695 kubelet[2550]: I1213 01:31:19.999670 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7d9b352-e218-4b9e-bac8-8724a7a04881-cilium-cgroup\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999695 kubelet[2550]: I1213 01:31:19.999692 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7d9b352-e218-4b9e-bac8-8724a7a04881-lib-modules\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999781 kubelet[2550]: I1213 01:31:19.999712 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7d9b352-e218-4b9e-bac8-8724a7a04881-host-proc-sys-net\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999781 kubelet[2550]: I1213 01:31:19.999730 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a7d9b352-e218-4b9e-bac8-8724a7a04881-etc-cni-netd\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999781 kubelet[2550]: I1213 01:31:19.999748 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7d9b352-e218-4b9e-bac8-8724a7a04881-cni-path\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999781 kubelet[2550]: I1213 01:31:19.999766 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7d9b352-e218-4b9e-bac8-8724a7a04881-clustermesh-secrets\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999862 kubelet[2550]: I1213 01:31:19.999787 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7d9b352-e218-4b9e-bac8-8724a7a04881-cilium-config-path\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999862 kubelet[2550]: I1213 01:31:19.999805 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7d9b352-e218-4b9e-bac8-8724a7a04881-host-proc-sys-kernel\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:19.999862 kubelet[2550]: I1213 01:31:19.999823 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7d9b352-e218-4b9e-bac8-8724a7a04881-cilium-run\") pod \"cilium-f9r99\" (UID: \"a7d9b352-e218-4b9e-bac8-8724a7a04881\") " pod="kube-system/cilium-f9r99" Dec 13 01:31:20.028774 sshd[4374]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:20.039739 systemd[1]: sshd@25-10.0.0.66:22-10.0.0.1:46010.service: Deactivated successfully. Dec 13 01:31:20.041215 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:31:20.043684 systemd-logind[1422]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:31:20.043950 systemd[1]: Started sshd@26-10.0.0.66:22-10.0.0.1:46020.service - OpenSSH per-connection server daemon (10.0.0.1:46020). Dec 13 01:31:20.045520 systemd-logind[1422]: Removed session 26. Dec 13 01:31:20.077542 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 46020 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:31:20.078717 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:20.082431 systemd-logind[1422]: New session 27 of user core. Dec 13 01:31:20.100605 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 13 01:31:20.241758 kubelet[2550]: E1213 01:31:20.240823 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:20.242094 containerd[1440]: time="2024-12-13T01:31:20.241310699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f9r99,Uid:a7d9b352-e218-4b9e-bac8-8724a7a04881,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:20.258137 containerd[1440]: time="2024-12-13T01:31:20.258046482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:20.258137 containerd[1440]: time="2024-12-13T01:31:20.258108963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:20.258137 containerd[1440]: time="2024-12-13T01:31:20.258123563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:20.258299 containerd[1440]: time="2024-12-13T01:31:20.258221643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:20.279201 systemd[1]: Started cri-containerd-55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422.scope - libcontainer container 55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422. Dec 13 01:31:20.296122 containerd[1440]: time="2024-12-13T01:31:20.296075427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f9r99,Uid:a7d9b352-e218-4b9e-bac8-8724a7a04881,Namespace:kube-system,Attempt:0,} returns sandbox id \"55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422\"" Dec 13 01:31:20.297226 kubelet[2550]: E1213 01:31:20.296762 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:20.300591 containerd[1440]: time="2024-12-13T01:31:20.300556124Z" level=info msg="CreateContainer within sandbox \"55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:31:20.309944 containerd[1440]: time="2024-12-13T01:31:20.309901159Z" level=info msg="CreateContainer within sandbox \"55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a49ab1aa61df988473e8aef6be9ca21f94c5990c34f58d01129efc3917a22f53\"" Dec 13 01:31:20.310724 containerd[1440]: time="2024-12-13T01:31:20.310689202Z" level=info msg="StartContainer for \"a49ab1aa61df988473e8aef6be9ca21f94c5990c34f58d01129efc3917a22f53\"" Dec 13 01:31:20.337597 systemd[1]: Started cri-containerd-a49ab1aa61df988473e8aef6be9ca21f94c5990c34f58d01129efc3917a22f53.scope - libcontainer container a49ab1aa61df988473e8aef6be9ca21f94c5990c34f58d01129efc3917a22f53. Dec 13 01:31:20.357859 containerd[1440]: time="2024-12-13T01:31:20.357815341Z" level=info msg="StartContainer for \"a49ab1aa61df988473e8aef6be9ca21f94c5990c34f58d01129efc3917a22f53\" returns successfully" Dec 13 01:31:20.388849 systemd[1]: cri-containerd-a49ab1aa61df988473e8aef6be9ca21f94c5990c34f58d01129efc3917a22f53.scope: Deactivated successfully. 
Dec 13 01:31:20.422464 containerd[1440]: time="2024-12-13T01:31:20.422327106Z" level=info msg="shim disconnected" id=a49ab1aa61df988473e8aef6be9ca21f94c5990c34f58d01129efc3917a22f53 namespace=k8s.io Dec 13 01:31:20.422626 containerd[1440]: time="2024-12-13T01:31:20.422478587Z" level=warning msg="cleaning up after shim disconnected" id=a49ab1aa61df988473e8aef6be9ca21f94c5990c34f58d01129efc3917a22f53 namespace=k8s.io Dec 13 01:31:20.422626 containerd[1440]: time="2024-12-13T01:31:20.422497267Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:20.437691 kubelet[2550]: E1213 01:31:20.437650 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:20.440314 containerd[1440]: time="2024-12-13T01:31:20.440260734Z" level=info msg="CreateContainer within sandbox \"55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:31:20.450272 containerd[1440]: time="2024-12-13T01:31:20.450197932Z" level=info msg="CreateContainer within sandbox \"55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0791c39b38e643cb150d1712fd400b4f4136838cf2729c5a1f4405f0f8e44ff1\"" Dec 13 01:31:20.451011 containerd[1440]: time="2024-12-13T01:31:20.450919255Z" level=info msg="StartContainer for \"0791c39b38e643cb150d1712fd400b4f4136838cf2729c5a1f4405f0f8e44ff1\"" Dec 13 01:31:20.481568 systemd[1]: Started cri-containerd-0791c39b38e643cb150d1712fd400b4f4136838cf2729c5a1f4405f0f8e44ff1.scope - libcontainer container 0791c39b38e643cb150d1712fd400b4f4136838cf2729c5a1f4405f0f8e44ff1. Dec 13 01:31:20.500190 containerd[1440]: time="2024-12-13T01:31:20.499991641Z" level=info msg="StartContainer for \"0791c39b38e643cb150d1712fd400b4f4136838cf2729c5a1f4405f0f8e44ff1\" returns successfully" Dec 13 01:31:20.507802 systemd[1]: cri-containerd-0791c39b38e643cb150d1712fd400b4f4136838cf2729c5a1f4405f0f8e44ff1.scope: Deactivated successfully. 
Dec 13 01:31:20.528726 containerd[1440]: time="2024-12-13T01:31:20.528674710Z" level=info msg="shim disconnected" id=0791c39b38e643cb150d1712fd400b4f4136838cf2729c5a1f4405f0f8e44ff1 namespace=k8s.io
Dec 13 01:31:20.528726 containerd[1440]: time="2024-12-13T01:31:20.528725950Z" level=warning msg="cleaning up after shim disconnected" id=0791c39b38e643cb150d1712fd400b4f4136838cf2729c5a1f4405f0f8e44ff1 namespace=k8s.io
Dec 13 01:31:20.528726 containerd[1440]: time="2024-12-13T01:31:20.528734070Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:31:21.441175 kubelet[2550]: E1213 01:31:21.441132 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:31:21.445035 containerd[1440]: time="2024-12-13T01:31:21.444991290Z" level=info msg="CreateContainer within sandbox \"55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:31:21.462780 containerd[1440]: time="2024-12-13T01:31:21.462730806Z" level=info msg="CreateContainer within sandbox \"55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d704b45ccfb22f8ad5b0e97c6de9a9a0122c342301919dfc305455d9491c683d\""
Dec 13 01:31:21.463484 containerd[1440]: time="2024-12-13T01:31:21.463208608Z" level=info msg="StartContainer for \"d704b45ccfb22f8ad5b0e97c6de9a9a0122c342301919dfc305455d9491c683d\""
Dec 13 01:31:21.495652 systemd[1]: Started cri-containerd-d704b45ccfb22f8ad5b0e97c6de9a9a0122c342301919dfc305455d9491c683d.scope - libcontainer container d704b45ccfb22f8ad5b0e97c6de9a9a0122c342301919dfc305455d9491c683d.
Dec 13 01:31:21.518189 systemd[1]: cri-containerd-d704b45ccfb22f8ad5b0e97c6de9a9a0122c342301919dfc305455d9491c683d.scope: Deactivated successfully.
Dec 13 01:31:21.519315 containerd[1440]: time="2024-12-13T01:31:21.518655647Z" level=info msg="StartContainer for \"d704b45ccfb22f8ad5b0e97c6de9a9a0122c342301919dfc305455d9491c683d\" returns successfully"
Dec 13 01:31:21.537300 containerd[1440]: time="2024-12-13T01:31:21.537245447Z" level=info msg="shim disconnected" id=d704b45ccfb22f8ad5b0e97c6de9a9a0122c342301919dfc305455d9491c683d namespace=k8s.io
Dec 13 01:31:21.537300 containerd[1440]: time="2024-12-13T01:31:21.537300087Z" level=warning msg="cleaning up after shim disconnected" id=d704b45ccfb22f8ad5b0e97c6de9a9a0122c342301919dfc305455d9491c683d namespace=k8s.io
Dec 13 01:31:21.537482 containerd[1440]: time="2024-12-13T01:31:21.537310367Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:31:22.105958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d704b45ccfb22f8ad5b0e97c6de9a9a0122c342301919dfc305455d9491c683d-rootfs.mount: Deactivated successfully.
Dec 13 01:31:22.284037 kubelet[2550]: E1213 01:31:22.284011 2550 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:31:22.445350 kubelet[2550]: E1213 01:31:22.445102 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:31:22.447375 containerd[1440]: time="2024-12-13T01:31:22.447103893Z" level=info msg="CreateContainer within sandbox \"55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:31:22.457922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749564677.mount: Deactivated successfully.
Dec 13 01:31:22.459688 containerd[1440]: time="2024-12-13T01:31:22.459639593Z" level=info msg="CreateContainer within sandbox \"55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"56215510edb140e880cd8634d56939700f94b051f3233851977f5fe5bc218483\""
Dec 13 01:31:22.460146 containerd[1440]: time="2024-12-13T01:31:22.460120755Z" level=info msg="StartContainer for \"56215510edb140e880cd8634d56939700f94b051f3233851977f5fe5bc218483\""
Dec 13 01:31:22.487638 systemd[1]: Started cri-containerd-56215510edb140e880cd8634d56939700f94b051f3233851977f5fe5bc218483.scope - libcontainer container 56215510edb140e880cd8634d56939700f94b051f3233851977f5fe5bc218483.
Dec 13 01:31:22.507361 systemd[1]: cri-containerd-56215510edb140e880cd8634d56939700f94b051f3233851977f5fe5bc218483.scope: Deactivated successfully.
Dec 13 01:31:22.508368 containerd[1440]: time="2024-12-13T01:31:22.508314906Z" level=info msg="StartContainer for \"56215510edb140e880cd8634d56939700f94b051f3233851977f5fe5bc218483\" returns successfully"
Dec 13 01:31:22.526255 containerd[1440]: time="2024-12-13T01:31:22.526198791Z" level=info msg="shim disconnected" id=56215510edb140e880cd8634d56939700f94b051f3233851977f5fe5bc218483 namespace=k8s.io
Dec 13 01:31:22.526255 containerd[1440]: time="2024-12-13T01:31:22.526254231Z" level=warning msg="cleaning up after shim disconnected" id=56215510edb140e880cd8634d56939700f94b051f3233851977f5fe5bc218483 namespace=k8s.io
Dec 13 01:31:22.526428 containerd[1440]: time="2024-12-13T01:31:22.526263992Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:31:23.105627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56215510edb140e880cd8634d56939700f94b051f3233851977f5fe5bc218483-rootfs.mount: Deactivated successfully.
Dec 13 01:31:23.449157 kubelet[2550]: E1213 01:31:23.448925 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:31:23.452325 containerd[1440]: time="2024-12-13T01:31:23.452269392Z" level=info msg="CreateContainer within sandbox \"55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:31:23.466219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1462258663.mount: Deactivated successfully.
Dec 13 01:31:23.467817 containerd[1440]: time="2024-12-13T01:31:23.466391626Z" level=info msg="CreateContainer within sandbox \"55adc9bf80842a395a2f49f097b4a5aaf8339d48d2a4ce8689560c684bc4b422\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"df1b46624fc18557b1af14d500f8605f2c03b6df244fc292f6c0584c8674bbab\""
Dec 13 01:31:23.468630 containerd[1440]: time="2024-12-13T01:31:23.468597918Z" level=info msg="StartContainer for \"df1b46624fc18557b1af14d500f8605f2c03b6df244fc292f6c0584c8674bbab\""
Dec 13 01:31:23.498555 systemd[1]: Started cri-containerd-df1b46624fc18557b1af14d500f8605f2c03b6df244fc292f6c0584c8674bbab.scope - libcontainer container df1b46624fc18557b1af14d500f8605f2c03b6df244fc292f6c0584c8674bbab.
Dec 13 01:31:23.520740 containerd[1440]: time="2024-12-13T01:31:23.520697431Z" level=info msg="StartContainer for \"df1b46624fc18557b1af14d500f8605f2c03b6df244fc292f6c0584c8674bbab\" returns successfully"
Dec 13 01:31:23.763452 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 13 01:31:24.454857 kubelet[2550]: E1213 01:31:24.454568 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:31:24.469960 kubelet[2550]: I1213 01:31:24.469913 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-f9r99" podStartSLOduration=5.46987431 podStartE2EDuration="5.46987431s" podCreationTimestamp="2024-12-13 01:31:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:31:24.469732029 +0000 UTC m=+87.333243097" watchObservedRunningTime="2024-12-13 01:31:24.46987431 +0000 UTC m=+87.333385378"
Dec 13 01:31:26.242469 kubelet[2550]: E1213 01:31:26.242430 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:31:26.605764 systemd-networkd[1382]: lxc_health: Link UP
Dec 13 01:31:26.610400 systemd-networkd[1382]: lxc_health: Gained carrier
Dec 13 01:31:28.195609 systemd-networkd[1382]: lxc_health: Gained IPv6LL
Dec 13 01:31:28.246924 kubelet[2550]: E1213 01:31:28.246882 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:31:28.462703 kubelet[2550]: E1213 01:31:28.462562 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:31:29.223511 kubelet[2550]: E1213 01:31:29.223442 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:31:30.741199 kubelet[2550]: E1213 01:31:30.741144 2550 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50950->127.0.0.1:43209: write tcp 127.0.0.1:50950->127.0.0.1:43209: write: broken pipe
Dec 13 01:31:32.804782 systemd[1]: run-containerd-runc-k8s.io-df1b46624fc18557b1af14d500f8605f2c03b6df244fc292f6c0584c8674bbab-runc.G6x8VM.mount: Deactivated successfully.
Dec 13 01:31:32.849991 sshd[4382]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:32.853391 systemd[1]: sshd@26-10.0.0.66:22-10.0.0.1:46020.service: Deactivated successfully.
Dec 13 01:31:32.855179 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:31:32.856595 systemd-logind[1422]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:31:32.857699 systemd-logind[1422]: Removed session 27.