Dec 13 01:16:38.876792 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 01:16:38.876812 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:16:38.876822 kernel: KASLR enabled
Dec 13 01:16:38.876828 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:16:38.876833 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Dec 13 01:16:38.876839 kernel: random: crng init done
Dec 13 01:16:38.876846 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:16:38.876852 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Dec 13 01:16:38.876858 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:16:38.876865 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:38.876871 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:38.876877 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:38.876883 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:38.876889 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:38.876896 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:38.876904 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:38.876910 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:38.876916 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:38.876923 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 01:16:38.876929 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:16:38.876946 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:16:38.876952 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Dec 13 01:16:38.876959 kernel: Zone ranges:
Dec 13 01:16:38.876965 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:16:38.876972 kernel: DMA32 empty
Dec 13 01:16:38.876979 kernel: Normal empty
Dec 13 01:16:38.876986 kernel: Movable zone start for each node
Dec 13 01:16:38.876992 kernel: Early memory node ranges
Dec 13 01:16:38.876999 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Dec 13 01:16:38.877005 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Dec 13 01:16:38.877012 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Dec 13 01:16:38.877018 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 01:16:38.877024 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 01:16:38.877030 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 01:16:38.877037 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 01:16:38.877043 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:16:38.877049 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 01:16:38.877056 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:16:38.877062 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 01:16:38.877069 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:16:38.877077 kernel: psci: Trusted OS migration not required
Dec 13 01:16:38.877084 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:16:38.877091 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 01:16:38.877099 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:16:38.877105 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:16:38.877113 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 01:16:38.877119 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:16:38.877126 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:16:38.877132 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 01:16:38.877139 kernel: CPU features: detected: Spectre-v4
Dec 13 01:16:38.877145 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:16:38.877152 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 01:16:38.877159 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 01:16:38.877167 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 01:16:38.877174 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 01:16:38.877180 kernel: alternatives: applying boot alternatives
Dec 13 01:16:38.877188 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:16:38.877195 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:16:38.877202 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:16:38.877209 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:16:38.877215 kernel: Fallback order for Node 0: 0
Dec 13 01:16:38.877222 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 01:16:38.877228 kernel: Policy zone: DMA
Dec 13 01:16:38.877235 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:16:38.877243 kernel: software IO TLB: area num 4.
Dec 13 01:16:38.877250 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Dec 13 01:16:38.877257 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Dec 13 01:16:38.877264 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:16:38.877271 kernel: trace event string verifier disabled
Dec 13 01:16:38.877277 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:16:38.877284 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:16:38.877304 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:16:38.877311 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:16:38.877318 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:16:38.877325 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:16:38.877332 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:16:38.877340 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:16:38.877347 kernel: GICv3: 256 SPIs implemented
Dec 13 01:16:38.877353 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:16:38.877360 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:16:38.877366 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 01:16:38.877373 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 01:16:38.877380 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 01:16:38.877386 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:16:38.877393 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:16:38.877400 kernel: GICv3: using LPI property table @0x00000000400f0000
Dec 13 01:16:38.877407 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Dec 13 01:16:38.877416 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:16:38.877423 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:38.877429 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 01:16:38.877436 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 01:16:38.877443 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 01:16:38.877450 kernel: arm-pv: using stolen time PV
Dec 13 01:16:38.877465 kernel: Console: colour dummy device 80x25
Dec 13 01:16:38.877472 kernel: ACPI: Core revision 20230628
Dec 13 01:16:38.877479 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 01:16:38.877487 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:16:38.877495 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:16:38.877502 kernel: landlock: Up and running.
Dec 13 01:16:38.877509 kernel: SELinux: Initializing.
Dec 13 01:16:38.877515 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:16:38.877522 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:16:38.877529 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:38.877536 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:38.877543 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:16:38.877550 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:16:38.877558 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 01:16:38.877565 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 01:16:38.877571 kernel: Remapping and enabling EFI services.
Dec 13 01:16:38.877578 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:16:38.877585 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:16:38.877592 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 01:16:38.877599 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Dec 13 01:16:38.877606 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:38.877612 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 01:16:38.877619 kernel: Detected PIPT I-cache on CPU2
Dec 13 01:16:38.877627 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 01:16:38.877634 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Dec 13 01:16:38.877646 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:38.877654 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 01:16:38.877661 kernel: Detected PIPT I-cache on CPU3
Dec 13 01:16:38.877668 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 01:16:38.877675 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Dec 13 01:16:38.877683 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:38.877690 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 01:16:38.877698 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:16:38.877705 kernel: SMP: Total of 4 processors activated.
Dec 13 01:16:38.877712 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:16:38.877720 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 01:16:38.877727 kernel: CPU features: detected: Common not Private translations
Dec 13 01:16:38.877734 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:16:38.877742 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 01:16:38.877749 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 01:16:38.877757 kernel: CPU features: detected: LSE atomic instructions
Dec 13 01:16:38.877764 kernel: CPU features: detected: Privileged Access Never
Dec 13 01:16:38.877772 kernel: CPU features: detected: RAS Extension Support
Dec 13 01:16:38.877779 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 01:16:38.877786 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:16:38.877793 kernel: alternatives: applying system-wide alternatives
Dec 13 01:16:38.877800 kernel: devtmpfs: initialized
Dec 13 01:16:38.877808 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:16:38.877815 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:16:38.877823 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:16:38.877830 kernel: SMBIOS 3.0.0 present.
Dec 13 01:16:38.877838 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Dec 13 01:16:38.877845 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:16:38.877852 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:16:38.877859 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:16:38.877867 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:16:38.877874 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:16:38.877881 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Dec 13 01:16:38.877890 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:16:38.877897 kernel: cpuidle: using governor menu
Dec 13 01:16:38.877904 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:16:38.877911 kernel: ASID allocator initialised with 32768 entries
Dec 13 01:16:38.877919 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:16:38.877926 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:16:38.877937 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 01:16:38.877945 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 01:16:38.877952 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:16:38.877961 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:16:38.877968 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:16:38.877975 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:16:38.877982 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:16:38.877989 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:16:38.877996 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:16:38.878004 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:16:38.878011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:16:38.878018 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:16:38.878026 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:16:38.878033 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:16:38.878040 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:16:38.878047 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:16:38.878060 kernel: ACPI: Interpreter enabled
Dec 13 01:16:38.878067 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:16:38.878074 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:16:38.878081 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 01:16:38.878088 kernel: printk: console [ttyAMA0] enabled
Dec 13 01:16:38.878096 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:16:38.878224 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:16:38.878296 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:16:38.878361 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:16:38.878425 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 01:16:38.878504 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 01:16:38.878514 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 01:16:38.878525 kernel: PCI host bridge to bus 0000:00
Dec 13 01:16:38.878596 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 01:16:38.878657 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:16:38.878716 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 01:16:38.878782 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:16:38.878861 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 01:16:38.878950 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:16:38.879035 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 01:16:38.879119 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 01:16:38.879187 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:16:38.879255 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:16:38.879322 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 01:16:38.879388 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 01:16:38.879448 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 01:16:38.879525 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:16:38.879583 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 01:16:38.879592 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:16:38.879600 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:16:38.879607 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:16:38.879614 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:16:38.879621 kernel: iommu: Default domain type: Translated
Dec 13 01:16:38.879629 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:16:38.879638 kernel: efivars: Registered efivars operations
Dec 13 01:16:38.879645 kernel: vgaarb: loaded
Dec 13 01:16:38.879653 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:16:38.879660 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:16:38.879667 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:16:38.879675 kernel: pnp: PnP ACPI init
Dec 13 01:16:38.879753 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 01:16:38.879763 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:16:38.879772 kernel: NET: Registered PF_INET protocol family
Dec 13 01:16:38.879780 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:16:38.879787 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:16:38.879794 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:16:38.879801 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:16:38.879808 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:16:38.879816 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:16:38.879823 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:16:38.879830 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:16:38.879838 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:16:38.879846 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:16:38.879853 kernel: kvm [1]: HYP mode not available
Dec 13 01:16:38.879860 kernel: Initialise system trusted keyrings
Dec 13 01:16:38.879867 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:16:38.879874 kernel: Key type asymmetric registered
Dec 13 01:16:38.879881 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:16:38.879888 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:16:38.879896 kernel: io scheduler mq-deadline registered
Dec 13 01:16:38.879904 kernel: io scheduler kyber registered
Dec 13 01:16:38.879911 kernel: io scheduler bfq registered
Dec 13 01:16:38.879919 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:16:38.879926 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:16:38.879940 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:16:38.880007 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 01:16:38.880017 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:16:38.880024 kernel: thunder_xcv, ver 1.0
Dec 13 01:16:38.880031 kernel: thunder_bgx, ver 1.0
Dec 13 01:16:38.880040 kernel: nicpf, ver 1.0
Dec 13 01:16:38.880047 kernel: nicvf, ver 1.0
Dec 13 01:16:38.880120 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:16:38.880182 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:16:38 UTC (1734052598)
Dec 13 01:16:38.880192 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:16:38.880199 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 01:16:38.880207 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:16:38.880214 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:16:38.880223 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:16:38.880230 kernel: Segment Routing with IPv6
Dec 13 01:16:38.880237 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:16:38.880244 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:16:38.880251 kernel: Key type dns_resolver registered
Dec 13 01:16:38.880258 kernel: registered taskstats version 1
Dec 13 01:16:38.880265 kernel: Loading compiled-in X.509 certificates
Dec 13 01:16:38.880273 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:16:38.880280 kernel: Key type .fscrypt registered
Dec 13 01:16:38.880288 kernel: Key type fscrypt-provisioning registered
Dec 13 01:16:38.880296 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:16:38.880303 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:16:38.880310 kernel: ima: No architecture policies found
Dec 13 01:16:38.880317 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:16:38.880324 kernel: clk: Disabling unused clocks
Dec 13 01:16:38.880331 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:16:38.880338 kernel: Run /init as init process
Dec 13 01:16:38.880345 kernel: with arguments:
Dec 13 01:16:38.880353 kernel: /init
Dec 13 01:16:38.880360 kernel: with environment:
Dec 13 01:16:38.880367 kernel: HOME=/
Dec 13 01:16:38.880374 kernel: TERM=linux
Dec 13 01:16:38.880381 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:16:38.880390 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:16:38.880399 systemd[1]: Detected virtualization kvm.
Dec 13 01:16:38.880407 systemd[1]: Detected architecture arm64.
Dec 13 01:16:38.880416 systemd[1]: Running in initrd.
Dec 13 01:16:38.880423 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:16:38.880431 systemd[1]: Hostname set to <localhost>.
Dec 13 01:16:38.880439 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:16:38.880473 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:16:38.880481 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:38.880489 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:38.880497 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:16:38.880507 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:16:38.880515 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:16:38.880523 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:16:38.880532 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:16:38.880540 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:16:38.880547 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:38.880557 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:16:38.880565 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:16:38.880572 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:16:38.880580 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:16:38.880588 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:16:38.880595 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:16:38.880603 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:16:38.880611 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:16:38.880619 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:16:38.880628 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:38.880635 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:38.880643 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:38.880651 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:16:38.880659 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:16:38.880667 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:16:38.880674 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:16:38.880682 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:16:38.880690 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:16:38.880699 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:16:38.880706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:38.880714 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:16:38.880722 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:38.880729 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:16:38.880755 systemd-journald[238]: Collecting audit messages is disabled.
Dec 13 01:16:38.880774 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:16:38.880782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:38.880791 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:16:38.880799 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:38.880808 systemd-journald[238]: Journal started
Dec 13 01:16:38.880827 systemd-journald[238]: Runtime Journal (/run/log/journal/e7e541f08e714cd39f8f03276ff5f49d) is 5.9M, max 47.3M, 41.4M free.
Dec 13 01:16:38.869469 systemd-modules-load[239]: Inserted module 'overlay'
Dec 13 01:16:38.883500 kernel: Bridge firewalling registered
Dec 13 01:16:38.883544 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:16:38.882414 systemd-modules-load[239]: Inserted module 'br_netfilter'
Dec 13 01:16:38.884993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:38.886173 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:16:38.890102 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:16:38.891646 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:16:38.893991 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:16:38.901547 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:38.903806 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:38.904963 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:38.918656 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:38.919596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:38.921756 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:16:38.935371 dracut-cmdline[278]: dracut-dracut-053
Dec 13 01:16:38.937882 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:16:38.944401 systemd-resolved[275]: Positive Trust Anchors:
Dec 13 01:16:38.944418 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:16:38.944449 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:16:38.949327 systemd-resolved[275]: Defaulting to hostname 'linux'.
Dec 13 01:16:38.950265 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:38.951920 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:39.006498 kernel: SCSI subsystem initialized
Dec 13 01:16:39.010480 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:16:39.019481 kernel: iscsi: registered transport (tcp)
Dec 13 01:16:39.034496 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:16:39.034546 kernel: QLogic iSCSI HBA Driver
Dec 13 01:16:39.077532 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:16:39.092669 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:16:39.107723 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:16:39.107782 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:16:39.107805 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:16:39.158492 kernel: raid6: neonx8 gen() 15773 MB/s
Dec 13 01:16:39.175472 kernel: raid6: neonx4 gen() 15615 MB/s
Dec 13 01:16:39.192476 kernel: raid6: neonx2 gen() 13253 MB/s
Dec 13 01:16:39.209472 kernel: raid6: neonx1 gen() 10441 MB/s
Dec 13 01:16:39.226475 kernel: raid6: int64x8 gen() 6955 MB/s
Dec 13 01:16:39.243479 kernel: raid6: int64x4 gen() 7338 MB/s
Dec 13 01:16:39.260484 kernel: raid6: int64x2 gen() 6117 MB/s
Dec 13 01:16:39.277486 kernel: raid6: int64x1 gen() 5050 MB/s
Dec 13 01:16:39.277513 kernel: raid6: using algorithm neonx8 gen() 15773 MB/s
Dec 13 01:16:39.294493 kernel: raid6: .... xor() 11879 MB/s, rmw enabled
Dec 13 01:16:39.294521 kernel: raid6: using neon recovery algorithm
Dec 13 01:16:39.299472 kernel: xor: measuring software checksum speed
Dec 13 01:16:39.299488 kernel: 8regs : 19168 MB/sec
Dec 13 01:16:39.299498 kernel: 32regs : 18544 MB/sec
Dec 13 01:16:39.300779 kernel: arm64_neon : 26441 MB/sec
Dec 13 01:16:39.300810 kernel: xor: using function: arm64_neon (26441 MB/sec)
Dec 13 01:16:39.355977 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:16:39.368364 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:16:39.379662 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:39.394306 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Dec 13 01:16:39.397399 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:16:39.414188 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:16:39.425561 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Dec 13 01:16:39.454208 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:16:39.464598 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:16:39.504650 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:39.510632 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:16:39.523066 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:16:39.524957 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:16:39.527588 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:39.530160 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:16:39.537629 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:16:39.548150 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:16:39.560683 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 01:16:39.575403 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:16:39.575532 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:16:39.575544 kernel: GPT:9289727 != 19775487
Dec 13 01:16:39.575554 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:16:39.575570 kernel: GPT:9289727 != 19775487
Dec 13 01:16:39.575579 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:16:39.575590 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:39.574955 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:16:39.575093 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:39.576975 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:39.577845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:16:39.578159 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:39.580421 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:39.593522 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (506)
Dec 13 01:16:39.598834 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (509)
Dec 13 01:16:39.595788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:39.607575 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:16:39.608668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:39.615110 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:16:39.624502 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:16:39.627987 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:16:39.628891 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:16:39.641608 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:16:39.643558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:39.647434 disk-uuid[549]: Primary Header is updated.
Dec 13 01:16:39.647434 disk-uuid[549]: Secondary Entries is updated.
Dec 13 01:16:39.647434 disk-uuid[549]: Secondary Header is updated.
Dec 13 01:16:39.652479 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:39.667654 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:40.666496 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:40.669732 disk-uuid[550]: The operation has completed successfully.
Dec 13 01:16:40.688150 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:16:40.688246 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:16:40.717624 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:16:40.720468 sh[573]: Success
Dec 13 01:16:40.732492 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:16:40.769902 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:16:40.771429 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:16:40.772873 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:16:40.783174 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:16:40.783210 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:40.783221 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:16:40.783232 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:16:40.784468 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:16:40.787810 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:16:40.788646 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:16:40.789330 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:16:40.791666 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:16:40.801696 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:40.801742 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:40.801758 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:40.804515 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:40.811541 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:16:40.813482 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:40.818904 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:16:40.827652 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:16:40.893751 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:16:40.907682 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:16:40.943596 systemd-networkd[760]: lo: Link UP
Dec 13 01:16:40.944297 systemd-networkd[760]: lo: Gained carrier
Dec 13 01:16:40.945706 systemd-networkd[760]: Enumeration completed
Dec 13 01:16:40.945824 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:16:40.946306 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:40.946309 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:16:40.947008 systemd[1]: Reached target network.target - Network.
Dec 13 01:16:40.947163 systemd-networkd[760]: eth0: Link UP
Dec 13 01:16:40.947166 systemd-networkd[760]: eth0: Gained carrier
Dec 13 01:16:40.947173 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:40.961811 ignition[671]: Ignition 2.19.0
Dec 13 01:16:40.961821 ignition[671]: Stage: fetch-offline
Dec 13 01:16:40.961860 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:40.961886 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:40.962057 ignition[671]: parsed url from cmdline: ""
Dec 13 01:16:40.962061 ignition[671]: no config URL provided
Dec 13 01:16:40.962065 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:16:40.962074 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:16:40.962098 ignition[671]: op(1): [started] loading QEMU firmware config module
Dec 13 01:16:40.962102 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:16:40.972481 ignition[671]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:16:40.972517 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:16:41.012257 ignition[671]: parsing config with SHA512: 0df92a02ee70b551edabd57e21de03a9e1f1449d75d17994513a59f88206c0adb324a28763b5e96291ee4f8d93f249471191e6535ff2dba2b6139082088b946b
Dec 13 01:16:41.018391 unknown[671]: fetched base config from "system"
Dec 13 01:16:41.018403 unknown[671]: fetched user config from "qemu"
Dec 13 01:16:41.018930 ignition[671]: fetch-offline: fetch-offline passed
Dec 13 01:16:41.018996 ignition[671]: Ignition finished successfully
Dec 13 01:16:41.020998 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:16:41.022139 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:16:41.032606 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:16:41.043376 ignition[771]: Ignition 2.19.0
Dec 13 01:16:41.043392 ignition[771]: Stage: kargs
Dec 13 01:16:41.043609 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:41.043619 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:41.044607 ignition[771]: kargs: kargs passed
Dec 13 01:16:41.044658 ignition[771]: Ignition finished successfully
Dec 13 01:16:41.048340 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:16:41.057618 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:16:41.067720 ignition[779]: Ignition 2.19.0
Dec 13 01:16:41.067729 ignition[779]: Stage: disks
Dec 13 01:16:41.067889 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:41.067899 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:41.068856 ignition[779]: disks: disks passed
Dec 13 01:16:41.068905 ignition[779]: Ignition finished successfully
Dec 13 01:16:41.071238 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:16:41.072694 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:16:41.073529 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:16:41.074996 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:16:41.076380 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:16:41.077690 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:16:41.088696 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:16:41.100372 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:16:41.103981 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:16:41.111585 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:16:41.161389 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:16:41.162625 kernel: EXT4-fs (vda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:16:41.162503 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:16:41.172558 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:41.174180 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:16:41.175266 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:16:41.175340 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:16:41.175405 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:16:41.181332 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:16:41.183147 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
Dec 13 01:16:41.183285 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:16:41.187401 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:41.187423 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:41.187442 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:41.188485 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:41.190016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:41.229006 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:16:41.233087 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:16:41.236808 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:16:41.240302 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:16:41.308317 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:16:41.327577 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:16:41.329086 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:16:41.334495 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:41.351390 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:16:41.352826 ignition[910]: INFO : Ignition 2.19.0
Dec 13 01:16:41.352826 ignition[910]: INFO : Stage: mount
Dec 13 01:16:41.352826 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:41.352826 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:41.352826 ignition[910]: INFO : mount: mount passed
Dec 13 01:16:41.352826 ignition[910]: INFO : Ignition finished successfully
Dec 13 01:16:41.353772 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:16:41.362601 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:16:41.781712 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:16:41.796642 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:41.802674 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923)
Dec 13 01:16:41.802719 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:41.802740 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:41.803846 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:41.805476 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:41.806776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:41.822390 ignition[940]: INFO : Ignition 2.19.0
Dec 13 01:16:41.822390 ignition[940]: INFO : Stage: files
Dec 13 01:16:41.823770 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:41.823770 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:41.823770 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:16:41.826779 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:16:41.826779 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:16:41.829017 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:16:41.829017 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:16:41.829017 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:16:41.827381 unknown[940]: wrote ssh authorized keys file for user: core
Dec 13 01:16:41.833299 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:16:41.833299 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:16:41.833299 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:16:41.833299 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:16:41.905856 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:16:42.125728 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:16:42.125728 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:16:42.128668 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 01:16:42.490499 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Dec 13 01:16:42.686582 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:16:42.688100 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 01:16:42.702592 systemd-networkd[760]: eth0: Gained IPv6LL
Dec 13 01:16:42.930805 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Dec 13 01:16:43.260755 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:16:43.260755 ignition[940]: INFO : files: op(d): [started] processing unit "containerd.service"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(d): [finished] processing unit "containerd.service"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Dec 13 01:16:43.263438 ignition[940]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:16:43.287057 ignition[940]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:16:43.290839 ignition[940]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:16:43.291965 ignition[940]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:16:43.291965 ignition[940]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:16:43.291965 ignition[940]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:16:43.291965 ignition[940]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:16:43.291965 ignition[940]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:16:43.291965 ignition[940]: INFO : files: files passed
Dec 13 01:16:43.291965 ignition[940]: INFO : Ignition finished successfully
Dec 13 01:16:43.292941 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:16:43.299819 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:16:43.301748 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:16:43.304391 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:16:43.305188 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:16:43.309263 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:16:43.311367 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:43.311367 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:43.314260 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:43.315483 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:16:43.317430 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:16:43.330611 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:16:43.349890 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:16:43.350023 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:16:43.351681 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:16:43.353065 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:16:43.354376 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:16:43.355127 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:16:43.371395 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:16:43.382643 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:16:43.390631 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:43.391572 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:16:43.393074 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:16:43.394430 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:16:43.394567 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:16:43.396412 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:16:43.397883 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:16:43.399161 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:16:43.400392 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:16:43.401846 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:16:43.403484 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:16:43.404856 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:16:43.406313 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:16:43.407878 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:16:43.409132 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:16:43.410202 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:16:43.410315 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:16:43.412043 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:43.413414 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:43.414930 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:16:43.415040 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:43.416395 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:16:43.416518 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:16:43.418554 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:16:43.418671 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:16:43.420056 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:16:43.421199 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:16:43.424513 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:43.425435 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:16:43.426973 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:16:43.428147 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:16:43.428237 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:16:43.429343 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:16:43.429422 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:16:43.430525 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:16:43.430629 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:16:43.432002 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:16:43.432103 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:16:43.446695 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 13 01:16:43.448698 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:16:43.449362 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:16:43.449490 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:43.450793 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:16:43.450882 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:16:43.455236 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:16:43.456143 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:16:43.460366 ignition[996]: INFO : Ignition 2.19.0 Dec 13 01:16:43.460366 ignition[996]: INFO : Stage: umount Dec 13 01:16:43.461712 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:43.461712 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:16:43.461712 ignition[996]: INFO : umount: umount passed Dec 13 01:16:43.461712 ignition[996]: INFO : Ignition finished successfully Dec 13 01:16:43.462368 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:16:43.464150 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:16:43.464243 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:16:43.465422 systemd[1]: Stopped target network.target - Network. Dec 13 01:16:43.467253 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:16:43.467319 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:16:43.468601 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:16:43.468641 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:16:43.470907 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:16:43.470961 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:16:43.472312 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:16:43.472354 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:16:43.473791 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:16:43.475036 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:16:43.482351 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:16:43.482485 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:16:43.484525 systemd-networkd[760]: eth0: DHCPv6 lease lost Dec 13 01:16:43.485722 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:16:43.485780 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:43.487630 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:16:43.487758 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:16:43.490286 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:16:43.490341 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:43.499587 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:16:43.500232 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:16:43.500285 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:16:43.501760 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Dec 13 01:16:43.501799 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:43.503059 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:16:43.503095 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:43.504688 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:16:43.513078 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:16:43.513220 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:16:43.521164 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:16:43.521307 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:43.523082 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:16:43.523119 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:43.524240 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:16:43.524267 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:43.525577 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:16:43.525617 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:16:43.527616 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:16:43.527655 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:16:43.529527 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:16:43.529567 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:16:43.543677 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:16:43.545222 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:16:43.545278 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:43.546813 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:16:43.546849 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:16:43.548411 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:16:43.548450 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:43.549935 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:16:43.549972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:43.551831 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:16:43.552508 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:16:43.553735 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:16:43.553810 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:16:43.555508 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:16:43.556278 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:16:43.556335 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:16:43.558534 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:16:43.566733 systemd[1]: Switching root. Dec 13 01:16:43.597940 systemd-journald[238]: Journal stopped Dec 13 01:16:44.318181 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
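Between "Switching root." and "Journal stopped", the initrd journald hands off to the real root's instance. The records in this transcript also run together on long lines; below is a small sketch for splitting such a flattened blob back into one record per line, assuming the "Dec 13 01:16:43.501799" stamp format used throughout.

# Split run-together journal text back into (timestamp, source, message)
# records, keyed on the 22-character "Dec 13 01:16:43.501799" stamps.
import re

STAMP = re.compile(r"(?=[A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d{6} )")

def records(text: str):
    """Yield (stamp, source, message) tuples from a flattened log blob."""
    for chunk in filter(None, (c.strip() for c in STAMP.split(text))):
        stamp, rest = chunk[:22], chunk[23:]
        source, _, message = rest.partition(": ")
        yield stamp, source, message

blob = ("Dec 13 01:16:43.566733 systemd[1]: Switching root. "
        "Dec 13 01:16:43.597940 systemd-journald[238]: Journal stopped")
for stamp, source, message in records(blob):
    print(f"{stamp} | {source:<24} | {message}")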
Dec 13 01:16:44.318237 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:16:44.318250 kernel: SELinux: policy capability open_perms=1 Dec 13 01:16:44.318260 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:16:44.318270 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:16:44.318282 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:16:44.318296 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:16:44.318305 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:16:44.318314 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:16:44.318324 kernel: audit: type=1403 audit(1734052603.795:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:16:44.318335 systemd[1]: Successfully loaded SELinux policy in 34.217ms. Dec 13 01:16:44.318355 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.242ms. Dec 13 01:16:44.318367 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:16:44.318379 systemd[1]: Detected virtualization kvm. Dec 13 01:16:44.318394 systemd[1]: Detected architecture arm64. Dec 13 01:16:44.318405 systemd[1]: Detected first boot. Dec 13 01:16:44.318415 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:16:44.318425 zram_generator::config[1059]: No configuration found. Dec 13 01:16:44.318437 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:16:44.318447 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:16:44.318488 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:16:44.318500 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:16:44.318513 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:16:44.318524 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:16:44.318535 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:16:44.318545 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:16:44.318556 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:16:44.318566 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:16:44.318578 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:16:44.318589 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:44.318600 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:44.318612 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:16:44.318623 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:16:44.318633 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:16:44.318644 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
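The audit record above carries its own epoch-seconds timestamp, "audit(1734052603.795:2)"; converting it with the stdlib shows it agrees with the surrounding journal stamps.

# The audit(...) token encodes epoch-seconds plus a serial number.
from datetime import datetime, timezone

epoch, serial = "1734052603.795:2".split(":")
when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
print(when.isoformat(), "serial", serial)
# -> 2024-12-13T01:16:43.795000+00:00 serial 2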
Dec 13 01:16:44.318654 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:16:44.318665 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:44.318675 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:16:44.318685 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:16:44.318695 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:16:44.318707 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:16:44.318718 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:16:44.318728 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:16:44.318738 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:16:44.318748 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:16:44.318760 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:16:44.318771 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:44.318781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:44.318793 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:44.318803 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:16:44.318814 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:16:44.318824 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:16:44.318835 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:16:44.318845 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:16:44.318864 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:16:44.318875 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:16:44.318885 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:16:44.318900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:44.318911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:16:44.318928 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:16:44.318940 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:44.318950 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:16:44.318961 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:44.318971 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:16:44.318981 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:44.318992 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:16:44.319007 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:16:44.319018 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Dec 13 01:16:44.319029 kernel: fuse: init (API version 7.39) Dec 13 01:16:44.319040 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:16:44.319052 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:16:44.319062 kernel: ACPI: bus type drm_connector registered Dec 13 01:16:44.319072 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:16:44.319083 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:16:44.319095 kernel: loop: module loaded Dec 13 01:16:44.319106 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:16:44.319117 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:16:44.319127 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:16:44.319137 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:16:44.319148 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:16:44.319158 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:16:44.319168 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:16:44.319198 systemd-journald[1146]: Collecting audit messages is disabled. Dec 13 01:16:44.319223 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:44.319235 systemd-journald[1146]: Journal started Dec 13 01:16:44.319255 systemd-journald[1146]: Runtime Journal (/run/log/journal/e7e541f08e714cd39f8f03276ff5f49d) is 5.9M, max 47.3M, 41.4M free. Dec 13 01:16:44.320967 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:16:44.321006 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:16:44.323521 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:16:44.324145 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:16:44.325280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:44.325437 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:44.326701 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:16:44.326849 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:16:44.327857 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:44.328028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:44.329152 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:16:44.329303 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:16:44.330443 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:44.330657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:44.331753 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:44.332881 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:16:44.334169 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:16:44.344869 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:16:44.353622 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Dec 13 01:16:44.355443 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:16:44.356246 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:16:44.358636 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:16:44.362569 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:16:44.363368 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:16:44.364610 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:16:44.365583 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:16:44.366583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:16:44.371219 systemd-journald[1146]: Time spent on flushing to /var/log/journal/e7e541f08e714cd39f8f03276ff5f49d is 11.351ms for 849 entries. Dec 13 01:16:44.371219 systemd-journald[1146]: System Journal (/var/log/journal/e7e541f08e714cd39f8f03276ff5f49d) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:16:44.387877 systemd-journald[1146]: Received client request to flush runtime journal. Dec 13 01:16:44.371780 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:16:44.378882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:44.381621 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:16:44.382550 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:16:44.383998 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:16:44.386733 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:16:44.389711 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:16:44.391019 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:16:44.403751 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:44.405473 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Dec 13 01:16:44.405727 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Dec 13 01:16:44.406509 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:16:44.409974 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:16:44.420650 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:16:44.444332 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:16:44.455609 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:16:44.467309 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Dec 13 01:16:44.467328 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Dec 13 01:16:44.470837 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:44.779717 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
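Two quick checks on the journald numbers above: the per-entry flush cost, and the used-versus-free accounting for both journals (used = max - free in each message).

# Back-of-envelope from the journald lines above.
flush_ms, entries = 11.351, 849
print(f"~{flush_ms / entries * 1000:.1f} us per flushed entry")   # ~13.4 us

# (used, max, free) in MiB as logged for the runtime and system journals
for name, used, cap, free in [("runtime", 5.9, 47.3, 41.4),
                              ("system", 8.0, 195.6, 187.6)]:
    print(f"{name}: {used} MiB used, max - free = {cap - free:.1f} MiB (consistent)")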
Dec 13 01:16:44.793693 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:16:44.813654 systemd-udevd[1223]: Using default interface naming scheme 'v255'. Dec 13 01:16:44.826403 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:44.836998 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:16:44.844301 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:16:44.856683 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Dec 13 01:16:44.863484 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1234) Dec 13 01:16:44.866467 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1234) Dec 13 01:16:44.871491 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1227) Dec 13 01:16:44.891090 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:16:44.912975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:16:44.965228 systemd-networkd[1230]: lo: Link UP Dec 13 01:16:44.965243 systemd-networkd[1230]: lo: Gained carrier Dec 13 01:16:44.965924 systemd-networkd[1230]: Enumeration completed Dec 13 01:16:44.966352 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:44.966355 systemd-networkd[1230]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:16:44.966926 systemd-networkd[1230]: eth0: Link UP Dec 13 01:16:44.966930 systemd-networkd[1230]: eth0: Gained carrier Dec 13 01:16:44.966941 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:44.970746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:44.972379 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:16:44.975164 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:16:44.979640 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:16:44.981510 systemd-networkd[1230]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:16:44.982226 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:16:44.997501 lvm[1262]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:16:45.008070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:45.020842 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:16:45.022219 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:45.030637 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:16:45.034818 lvm[1269]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:16:45.063979 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:16:45.065173 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
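networkd's lease above is 10.0.0.7/16 via gateway 10.0.0.1; the stdlib ipaddress module recovers the derived values networkd works with (network, netmask, and an on-link check for the gateway).

# Derived values for the DHCPv4 lease logged above.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.7/16")
gateway = ipaddress.ip_address("10.0.0.1")
print(iface.network)              # 10.0.0.0/16
print(iface.netmask)              # 255.255.0.0
print(gateway in iface.network)   # True: the gateway is on-link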
Dec 13 01:16:45.066185 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:16:45.066213 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:16:45.067017 systemd[1]: Reached target machines.target - Containers. Dec 13 01:16:45.068784 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:16:45.079629 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:16:45.081642 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:16:45.082542 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:45.083477 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:16:45.086484 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:16:45.089694 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:16:45.093713 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:16:45.100727 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:16:45.106473 kernel: loop0: detected capacity change from 0 to 114328 Dec 13 01:16:45.108772 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:16:45.109422 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:16:45.118491 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:16:45.152479 kernel: loop1: detected capacity change from 0 to 114432 Dec 13 01:16:45.187593 kernel: loop2: detected capacity change from 0 to 194512 Dec 13 01:16:45.231512 kernel: loop3: detected capacity change from 0 to 114328 Dec 13 01:16:45.239493 kernel: loop4: detected capacity change from 0 to 114432 Dec 13 01:16:45.243492 kernel: loop5: detected capacity change from 0 to 194512 Dec 13 01:16:45.247198 (sd-merge)[1292]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:16:45.247606 (sd-merge)[1292]: Merged extensions into '/usr'. Dec 13 01:16:45.251233 systemd[1]: Reloading requested from client PID 1277 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:16:45.251250 systemd[1]: Reloading... Dec 13 01:16:45.292520 zram_generator::config[1320]: No configuration found. Dec 13 01:16:45.324217 ldconfig[1273]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:16:45.391207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:16:45.433120 systemd[1]: Reloading finished in 181 ms. Dec 13 01:16:45.448092 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:16:45.449274 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:16:45.462731 systemd[1]: Starting ensure-sysext.service... Dec 13 01:16:45.464401 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
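The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr. An extension is recognized by an extension-release file under usr/lib/extension-release.d/ whose fields must match the host's os-release. A sketch of a minimal extension tree follows; the ID and SYSEXT_LEVEL values are illustrative assumptions, not taken from this log.

# Minimal directory layout for a systemd-sysext extension named "kubernetes":
# everything under usr/ is overlaid onto the host's /usr when merged, and the
# extension-release file gates the merge against the host's os-release.
from pathlib import Path

root = Path("kubernetes-ext")
(root / "usr/bin").mkdir(parents=True, exist_ok=True)
(root / "usr/bin/kubectl").write_text("#!/bin/sh\necho placeholder\n")

release_dir = root / "usr/lib/extension-release.d"
release_dir.mkdir(parents=True, exist_ok=True)
(release_dir / "extension-release.kubernetes").write_text(
    "ID=flatcar\nSYSEXT_LEVEL=1.0\n"   # illustrative values
)
print(f"sketch tree under {root}/ ready to pack into a .raw image")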
Dec 13 01:16:45.467397 systemd[1]: Reloading requested from client PID 1361 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:16:45.467412 systemd[1]: Reloading... Dec 13 01:16:45.480028 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:16:45.480294 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:16:45.480948 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:16:45.481178 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Dec 13 01:16:45.481226 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Dec 13 01:16:45.483442 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:16:45.483468 systemd-tmpfiles[1362]: Skipping /boot Dec 13 01:16:45.492822 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:16:45.492838 systemd-tmpfiles[1362]: Skipping /boot Dec 13 01:16:45.501483 zram_generator::config[1387]: No configuration found. Dec 13 01:16:45.588849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:16:45.630777 systemd[1]: Reloading finished in 163 ms. Dec 13 01:16:45.646289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:45.667602 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:45.669839 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:16:45.672035 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:16:45.675618 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:16:45.679507 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:16:45.686723 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:45.698866 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:45.700883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:45.708768 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:45.712832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:45.713765 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:16:45.725603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:45.725753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:45.727131 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:16:45.728717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:45.728857 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:45.729691 augenrules[1458]: No rules Dec 13 01:16:45.732058 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 01:16:45.733026 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:45.734561 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:45.739654 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:16:45.739956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:16:45.745670 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:16:45.746435 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:16:45.747060 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:16:45.750803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:45.752041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:45.755733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:45.758701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:45.760526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:45.760634 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:16:45.761356 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:16:45.762834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:45.762983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:45.766869 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:45.767052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:45.768663 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:45.768847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:45.770660 systemd-resolved[1437]: Positive Trust Anchors: Dec 13 01:16:45.770677 systemd-resolved[1437]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:16:45.770709 systemd-resolved[1437]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:16:45.773025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:45.776735 systemd-resolved[1437]: Defaulting to hostname 'linux'. 
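The positive trust anchor logged above is the DNSSEC root zone's KSK-2017 DS record; its four fields decode per RFC 4034 as shown below.

# Decode the root trust anchor DS record from the resolved log above.
ds = ("20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
key_tag, algorithm, digest_type, digest = ds.split(maxsplit=3)
print("key tag:    ", key_tag)                      # identifies the KSK-2017 DNSKEY
print("algorithm:  ", algorithm, "(RSA/SHA-256)")
print("digest type:", digest_type, "(SHA-256)")
print("digest:     ", digest[:16] + "...")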
Dec 13 01:16:45.778688 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:45.780484 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:16:45.782204 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:45.784114 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:45.785007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:45.785135 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:16:45.785782 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:16:45.787226 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:45.787369 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:45.788761 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:16:45.788908 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:16:45.790205 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:45.790339 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:45.791993 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:45.792172 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:45.795766 systemd[1]: Finished ensure-sysext.service. Dec 13 01:16:45.799023 systemd[1]: Reached target network.target - Network. Dec 13 01:16:45.799747 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:16:45.800626 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:16:45.800674 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:16:45.809608 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:16:45.851230 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:16:45.851996 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:16:45.852046 systemd-timesyncd[1504]: Initial clock synchronization to Fri 2024-12-13 01:16:45.743504 UTC. Dec 13 01:16:45.852616 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:16:45.853489 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:16:45.854406 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:16:45.855400 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:16:45.856400 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:16:45.856433 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:16:45.857162 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:16:45.858087 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Dec 13 01:16:45.859090 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:16:45.860047 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:16:45.862516 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:16:45.864696 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:16:45.866404 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:16:45.879418 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:16:45.880306 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:16:45.881021 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:16:45.881816 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:16:45.881866 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:16:45.881885 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:16:45.882975 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:16:45.884744 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:16:45.886450 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:16:45.890365 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:16:45.891292 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:16:45.893615 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:16:45.896381 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:16:45.903699 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:16:45.906154 jq[1510]: false Dec 13 01:16:45.915306 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:16:45.921824 extend-filesystems[1512]: Found loop3 Dec 13 01:16:45.922697 extend-filesystems[1512]: Found loop4 Dec 13 01:16:45.922697 extend-filesystems[1512]: Found loop5 Dec 13 01:16:45.922697 extend-filesystems[1512]: Found vda Dec 13 01:16:45.922697 extend-filesystems[1512]: Found vda1 Dec 13 01:16:45.922697 extend-filesystems[1512]: Found vda2 Dec 13 01:16:45.922697 extend-filesystems[1512]: Found vda3 Dec 13 01:16:45.922697 extend-filesystems[1512]: Found usr Dec 13 01:16:45.922697 extend-filesystems[1512]: Found vda4 Dec 13 01:16:45.922697 extend-filesystems[1512]: Found vda6 Dec 13 01:16:45.922697 extend-filesystems[1512]: Found vda7 Dec 13 01:16:45.922697 extend-filesystems[1512]: Found vda9 Dec 13 01:16:45.922697 extend-filesystems[1512]: Checking size of /dev/vda9 Dec 13 01:16:45.923655 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:16:45.923921 dbus-daemon[1509]: [system] SELinux support is enabled Dec 13 01:16:45.929810 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:16:45.931056 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:16:45.936606 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:16:45.938117 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 13 01:16:45.944492 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:16:45.946248 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:16:45.949578 jq[1532]: true Dec 13 01:16:45.946571 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:16:45.946796 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:16:45.950178 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:16:45.952015 extend-filesystems[1512]: Resized partition /dev/vda9 Dec 13 01:16:45.960814 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1238) Dec 13 01:16:45.950678 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:16:45.967724 extend-filesystems[1540]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:16:45.979918 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:16:45.990832 systemd-logind[1523]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:16:45.992182 systemd-logind[1523]: New seat seat0. Dec 13 01:16:45.995007 update_engine[1530]: I20241213 01:16:45.991102 1530 main.cc:92] Flatcar Update Engine starting Dec 13 01:16:45.995007 update_engine[1530]: I20241213 01:16:45.994632 1530 update_check_scheduler.cc:74] Next update check in 4m57s Dec 13 01:16:45.996187 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:16:45.998375 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:16:46.001025 jq[1542]: true Dec 13 01:16:46.003793 tar[1538]: linux-arm64/helm Dec 13 01:16:46.007341 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:16:46.009914 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:16:46.010061 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:16:46.011063 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:16:46.011181 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:16:46.013221 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:16:46.014125 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:16:46.026477 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:16:46.056206 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:16:46.056206 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:16:46.056206 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:16:46.056190 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:16:46.061840 extend-filesystems[1512]: Resized filesystem in /dev/vda9 Dec 13 01:16:46.056505 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
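The resize2fs output above grows /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB; in round numbers that is 2.1 GiB to 7.1 GiB.

# Byte arithmetic for the ext4 online resize logged above.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")               # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")               # ~7.11 GiB
print(f"growth: {gib(new_blocks - old_blocks):.2f} GiB")  # ~5.00 GiB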
Dec 13 01:16:46.066918 bash[1570]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:16:46.068719 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:16:46.074562 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:16:46.078705 locksmithd[1558]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:16:46.147547 sshd_keygen[1539]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:16:46.166011 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:16:46.175698 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:16:46.183053 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:16:46.183315 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:16:46.187720 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:16:46.202473 containerd[1544]: time="2024-12-13T01:16:46.201136326Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:16:46.204932 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:16:46.214714 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:16:46.216656 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:16:46.217594 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:16:46.228423 containerd[1544]: time="2024-12-13T01:16:46.228386535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:46.229968 containerd[1544]: time="2024-12-13T01:16:46.229902827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:46.229968 containerd[1544]: time="2024-12-13T01:16:46.229963710Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:16:46.230046 containerd[1544]: time="2024-12-13T01:16:46.229987188Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:16:46.230167 containerd[1544]: time="2024-12-13T01:16:46.230143047Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:16:46.230199 containerd[1544]: time="2024-12-13T01:16:46.230171220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:46.230268 containerd[1544]: time="2024-12-13T01:16:46.230227014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:46.230268 containerd[1544]: time="2024-12-13T01:16:46.230245243Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:46.230580 containerd[1544]: time="2024-12-13T01:16:46.230546347Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:46.230709 containerd[1544]: time="2024-12-13T01:16:46.230686107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:46.230882 containerd[1544]: time="2024-12-13T01:16:46.230767983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:46.231001 containerd[1544]: time="2024-12-13T01:16:46.230939231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:46.231203 containerd[1544]: time="2024-12-13T01:16:46.231182568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:46.231553 containerd[1544]: time="2024-12-13T01:16:46.231531337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:46.231994 containerd[1544]: time="2024-12-13T01:16:46.231865862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:46.231994 containerd[1544]: time="2024-12-13T01:16:46.231886933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:16:46.232170 containerd[1544]: time="2024-12-13T01:16:46.232103321Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:16:46.232290 containerd[1544]: time="2024-12-13T01:16:46.232266164Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:16:46.235977 containerd[1544]: time="2024-12-13T01:16:46.235598037Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:16:46.235977 containerd[1544]: time="2024-12-13T01:16:46.235643374Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:16:46.235977 containerd[1544]: time="2024-12-13T01:16:46.235664523Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:16:46.235977 containerd[1544]: time="2024-12-13T01:16:46.235686699Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:16:46.235977 containerd[1544]: time="2024-12-13T01:16:46.235704297Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:16:46.235977 containerd[1544]: time="2024-12-13T01:16:46.235831115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:16:46.237126 containerd[1544]: time="2024-12-13T01:16:46.237101505Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:16:46.237358 containerd[1544]: time="2024-12-13T01:16:46.237336990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Dec 13 01:16:46.237503 containerd[1544]: time="2024-12-13T01:16:46.237485747Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:16:46.237615 containerd[1544]: time="2024-12-13T01:16:46.237556653Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:16:46.237686 containerd[1544]: time="2024-12-13T01:16:46.237671791Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:16:46.237799 containerd[1544]: time="2024-12-13T01:16:46.237726993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:16:46.237799 containerd[1544]: time="2024-12-13T01:16:46.237743960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:16:46.237881 containerd[1544]: time="2024-12-13T01:16:46.237864070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.237930912Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.237949378Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.237968358Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.237981655Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.238001502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.238014563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.238026913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.238039382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.238050549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.238070988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.238083141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.238094860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.238106737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Dec 13 01:16:46.238864 containerd[1544]: time="2024-12-13T01:16:46.238120666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238131398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238142368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238153889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238173539Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238193663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238205066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238215128Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238315154Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238331253Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238344669Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238357058Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238365976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238379471Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:16:46.239125 containerd[1544]: time="2024-12-13T01:16:46.238388980Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:16:46.239345 containerd[1544]: time="2024-12-13T01:16:46.238398450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:16:46.239384 containerd[1544]: time="2024-12-13T01:16:46.238809602Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:16:46.239561 containerd[1544]: time="2024-12-13T01:16:46.239544468Z" level=info msg="Connect containerd service" Dec 13 01:16:46.239655 containerd[1544]: time="2024-12-13T01:16:46.239637668Z" level=info msg="using legacy CRI server" Dec 13 01:16:46.239702 containerd[1544]: time="2024-12-13T01:16:46.239690423Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:16:46.239832 containerd[1544]: time="2024-12-13T01:16:46.239816215Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:16:46.240440 containerd[1544]: time="2024-12-13T01:16:46.240411242Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 
01:16:46.240749 containerd[1544]: time="2024-12-13T01:16:46.240696523Z" level=info msg="Start subscribing containerd event" Dec 13 01:16:46.240749 containerd[1544]: time="2024-12-13T01:16:46.240748095Z" level=info msg="Start recovering state" Dec 13 01:16:46.240951 containerd[1544]: time="2024-12-13T01:16:46.240805349Z" level=info msg="Start event monitor" Dec 13 01:16:46.240951 containerd[1544]: time="2024-12-13T01:16:46.240820106Z" level=info msg="Start snapshots syncer" Dec 13 01:16:46.240951 containerd[1544]: time="2024-12-13T01:16:46.240831312Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:16:46.240951 containerd[1544]: time="2024-12-13T01:16:46.240838651Z" level=info msg="Start streaming server" Dec 13 01:16:46.241173 containerd[1544]: time="2024-12-13T01:16:46.241154907Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:16:46.241277 containerd[1544]: time="2024-12-13T01:16:46.241262943Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:16:46.241377 containerd[1544]: time="2024-12-13T01:16:46.241364587Z" level=info msg="containerd successfully booted in 0.042294s" Dec 13 01:16:46.241469 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:16:46.369810 tar[1538]: linux-arm64/LICENSE Dec 13 01:16:46.369910 tar[1538]: linux-arm64/README.md Dec 13 01:16:46.385091 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:16:46.606697 systemd-networkd[1230]: eth0: Gained IPv6LL Dec 13 01:16:46.609124 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:16:46.610568 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:16:46.622713 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:16:46.624923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:46.626899 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:16:46.641384 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:16:46.642697 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:16:46.644393 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:16:46.646872 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:16:47.076581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:47.077725 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:16:47.078761 systemd[1]: Startup finished in 5.631s (kernel) + 3.321s (userspace) = 8.952s. Dec 13 01:16:47.080418 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:47.552872 kubelet[1645]: E1213 01:16:47.552710 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:47.555260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:47.555478 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
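Note: the two failures above are both expected on a pristine first boot. containerd's "no network config found in /etc/cni/net.d" means no pod network add-on has installed a CNI config yet (the CRI dump shows NetworkPluginConfDir:/etc/cni/net.d with NetworkPluginMaxConfNum:1), and the kubelet exits because /var/lib/kubelet/config.yaml is only written later by kubeadm init/join. As a hedged sketch only — the name, bridge, and subnet below are illustrative, not taken from this host — a minimal conflist that the conf syncer would accept looks like:

    {
      "cniVersion": "0.4.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

Placed at /etc/cni/net.d/10-containerd-net.conflist (with the bridge, host-local, and portmap binaries under /opt/cni/bin, the NetworkPluginBinDir from the dump), the "cni network conf syncer" started above would pick it up without restarting containerd.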
Dec 13 01:16:51.849294 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:16:51.857779 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:51608.service - OpenSSH per-connection server daemon (10.0.0.1:51608). Dec 13 01:16:51.898767 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 51608 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:16:51.900527 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:51.907440 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:16:51.917670 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:16:51.919108 systemd-logind[1523]: New session 1 of user core. Dec 13 01:16:51.926491 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:16:51.928491 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:16:51.935058 (systemd)[1665]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:16:52.004999 systemd[1665]: Queued start job for default target default.target. Dec 13 01:16:52.005346 systemd[1665]: Created slice app.slice - User Application Slice. Dec 13 01:16:52.005369 systemd[1665]: Reached target paths.target - Paths. Dec 13 01:16:52.005388 systemd[1665]: Reached target timers.target - Timers. Dec 13 01:16:52.013589 systemd[1665]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:16:52.018956 systemd[1665]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:16:52.019019 systemd[1665]: Reached target sockets.target - Sockets. Dec 13 01:16:52.019031 systemd[1665]: Reached target basic.target - Basic System. Dec 13 01:16:52.019069 systemd[1665]: Reached target default.target - Main User Target. Dec 13 01:16:52.019095 systemd[1665]: Startup finished in 78ms. Dec 13 01:16:52.019476 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:16:52.020811 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:16:52.080694 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:51614.service - OpenSSH per-connection server daemon (10.0.0.1:51614). Dec 13 01:16:52.112302 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 51614 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:16:52.113523 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:52.117133 systemd-logind[1523]: New session 2 of user core. Dec 13 01:16:52.126829 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:16:52.179966 sshd[1677]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:52.192691 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:51618.service - OpenSSH per-connection server daemon (10.0.0.1:51618). Dec 13 01:16:52.193069 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:51614.service: Deactivated successfully. Dec 13 01:16:52.194787 systemd-logind[1523]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:16:52.195350 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:16:52.196769 systemd-logind[1523]: Removed session 2. 
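Note: the sshd@0-10.0.0.7:22-10.0.0.1:51608.service name above is systemd's per-connection socket activation: an Accept=yes socket unit spawns one templated service instance per inbound connection, named after a connection counter plus the local and peer endpoints. A minimal sketch of the pattern (unit contents assumed, not read from this host):

    # sshd.socket (sketch of an Accept=yes per-connection socket)
    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target

Each accepted connection then runs an sshd@.service instance on that connection's file descriptor, which is why every session below gets its own sshd@N-...service unit.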
Dec 13 01:16:52.221803 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 51618 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:16:52.222976 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:52.226393 systemd-logind[1523]: New session 3 of user core. Dec 13 01:16:52.235797 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:16:52.282632 sshd[1682]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:52.291793 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:51622.service - OpenSSH per-connection server daemon (10.0.0.1:51622). Dec 13 01:16:52.292173 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:51618.service: Deactivated successfully. Dec 13 01:16:52.293907 systemd-logind[1523]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:16:52.294424 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:16:52.295620 systemd-logind[1523]: Removed session 3. Dec 13 01:16:52.320845 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 51622 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:16:52.321994 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:52.325877 systemd-logind[1523]: New session 4 of user core. Dec 13 01:16:52.339799 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:16:52.391499 sshd[1690]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:52.407696 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:51624.service - OpenSSH per-connection server daemon (10.0.0.1:51624). Dec 13 01:16:52.408081 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:51622.service: Deactivated successfully. Dec 13 01:16:52.409772 systemd-logind[1523]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:16:52.410311 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:16:52.411689 systemd-logind[1523]: Removed session 4. Dec 13 01:16:52.436847 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 51624 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:16:52.438132 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:52.442110 systemd-logind[1523]: New session 5 of user core. Dec 13 01:16:52.452738 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:16:52.519998 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:16:52.522028 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:52.542252 sudo[1705]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:52.544290 sshd[1698]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:52.554768 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:39294.service - OpenSSH per-connection server daemon (10.0.0.1:39294). Dec 13 01:16:52.555182 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:51624.service: Deactivated successfully. Dec 13 01:16:52.557627 systemd-logind[1523]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:16:52.557954 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:16:52.560643 systemd-logind[1523]: Removed session 5. 
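Note: each sudo line records the invoking user, working directory, target user, and command. For "core" to escalate non-interactively like this, a sudoers rule along the following lines has to exist; this is a sketch of the conventional arrangement for the core user, stated as an assumption rather than read from this host:

    # /etc/sudoers.d/core (assumed)
    core ALL=(ALL) NOPASSWD: ALL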
Dec 13 01:16:52.587195 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 39294 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:16:52.588500 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:52.592234 systemd-logind[1523]: New session 6 of user core. Dec 13 01:16:52.603737 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:16:52.654217 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:16:52.654529 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:52.658163 sudo[1715]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:52.662904 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:16:52.663164 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:52.680752 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:52.683119 auditctl[1718]: No rules Dec 13 01:16:52.684056 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:16:52.684300 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:52.686094 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:52.712453 augenrules[1737]: No rules Dec 13 01:16:52.713741 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:52.716220 sudo[1714]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:52.718055 sshd[1707]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:52.726722 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:39310.service - OpenSSH per-connection server daemon (10.0.0.1:39310). Dec 13 01:16:52.727112 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:39294.service: Deactivated successfully. Dec 13 01:16:52.729044 systemd-logind[1523]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:16:52.729637 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:16:52.730773 systemd-logind[1523]: Removed session 6. Dec 13 01:16:52.758169 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 39310 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:16:52.759277 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:52.763220 systemd-logind[1523]: New session 7 of user core. Dec 13 01:16:52.770771 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:16:52.822015 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:16:52.822735 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:53.141684 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:16:53.142116 (dockerd)[1769]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:16:53.392870 dockerd[1769]: time="2024-12-13T01:16:53.391653482Z" level=info msg="Starting up" Dec 13 01:16:53.633109 dockerd[1769]: time="2024-12-13T01:16:53.633055943Z" level=info msg="Loading containers: start." 
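Note: the "auditctl: No rules" and "augenrules: No rules" lines above follow directly from the rm a few entries earlier: audit-rules.service loads whatever augenrules assembles out of /etc/audit/rules.d/*.rules, and both rule files were just deleted, so the loaded set is empty. Purely to illustrate the file format that directory expects (this watch rule is hypothetical, not something this host configures):

    # /etc/audit/rules.d/10-example.rules (hypothetical)
    -D                                          # flush any loaded rules first
    -w /etc/kubernetes/ -p wa -k kube-config    # watch writes/attr changes, keyed for ausearch -k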
Dec 13 01:16:53.720495 kernel: Initializing XFRM netlink socket Dec 13 01:16:53.785802 systemd-networkd[1230]: docker0: Link UP Dec 13 01:16:53.812990 dockerd[1769]: time="2024-12-13T01:16:53.812927601Z" level=info msg="Loading containers: done." Dec 13 01:16:53.829613 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1403847695-merged.mount: Deactivated successfully. Dec 13 01:16:53.831245 dockerd[1769]: time="2024-12-13T01:16:53.831196870Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:16:53.831347 dockerd[1769]: time="2024-12-13T01:16:53.831320648Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:16:53.831444 dockerd[1769]: time="2024-12-13T01:16:53.831421628Z" level=info msg="Daemon has completed initialization" Dec 13 01:16:53.862719 dockerd[1769]: time="2024-12-13T01:16:53.862581658Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:16:53.862888 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:16:54.660560 containerd[1544]: time="2024-12-13T01:16:54.660509227Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:16:55.416206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2908949247.mount: Deactivated successfully. Dec 13 01:16:57.127806 containerd[1544]: time="2024-12-13T01:16:57.127745905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:57.128324 containerd[1544]: time="2024-12-13T01:16:57.128287531Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Dec 13 01:16:57.129145 containerd[1544]: time="2024-12-13T01:16:57.129020677Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:57.132615 containerd[1544]: time="2024-12-13T01:16:57.132049560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:57.134295 containerd[1544]: time="2024-12-13T01:16:57.134265746Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.473711314s" Dec 13 01:16:57.134345 containerd[1544]: time="2024-12-13T01:16:57.134302750Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:16:57.153492 containerd[1544]: time="2024-12-13T01:16:57.153432975Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:16:57.805783 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:16:57.815640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
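Note: the PullImage / "Pulled image" pairs here are CRI image-service calls arriving over the containerd socket logged earlier (/run/containerd/containerd.sock). The same pull can be reproduced by hand with crictl, assuming it is installed and pointed at that socket (a usage sketch, not something this host ran):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.29.12
    crictl images    # shows the repo tag, digest, and size that containerd logs above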
Dec 13 01:16:57.904676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:57.908149 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:57.949495 kubelet[1995]: E1213 01:16:57.949424 1995 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:57.953674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:57.953854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:16:58.984487 containerd[1544]: time="2024-12-13T01:16:58.984423484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:58.985346 containerd[1544]: time="2024-12-13T01:16:58.984904488Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Dec 13 01:16:58.985923 containerd[1544]: time="2024-12-13T01:16:58.985887518Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:58.988987 containerd[1544]: time="2024-12-13T01:16:58.988957237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:58.990086 containerd[1544]: time="2024-12-13T01:16:58.990051802Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.836563875s" Dec 13 01:16:58.990086 containerd[1544]: time="2024-12-13T01:16:58.990088621Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 01:16:59.008161 containerd[1544]: time="2024-12-13T01:16:59.008132648Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:17:00.585815 containerd[1544]: time="2024-12-13T01:17:00.585767700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:00.586772 containerd[1544]: time="2024-12-13T01:17:00.586564072Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Dec 13 01:17:00.587485 containerd[1544]: time="2024-12-13T01:17:00.587441355Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:00.590482 containerd[1544]: time="2024-12-13T01:17:00.590429459Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:00.591677 containerd[1544]: time="2024-12-13T01:17:00.591647229Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.583477187s" Dec 13 01:17:00.591720 containerd[1544]: time="2024-12-13T01:17:00.591682156Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:17:00.609233 containerd[1544]: time="2024-12-13T01:17:00.609203028Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:17:01.867961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265184799.mount: Deactivated successfully. Dec 13 01:17:02.272586 containerd[1544]: time="2024-12-13T01:17:02.272190437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:02.273150 containerd[1544]: time="2024-12-13T01:17:02.272822664Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Dec 13 01:17:02.273486 containerd[1544]: time="2024-12-13T01:17:02.273440714Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:02.275412 containerd[1544]: time="2024-12-13T01:17:02.275358920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:02.276582 containerd[1544]: time="2024-12-13T01:17:02.276050652Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.666809737s" Dec 13 01:17:02.276582 containerd[1544]: time="2024-12-13T01:17:02.276086674Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:17:02.295129 containerd[1544]: time="2024-12-13T01:17:02.295067819Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:17:02.947641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1899207978.mount: Deactivated successfully. 
Dec 13 01:17:03.654742 containerd[1544]: time="2024-12-13T01:17:03.654679438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:03.655223 containerd[1544]: time="2024-12-13T01:17:03.655182932Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Dec 13 01:17:03.656076 containerd[1544]: time="2024-12-13T01:17:03.656047361Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:03.659390 containerd[1544]: time="2024-12-13T01:17:03.659336550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:03.660602 containerd[1544]: time="2024-12-13T01:17:03.660570061Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.365458948s" Dec 13 01:17:03.660659 containerd[1544]: time="2024-12-13T01:17:03.660608567Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:17:03.680169 containerd[1544]: time="2024-12-13T01:17:03.679960000Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:17:04.099204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963595686.mount: Deactivated successfully. 
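Note: every "Pulled image" line carries both a repo tag and a repo digest; the digest is the content-addressed name and is what to pin when reproducibility matters. With containerd's own CLI, the coredns pull above can be repeated by digest in the k8s.io namespace the CRI plugin uses (a usage sketch):

    ctr --namespace k8s.io images pull \
        registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1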
Dec 13 01:17:04.104994 containerd[1544]: time="2024-12-13T01:17:04.104939381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:04.105374 containerd[1544]: time="2024-12-13T01:17:04.105347001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Dec 13 01:17:04.106318 containerd[1544]: time="2024-12-13T01:17:04.106266993Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:04.110250 containerd[1544]: time="2024-12-13T01:17:04.108977869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:04.110250 containerd[1544]: time="2024-12-13T01:17:04.109848561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 429.852091ms" Dec 13 01:17:04.110250 containerd[1544]: time="2024-12-13T01:17:04.109876926Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:17:04.128747 containerd[1544]: time="2024-12-13T01:17:04.128522021Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:17:04.767999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366615084.mount: Deactivated successfully. Dec 13 01:17:07.393709 containerd[1544]: time="2024-12-13T01:17:07.393654344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:07.394923 containerd[1544]: time="2024-12-13T01:17:07.394857556Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Dec 13 01:17:07.395501 containerd[1544]: time="2024-12-13T01:17:07.395473850Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:07.399002 containerd[1544]: time="2024-12-13T01:17:07.398937125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:07.400671 containerd[1544]: time="2024-12-13T01:17:07.400518747Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.271953853s" Dec 13 01:17:07.400671 containerd[1544]: time="2024-12-13T01:17:07.400559873Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 01:17:08.204108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
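Note: the CRI config dumped at startup pins SandboxImage:registry.k8s.io/pause:3.8 and a runc runtime with SystemdCgroup:false; the pause:3.9 pull above arrived as an ordinary client-requested image, separate from that sandbox default. In containerd's config.toml those same settings read as follows (a sketch matching the dumped values, not a recommendation):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false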
Dec 13 01:17:08.213667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:08.370001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:08.374355 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:17:08.411032 kubelet[2230]: E1213 01:17:08.410980 2230 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:17:08.413827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:17:08.414017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:17:14.683659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:14.695679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:14.712954 systemd[1]: Reloading requested from client PID 2248 ('systemctl') (unit session-7.scope)... Dec 13 01:17:14.712969 systemd[1]: Reloading... Dec 13 01:17:14.774700 zram_generator::config[2287]: No configuration found. Dec 13 01:17:15.016256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:17:15.064112 systemd[1]: Reloading finished in 350 ms. Dec 13 01:17:15.101203 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:17:15.101263 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:17:15.101505 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:15.103570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:15.189528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:15.193273 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:17:15.237211 kubelet[2345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:15.237211 kubelet[2345]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:17:15.237211 kubelet[2345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
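Note: unlike the earlier attempts, this kubelet keeps running, so /var/lib/kubelet/config.yaml evidently exists by now; the deprecation warnings just say the remaining flags belong in that file. A minimal KubeletConfiguration consistent with the startup lines that follow (cgroupfs driver, /etc/kubernetes/pki/ca.crt client CA bundle, /etc/kubernetes/manifests static pod path) might look like this; the values are inferred from this log, not copied from the host:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt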
Dec 13 01:17:15.238013 kubelet[2345]: I1213 01:17:15.237948 2345 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:17:15.880966 kubelet[2345]: I1213 01:17:15.880927 2345 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:17:15.880966 kubelet[2345]: I1213 01:17:15.880959 2345 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:17:15.881179 kubelet[2345]: I1213 01:17:15.881150 2345 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:17:15.924006 kubelet[2345]: I1213 01:17:15.923963 2345 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:15.924123 kubelet[2345]: E1213 01:17:15.924045 2345 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:15.931761 kubelet[2345]: I1213 01:17:15.931721 2345 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:17:15.932064 kubelet[2345]: I1213 01:17:15.932050 2345 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:17:15.932271 kubelet[2345]: I1213 01:17:15.932241 2345 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:17:15.932271 kubelet[2345]: I1213 01:17:15.932265 2345 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:17:15.932271 kubelet[2345]: I1213 01:17:15.932274 2345 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:17:15.932903 kubelet[2345]: I1213 01:17:15.932866 2345 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:15.934935 kubelet[2345]: I1213 01:17:15.934911 2345 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:17:15.934935 kubelet[2345]: 
I1213 01:17:15.934935 2345 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:17:15.934998 kubelet[2345]: I1213 01:17:15.934955 2345 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:17:15.934998 kubelet[2345]: I1213 01:17:15.934965 2345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:17:15.936426 kubelet[2345]: W1213 01:17:15.936357 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:15.936426 kubelet[2345]: E1213 01:17:15.936422 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:15.936709 kubelet[2345]: W1213 01:17:15.936676 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:15.936811 kubelet[2345]: E1213 01:17:15.936801 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:15.936870 kubelet[2345]: I1213 01:17:15.936746 2345 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:17:15.937367 kubelet[2345]: I1213 01:17:15.937349 2345 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:17:15.937604 kubelet[2345]: W1213 01:17:15.937591 2345 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
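Note: the "dial tcp 10.0.0.7:6443: connect: connection refused" storm here and below is the normal bootstrap chicken-and-egg: the kubelet's informers need the API server, but the API server is itself one of the static pods the kubelet is about to start from /etc/kubernetes/manifests (the "Adding static pod path" entry above and the Topology Admit Handler entries below). A heavily trimmed sketch of such a manifest, using the apiserver image pulled earlier in this log (the real kubeadm-generated file carries many more flags, mounts, and probes):

    # /etc/kubernetes/manifests/kube-apiserver.yaml (sketch)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.29.12
        command:
        - kube-apiserver
        - --secure-port=6443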
Dec 13 01:17:15.938621 kubelet[2345]: I1213 01:17:15.938603 2345 server.go:1256] "Started kubelet" Dec 13 01:17:15.940238 kubelet[2345]: I1213 01:17:15.940208 2345 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:17:15.940470 kubelet[2345]: I1213 01:17:15.940438 2345 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:17:15.940536 kubelet[2345]: I1213 01:17:15.940523 2345 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:17:15.944708 kubelet[2345]: I1213 01:17:15.943339 2345 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:17:15.944708 kubelet[2345]: I1213 01:17:15.944557 2345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:17:15.945803 kubelet[2345]: I1213 01:17:15.945773 2345 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:17:15.946174 kubelet[2345]: I1213 01:17:15.946146 2345 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:17:15.946230 kubelet[2345]: I1213 01:17:15.946220 2345 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:17:15.946640 kubelet[2345]: W1213 01:17:15.946596 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:15.946640 kubelet[2345]: E1213 01:17:15.946642 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:15.946721 kubelet[2345]: E1213 01:17:15.946704 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms" Dec 13 01:17:15.948988 kubelet[2345]: I1213 01:17:15.948958 2345 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:17:15.949064 kubelet[2345]: I1213 01:17:15.949044 2345 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:17:15.950299 kubelet[2345]: E1213 01:17:15.950256 2345 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181097af5af2e123 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:17:15.938578723 +0000 UTC m=+0.741960626,LastTimestamp:2024-12-13 01:17:15.938578723 +0000 UTC m=+0.741960626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:17:15.950779 kubelet[2345]: E1213 01:17:15.950721 2345 kubelet.go:1462] "Image garbage collection failed 
once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:17:15.950943 kubelet[2345]: I1213 01:17:15.950928 2345 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:17:15.960679 kubelet[2345]: I1213 01:17:15.960645 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:17:15.961528 kubelet[2345]: I1213 01:17:15.961510 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:17:15.961528 kubelet[2345]: I1213 01:17:15.961529 2345 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:17:15.961607 kubelet[2345]: I1213 01:17:15.961546 2345 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:17:15.961607 kubelet[2345]: E1213 01:17:15.961595 2345 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:17:15.966854 kubelet[2345]: W1213 01:17:15.966796 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:15.966910 kubelet[2345]: E1213 01:17:15.966867 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:15.968919 kubelet[2345]: I1213 01:17:15.968900 2345 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:17:15.969036 kubelet[2345]: I1213 01:17:15.969027 2345 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:17:15.969107 kubelet[2345]: I1213 01:17:15.969099 2345 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:16.033894 kubelet[2345]: I1213 01:17:16.033863 2345 policy_none.go:49] "None policy: Start" Dec 13 01:17:16.034734 kubelet[2345]: I1213 01:17:16.034714 2345 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:17:16.034794 kubelet[2345]: I1213 01:17:16.034761 2345 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:17:16.038699 kubelet[2345]: I1213 01:17:16.038671 2345 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:17:16.038950 kubelet[2345]: I1213 01:17:16.038922 2345 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:17:16.040401 kubelet[2345]: E1213 01:17:16.040388 2345 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:17:16.047344 kubelet[2345]: I1213 01:17:16.047324 2345 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:16.047837 kubelet[2345]: E1213 01:17:16.047795 2345 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Dec 13 01:17:16.062125 kubelet[2345]: I1213 01:17:16.062034 2345 topology_manager.go:215] "Topology Admit Handler" podUID="3ebb8afb2713b92ba6c215f3f88d2b87" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:17:16.063032 kubelet[2345]: I1213 
01:17:16.063003 2345 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:17:16.064239 kubelet[2345]: I1213 01:17:16.063868 2345 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:17:16.147348 kubelet[2345]: E1213 01:17:16.147229 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" Dec 13 01:17:16.247709 kubelet[2345]: I1213 01:17:16.247671 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:16.248052 kubelet[2345]: I1213 01:17:16.247729 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:16.248052 kubelet[2345]: I1213 01:17:16.247758 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:16.248052 kubelet[2345]: I1213 01:17:16.247778 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:16.248052 kubelet[2345]: I1213 01:17:16.247798 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:17:16.248052 kubelet[2345]: I1213 01:17:16.247849 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ebb8afb2713b92ba6c215f3f88d2b87-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ebb8afb2713b92ba6c215f3f88d2b87\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:16.248195 kubelet[2345]: I1213 01:17:16.247899 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ebb8afb2713b92ba6c215f3f88d2b87-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ebb8afb2713b92ba6c215f3f88d2b87\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:16.248195 kubelet[2345]: I1213 01:17:16.247943 2345 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ebb8afb2713b92ba6c215f3f88d2b87-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ebb8afb2713b92ba6c215f3f88d2b87\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:16.248195 kubelet[2345]: I1213 01:17:16.247989 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:16.248999 kubelet[2345]: I1213 01:17:16.248961 2345 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:16.249266 kubelet[2345]: E1213 01:17:16.249251 2345 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Dec 13 01:17:16.367822 kubelet[2345]: E1213 01:17:16.367781 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:16.368119 kubelet[2345]: E1213 01:17:16.367893 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:16.368119 kubelet[2345]: E1213 01:17:16.367802 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:16.368504 containerd[1544]: time="2024-12-13T01:17:16.368444075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:16.368812 containerd[1544]: time="2024-12-13T01:17:16.368453153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:16.368812 containerd[1544]: time="2024-12-13T01:17:16.368453553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ebb8afb2713b92ba6c215f3f88d2b87,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:16.548669 kubelet[2345]: E1213 01:17:16.548558 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Dec 13 01:17:16.651099 kubelet[2345]: I1213 01:17:16.651074 2345 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:16.651402 kubelet[2345]: E1213 01:17:16.651385 2345 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Dec 13 01:17:16.856835 kubelet[2345]: W1213 01:17:16.856728 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:16.856835 kubelet[2345]: E1213 01:17:16.856768 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:16.909207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3253280012.mount: Deactivated successfully. Dec 13 01:17:16.923473 containerd[1544]: time="2024-12-13T01:17:16.923418646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:16.927440 containerd[1544]: time="2024-12-13T01:17:16.927395624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Dec 13 01:17:16.928449 containerd[1544]: time="2024-12-13T01:17:16.928406695Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:16.929521 containerd[1544]: time="2024-12-13T01:17:16.929450117Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:16.929921 containerd[1544]: time="2024-12-13T01:17:16.929882331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:17:16.930701 containerd[1544]: time="2024-12-13T01:17:16.930660619Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:16.931835 containerd[1544]: time="2024-12-13T01:17:16.931793739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:17:16.933423 containerd[1544]: time="2024-12-13T01:17:16.933380667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:16.936122 containerd[1544]: time="2024-12-13T01:17:16.935976867Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.423818ms" Dec 13 01:17:16.937522 containerd[1544]: time="2024-12-13T01:17:16.937485095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.756849ms" Dec 13 01:17:16.953089 containerd[1544]: time="2024-12-13T01:17:16.953052653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" 
with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.419785ms" Dec 13 01:17:16.981190 kubelet[2345]: W1213 01:17:16.981098 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:16.981190 kubelet[2345]: E1213 01:17:16.981164 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:17.027008 kubelet[2345]: W1213 01:17:17.026521 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:17.027008 kubelet[2345]: E1213 01:17:17.026582 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:17.108937 containerd[1544]: time="2024-12-13T01:17:17.108753052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:17.109102 containerd[1544]: time="2024-12-13T01:17:17.108809720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:17.109102 containerd[1544]: time="2024-12-13T01:17:17.108919097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:17.109102 containerd[1544]: time="2024-12-13T01:17:17.109021195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:17.109193 containerd[1544]: time="2024-12-13T01:17:17.108565853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:17.109193 containerd[1544]: time="2024-12-13T01:17:17.109117614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:17.109193 containerd[1544]: time="2024-12-13T01:17:17.109140369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:17.109542 containerd[1544]: time="2024-12-13T01:17:17.109438345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:17.109542 containerd[1544]: time="2024-12-13T01:17:17.109501051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:17.109542 containerd[1544]: time="2024-12-13T01:17:17.109516688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:17.109656 containerd[1544]: time="2024-12-13T01:17:17.109582273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:17.110024 containerd[1544]: time="2024-12-13T01:17:17.109941236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:17.137294 kubelet[2345]: W1213 01:17:17.137247 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:17.137294 kubelet[2345]: E1213 01:17:17.137300 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Dec 13 01:17:17.158701 containerd[1544]: time="2024-12-13T01:17:17.158656718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7691c657b1ec5d0829457d2b966bb5539b4d343c3c03461f47984e7862469e34\"" Dec 13 01:17:17.160352 containerd[1544]: time="2024-12-13T01:17:17.160205743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ebb8afb2713b92ba6c215f3f88d2b87,Namespace:kube-system,Attempt:0,} returns sandbox id \"e55bfa0556a358e5ca0d87a4123e4f4fd4dc4ffca2e27f37c4e3a0cb24215419\"" Dec 13 01:17:17.160708 kubelet[2345]: E1213 01:17:17.160685 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:17.160964 containerd[1544]: time="2024-12-13T01:17:17.160938865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"2310e778c3da08d17a485ba108a858f58350f215129f01ac621fa92766b190b4\"" Dec 13 01:17:17.161568 kubelet[2345]: E1213 01:17:17.161548 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:17.162359 kubelet[2345]: E1213 01:17:17.162330 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:17.163911 containerd[1544]: time="2024-12-13T01:17:17.163882109Z" level=info msg="CreateContainer within sandbox \"e55bfa0556a358e5ca0d87a4123e4f4fd4dc4ffca2e27f37c4e3a0cb24215419\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:17:17.164080 containerd[1544]: time="2024-12-13T01:17:17.164059591Z" level=info msg="CreateContainer within sandbox \"7691c657b1ec5d0829457d2b966bb5539b4d343c3c03461f47984e7862469e34\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 
01:17:17.166155 containerd[1544]: time="2024-12-13T01:17:17.166122626Z" level=info msg="CreateContainer within sandbox \"2310e778c3da08d17a485ba108a858f58350f215129f01ac621fa92766b190b4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:17:17.180381 containerd[1544]: time="2024-12-13T01:17:17.180311882Z" level=info msg="CreateContainer within sandbox \"e55bfa0556a358e5ca0d87a4123e4f4fd4dc4ffca2e27f37c4e3a0cb24215419\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5a129747070a51acfefc4c613cec493c0672bcd01f405079dbbf8a5409fc6a77\"" Dec 13 01:17:17.181303 containerd[1544]: time="2024-12-13T01:17:17.181197971Z" level=info msg="StartContainer for \"5a129747070a51acfefc4c613cec493c0672bcd01f405079dbbf8a5409fc6a77\"" Dec 13 01:17:17.185196 containerd[1544]: time="2024-12-13T01:17:17.185089090Z" level=info msg="CreateContainer within sandbox \"7691c657b1ec5d0829457d2b966bb5539b4d343c3c03461f47984e7862469e34\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"35b83e945c953fd5b85405845ede7ed4871393ea84b49891a93794d192d6126f\"" Dec 13 01:17:17.185513 containerd[1544]: time="2024-12-13T01:17:17.185491763Z" level=info msg="StartContainer for \"35b83e945c953fd5b85405845ede7ed4871393ea84b49891a93794d192d6126f\"" Dec 13 01:17:17.195705 containerd[1544]: time="2024-12-13T01:17:17.195653329Z" level=info msg="CreateContainer within sandbox \"2310e778c3da08d17a485ba108a858f58350f215129f01ac621fa92766b190b4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"403092622f5db333fa20b6ad93207ce4c0dd8afd5de0c89075545b4c8a94d119\"" Dec 13 01:17:17.196219 containerd[1544]: time="2024-12-13T01:17:17.196188134Z" level=info msg="StartContainer for \"403092622f5db333fa20b6ad93207ce4c0dd8afd5de0c89075545b4c8a94d119\"" Dec 13 01:17:17.246132 containerd[1544]: time="2024-12-13T01:17:17.244111107Z" level=info msg="StartContainer for \"5a129747070a51acfefc4c613cec493c0672bcd01f405079dbbf8a5409fc6a77\" returns successfully" Dec 13 01:17:17.264626 containerd[1544]: time="2024-12-13T01:17:17.260573192Z" level=info msg="StartContainer for \"35b83e945c953fd5b85405845ede7ed4871393ea84b49891a93794d192d6126f\" returns successfully" Dec 13 01:17:17.293225 containerd[1544]: time="2024-12-13T01:17:17.292926247Z" level=info msg="StartContainer for \"403092622f5db333fa20b6ad93207ce4c0dd8afd5de0c89075545b4c8a94d119\" returns successfully" Dec 13 01:17:17.354175 kubelet[2345]: E1213 01:17:17.354118 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Dec 13 01:17:17.453466 kubelet[2345]: I1213 01:17:17.453357 2345 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:17.973222 kubelet[2345]: E1213 01:17:17.973197 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:17.975106 kubelet[2345]: E1213 01:17:17.975042 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:17.981909 kubelet[2345]: E1213 01:17:17.981786 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:18.983876 kubelet[2345]: E1213 01:17:18.983730 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:18.984286 kubelet[2345]: E1213 01:17:18.984211 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:19.309716 kubelet[2345]: E1213 01:17:19.309615 2345 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:17:19.361032 kubelet[2345]: I1213 01:17:19.360060 2345 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:17:19.371747 kubelet[2345]: E1213 01:17:19.371114 2345 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:19.472704 kubelet[2345]: E1213 01:17:19.472650 2345 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:19.573661 kubelet[2345]: E1213 01:17:19.573380 2345 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:19.673771 kubelet[2345]: E1213 01:17:19.673734 2345 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:19.774261 kubelet[2345]: E1213 01:17:19.774224 2345 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:19.937318 kubelet[2345]: I1213 01:17:19.937058 2345 apiserver.go:52] "Watching apiserver" Dec 13 01:17:19.946566 kubelet[2345]: I1213 01:17:19.946530 2345 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:17:21.812957 systemd[1]: Reloading requested from client PID 2623 ('systemctl') (unit session-7.scope)... Dec 13 01:17:21.813254 systemd[1]: Reloading... Dec 13 01:17:21.869489 zram_generator::config[2666]: No configuration found. Dec 13 01:17:21.957343 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:17:22.011965 systemd[1]: Reloading finished in 198 ms. Dec 13 01:17:22.045297 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:22.058291 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:17:22.058645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:22.064772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:22.152554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:22.157885 (kubelet)[2714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:17:22.213694 kubelet[2714]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
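The lease failures above back off by doubling: interval="400ms", then "800ms", then "1.6s", while 10.0.0.7:6443 refuses connections. A minimal stdlib-only sketch of that doubling retry follows, assuming a 7s cap (the log only shows the first three steps); the kubelet's real node-lease controller uses an authenticated client-go clientset against the coordination.k8s.io API, not a plain HTTP GET:

// Sketch of the doubling retry seen in the lease errors above.
// Illustration only, not kubelet's controller.go; the cap is assumed.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s"
	interval := 400 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap; the log shows 400ms -> 800ms -> 1.6s

	for {
		resp, err := http.Get(url) // stand-in for the authenticated clientset call
		if err == nil {
			resp.Body.Close()
			fmt.Println("lease endpoint reachable")
			return
		}
		fmt.Printf("Failed to ensure lease exists, will retry err=%v interval=%s\n", err, interval)
		time.Sleep(interval)
		if interval *= 2; interval > maxInterval {
			interval = maxInterval
		}
	}
}

Once the apiserver container comes up (below), the same request succeeds and the node registers.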
Dec 13 01:17:22.215269 kubelet[2714]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:17:22.215269 kubelet[2714]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:22.215269 kubelet[2714]: I1213 01:17:22.214115 2714 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:17:22.218040 kubelet[2714]: I1213 01:17:22.218010 2714 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:17:22.218040 kubelet[2714]: I1213 01:17:22.218035 2714 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:17:22.218217 kubelet[2714]: I1213 01:17:22.218201 2714 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:17:22.219837 kubelet[2714]: I1213 01:17:22.219805 2714 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:17:22.223655 kubelet[2714]: I1213 01:17:22.223597 2714 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:22.228311 sudo[2729]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:17:22.228608 sudo[2729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:17:22.229356 kubelet[2714]: I1213 01:17:22.229333 2714 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:17:22.229849 kubelet[2714]: I1213 01:17:22.229829 2714 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:17:22.230132 kubelet[2714]: I1213 01:17:22.230112 2714 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:17:22.230292 kubelet[2714]: I1213 01:17:22.230277 2714 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:17:22.230342 kubelet[2714]: I1213 01:17:22.230335 2714 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:17:22.230407 kubelet[2714]: I1213 01:17:22.230399 2714 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:22.230623 kubelet[2714]: I1213 01:17:22.230602 2714 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:17:22.230713 kubelet[2714]: I1213 01:17:22.230701 2714 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:17:22.230796 kubelet[2714]: I1213 01:17:22.230787 2714 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:17:22.230848 kubelet[2714]: I1213 01:17:22.230840 2714 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:17:22.232968 kubelet[2714]: I1213 01:17:22.231761 2714 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:17:22.232968 kubelet[2714]: I1213 01:17:22.231934 2714 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:17:22.232968 kubelet[2714]: I1213 01:17:22.232316 2714 server.go:1256] "Started kubelet" Dec 13 01:17:22.234069 kubelet[2714]: I1213 01:17:22.233550 2714 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:17:22.234069 kubelet[2714]: I1213 01:17:22.233771 2714 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:17:22.234690 kubelet[2714]: I1213 01:17:22.234483 2714 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:17:22.237570 kubelet[2714]: I1213 01:17:22.237542 2714 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Dec 13 01:17:22.237748 kubelet[2714]: I1213 01:17:22.237728 2714 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:17:22.240477 kubelet[2714]: I1213 01:17:22.240181 2714 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:17:22.243034 kubelet[2714]: I1213 01:17:22.242992 2714 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:17:22.244526 kubelet[2714]: I1213 01:17:22.243157 2714 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:17:22.244526 kubelet[2714]: I1213 01:17:22.244068 2714 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:17:22.244526 kubelet[2714]: I1213 01:17:22.244310 2714 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:17:22.253180 kubelet[2714]: I1213 01:17:22.253152 2714 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:17:22.272871 kubelet[2714]: I1213 01:17:22.272796 2714 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:17:22.275300 kubelet[2714]: I1213 01:17:22.274569 2714 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:17:22.275300 kubelet[2714]: I1213 01:17:22.274591 2714 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:17:22.275300 kubelet[2714]: I1213 01:17:22.274610 2714 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:17:22.275300 kubelet[2714]: E1213 01:17:22.274657 2714 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:17:22.311493 kubelet[2714]: I1213 01:17:22.311449 2714 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:17:22.311493 kubelet[2714]: I1213 01:17:22.311490 2714 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:17:22.311493 kubelet[2714]: I1213 01:17:22.311510 2714 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:22.311749 kubelet[2714]: I1213 01:17:22.311660 2714 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:17:22.311749 kubelet[2714]: I1213 01:17:22.311693 2714 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:17:22.311749 kubelet[2714]: I1213 01:17:22.311701 2714 policy_none.go:49] "None policy: Start" Dec 13 01:17:22.312921 kubelet[2714]: I1213 01:17:22.312681 2714 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:17:22.312921 kubelet[2714]: I1213 01:17:22.312718 2714 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:17:22.312921 kubelet[2714]: I1213 01:17:22.312861 2714 state_mem.go:75] "Updated machine memory state" Dec 13 01:17:22.314469 kubelet[2714]: I1213 01:17:22.313996 2714 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:17:22.314469 kubelet[2714]: I1213 01:17:22.314215 2714 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:17:22.344071 kubelet[2714]: I1213 01:17:22.343974 2714 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:22.353359 kubelet[2714]: I1213 01:17:22.353319 2714 kubelet_node_status.go:112] "Node was previously 
registered" node="localhost" Dec 13 01:17:22.353476 kubelet[2714]: I1213 01:17:22.353414 2714 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:17:22.375488 kubelet[2714]: I1213 01:17:22.375369 2714 topology_manager.go:215] "Topology Admit Handler" podUID="3ebb8afb2713b92ba6c215f3f88d2b87" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:17:22.375488 kubelet[2714]: I1213 01:17:22.375483 2714 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:17:22.375664 kubelet[2714]: I1213 01:17:22.375549 2714 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:17:22.445859 kubelet[2714]: I1213 01:17:22.445824 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ebb8afb2713b92ba6c215f3f88d2b87-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ebb8afb2713b92ba6c215f3f88d2b87\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:22.445859 kubelet[2714]: I1213 01:17:22.445870 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:22.445994 kubelet[2714]: I1213 01:17:22.445892 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:22.445994 kubelet[2714]: I1213 01:17:22.445915 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:22.445994 kubelet[2714]: I1213 01:17:22.445935 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:17:22.446076 kubelet[2714]: I1213 01:17:22.446008 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ebb8afb2713b92ba6c215f3f88d2b87-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ebb8afb2713b92ba6c215f3f88d2b87\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:22.446076 kubelet[2714]: I1213 01:17:22.446046 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ebb8afb2713b92ba6c215f3f88d2b87-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"3ebb8afb2713b92ba6c215f3f88d2b87\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:22.446076 kubelet[2714]: I1213 01:17:22.446070 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:22.446136 kubelet[2714]: I1213 01:17:22.446088 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:22.688488 kubelet[2714]: E1213 01:17:22.687988 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:22.688988 kubelet[2714]: E1213 01:17:22.688954 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:22.689088 kubelet[2714]: E1213 01:17:22.689069 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:22.693060 sudo[2729]: pam_unix(sudo:session): session closed for user root Dec 13 01:17:23.231498 kubelet[2714]: I1213 01:17:23.231450 2714 apiserver.go:52] "Watching apiserver" Dec 13 01:17:23.245044 kubelet[2714]: I1213 01:17:23.245012 2714 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:17:23.283489 kubelet[2714]: E1213 01:17:23.282806 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:23.285805 kubelet[2714]: E1213 01:17:23.285782 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:23.288107 kubelet[2714]: E1213 01:17:23.288084 2714 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 01:17:23.288379 kubelet[2714]: E1213 01:17:23.288349 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:23.302420 kubelet[2714]: I1213 01:17:23.301744 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.301692424 podStartE2EDuration="1.301692424s" podCreationTimestamp="2024-12-13 01:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:23.301597667 +0000 UTC m=+1.137609755" watchObservedRunningTime="2024-12-13 01:17:23.301692424 +0000 UTC m=+1.137704512" Dec 13 01:17:23.316958 kubelet[2714]: I1213 01:17:23.316842 2714 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.316805661 podStartE2EDuration="1.316805661s" podCreationTimestamp="2024-12-13 01:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:23.316222238 +0000 UTC m=+1.152234326" watchObservedRunningTime="2024-12-13 01:17:23.316805661 +0000 UTC m=+1.152817749" Dec 13 01:17:23.316958 kubelet[2714]: I1213 01:17:23.316945 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.316928977 podStartE2EDuration="1.316928977s" podCreationTimestamp="2024-12-13 01:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:23.309478716 +0000 UTC m=+1.145490804" watchObservedRunningTime="2024-12-13 01:17:23.316928977 +0000 UTC m=+1.152941065" Dec 13 01:17:24.248922 sudo[1750]: pam_unix(sudo:session): session closed for user root Dec 13 01:17:24.252308 sshd[1743]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:24.256014 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:39310.service: Deactivated successfully. Dec 13 01:17:24.258041 systemd-logind[1523]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:17:24.258120 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:17:24.260171 systemd-logind[1523]: Removed session 7. Dec 13 01:17:24.284045 kubelet[2714]: E1213 01:17:24.283968 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:24.284378 kubelet[2714]: E1213 01:17:24.284200 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:24.284378 kubelet[2714]: E1213 01:17:24.284225 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:31.097196 kubelet[2714]: E1213 01:17:31.097098 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:31.134548 update_engine[1530]: I20241213 01:17:31.134254 1530 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:17:31.160630 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2796) Dec 13 01:17:31.193021 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2795) Dec 13 01:17:31.293346 kubelet[2714]: E1213 01:17:31.293319 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:32.294184 kubelet[2714]: E1213 01:17:32.294147 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:32.945787 kubelet[2714]: E1213 01:17:32.945737 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:33.862313 kubelet[2714]: E1213 01:17:33.862225 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:36.182602 kubelet[2714]: I1213 01:17:36.182563 2714 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:17:36.185850 containerd[1544]: time="2024-12-13T01:17:36.185741871Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:17:36.186271 kubelet[2714]: I1213 01:17:36.186099 2714 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:17:36.225112 kubelet[2714]: I1213 01:17:36.225066 2714 topology_manager.go:215] "Topology Admit Handler" podUID="79ebdaa2-6d60-46f6-b735-3b02d42fe04e" podNamespace="kube-system" podName="cilium-operator-5cc964979-4tvgk" Dec 13 01:17:36.253354 kubelet[2714]: I1213 01:17:36.253298 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79ebdaa2-6d60-46f6-b735-3b02d42fe04e-cilium-config-path\") pod \"cilium-operator-5cc964979-4tvgk\" (UID: \"79ebdaa2-6d60-46f6-b735-3b02d42fe04e\") " pod="kube-system/cilium-operator-5cc964979-4tvgk" Dec 13 01:17:36.253354 kubelet[2714]: I1213 01:17:36.253355 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jt2s\" (UniqueName: \"kubernetes.io/projected/79ebdaa2-6d60-46f6-b735-3b02d42fe04e-kube-api-access-9jt2s\") pod \"cilium-operator-5cc964979-4tvgk\" (UID: \"79ebdaa2-6d60-46f6-b735-3b02d42fe04e\") " pod="kube-system/cilium-operator-5cc964979-4tvgk" Dec 13 01:17:36.266912 kubelet[2714]: I1213 01:17:36.265233 2714 topology_manager.go:215] "Topology Admit Handler" podUID="08dac392-5276-42e9-8374-665a19ddbaed" podNamespace="kube-system" podName="kube-proxy-ccnxp" Dec 13 01:17:36.272916 kubelet[2714]: I1213 01:17:36.272887 2714 topology_manager.go:215] "Topology Admit Handler" podUID="3906de88-90ad-43fc-95db-0447f3b111bf" podNamespace="kube-system" podName="cilium-gr7tf" Dec 13 01:17:36.353960 kubelet[2714]: I1213 01:17:36.353822 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-run\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " 
pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.353960 kubelet[2714]: I1213 01:17:36.353870 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-bpf-maps\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.353960 kubelet[2714]: I1213 01:17:36.353893 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9bmt\" (UniqueName: \"kubernetes.io/projected/3906de88-90ad-43fc-95db-0447f3b111bf-kube-api-access-m9bmt\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.353960 kubelet[2714]: I1213 01:17:36.353915 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-lib-modules\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.353960 kubelet[2714]: I1213 01:17:36.353935 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3906de88-90ad-43fc-95db-0447f3b111bf-hubble-tls\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.353960 kubelet[2714]: I1213 01:17:36.353954 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cni-path\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.354218 kubelet[2714]: I1213 01:17:36.353976 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/08dac392-5276-42e9-8374-665a19ddbaed-kube-proxy\") pod \"kube-proxy-ccnxp\" (UID: \"08dac392-5276-42e9-8374-665a19ddbaed\") " pod="kube-system/kube-proxy-ccnxp" Dec 13 01:17:36.354218 kubelet[2714]: I1213 01:17:36.353998 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-cgroup\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.354218 kubelet[2714]: I1213 01:17:36.354081 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08dac392-5276-42e9-8374-665a19ddbaed-lib-modules\") pod \"kube-proxy-ccnxp\" (UID: \"08dac392-5276-42e9-8374-665a19ddbaed\") " pod="kube-system/kube-proxy-ccnxp" Dec 13 01:17:36.354218 kubelet[2714]: I1213 01:17:36.354121 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3906de88-90ad-43fc-95db-0447f3b111bf-clustermesh-secrets\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.354298 kubelet[2714]: I1213 01:17:36.354244 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-host-proc-sys-kernel\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.354298 kubelet[2714]: I1213 01:17:36.354270 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08dac392-5276-42e9-8374-665a19ddbaed-xtables-lock\") pod \"kube-proxy-ccnxp\" (UID: \"08dac392-5276-42e9-8374-665a19ddbaed\") " pod="kube-system/kube-proxy-ccnxp" Dec 13 01:17:36.354298 kubelet[2714]: I1213 01:17:36.354290 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-xtables-lock\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.354367 kubelet[2714]: I1213 01:17:36.354358 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2cmf\" (UniqueName: \"kubernetes.io/projected/08dac392-5276-42e9-8374-665a19ddbaed-kube-api-access-x2cmf\") pod \"kube-proxy-ccnxp\" (UID: \"08dac392-5276-42e9-8374-665a19ddbaed\") " pod="kube-system/kube-proxy-ccnxp" Dec 13 01:17:36.354394 kubelet[2714]: I1213 01:17:36.354389 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-etc-cni-netd\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.354501 kubelet[2714]: I1213 01:17:36.354412 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-config-path\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.354788 kubelet[2714]: I1213 01:17:36.354451 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-hostproc\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.354829 kubelet[2714]: I1213 01:17:36.354808 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-host-proc-sys-net\") pod \"cilium-gr7tf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") " pod="kube-system/cilium-gr7tf" Dec 13 01:17:36.362167 kubelet[2714]: E1213 01:17:36.362014 2714 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:17:36.364027 kubelet[2714]: E1213 01:17:36.363927 2714 projected.go:200] Error preparing data for projected volume kube-api-access-9jt2s for pod kube-system/cilium-operator-5cc964979-4tvgk: configmap "kube-root-ca.crt" not found Dec 13 01:17:36.364027 kubelet[2714]: E1213 01:17:36.364025 2714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/79ebdaa2-6d60-46f6-b735-3b02d42fe04e-kube-api-access-9jt2s podName:79ebdaa2-6d60-46f6-b735-3b02d42fe04e nodeName:}" 
failed. No retries permitted until 2024-12-13 01:17:36.864001996 +0000 UTC m=+14.700014084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9jt2s" (UniqueName: "kubernetes.io/projected/79ebdaa2-6d60-46f6-b735-3b02d42fe04e-kube-api-access-9jt2s") pod "cilium-operator-5cc964979-4tvgk" (UID: "79ebdaa2-6d60-46f6-b735-3b02d42fe04e") : configmap "kube-root-ca.crt" not found Dec 13 01:17:36.464821 kubelet[2714]: E1213 01:17:36.463324 2714 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:17:36.464821 kubelet[2714]: E1213 01:17:36.463365 2714 projected.go:200] Error preparing data for projected volume kube-api-access-x2cmf for pod kube-system/kube-proxy-ccnxp: configmap "kube-root-ca.crt" not found Dec 13 01:17:36.464821 kubelet[2714]: E1213 01:17:36.463411 2714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08dac392-5276-42e9-8374-665a19ddbaed-kube-api-access-x2cmf podName:08dac392-5276-42e9-8374-665a19ddbaed nodeName:}" failed. No retries permitted until 2024-12-13 01:17:36.963392144 +0000 UTC m=+14.799404192 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x2cmf" (UniqueName: "kubernetes.io/projected/08dac392-5276-42e9-8374-665a19ddbaed-kube-api-access-x2cmf") pod "kube-proxy-ccnxp" (UID: "08dac392-5276-42e9-8374-665a19ddbaed") : configmap "kube-root-ca.crt" not found Dec 13 01:17:36.468080 kubelet[2714]: E1213 01:17:36.468052 2714 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:17:36.468080 kubelet[2714]: E1213 01:17:36.468079 2714 projected.go:200] Error preparing data for projected volume kube-api-access-m9bmt for pod kube-system/cilium-gr7tf: configmap "kube-root-ca.crt" not found Dec 13 01:17:36.468205 kubelet[2714]: E1213 01:17:36.468119 2714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3906de88-90ad-43fc-95db-0447f3b111bf-kube-api-access-m9bmt podName:3906de88-90ad-43fc-95db-0447f3b111bf nodeName:}" failed. No retries permitted until 2024-12-13 01:17:36.968104594 +0000 UTC m=+14.804116682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m9bmt" (UniqueName: "kubernetes.io/projected/3906de88-90ad-43fc-95db-0447f3b111bf-kube-api-access-m9bmt") pod "cilium-gr7tf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf") : configmap "kube-root-ca.crt" not found Dec 13 01:17:37.127780 kubelet[2714]: E1213 01:17:37.127748 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:37.128784 containerd[1544]: time="2024-12-13T01:17:37.128245053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4tvgk,Uid:79ebdaa2-6d60-46f6-b735-3b02d42fe04e,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:37.150961 containerd[1544]: time="2024-12-13T01:17:37.150712012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:37.150961 containerd[1544]: time="2024-12-13T01:17:37.150768211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:37.150961 containerd[1544]: time="2024-12-13T01:17:37.150793770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:37.151138 containerd[1544]: time="2024-12-13T01:17:37.150984728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:37.169815 kubelet[2714]: E1213 01:17:37.169786 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:37.171036 containerd[1544]: time="2024-12-13T01:17:37.171002401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ccnxp,Uid:08dac392-5276-42e9-8374-665a19ddbaed,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:37.179035 kubelet[2714]: E1213 01:17:37.178806 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:37.189394 containerd[1544]: time="2024-12-13T01:17:37.189326259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gr7tf,Uid:3906de88-90ad-43fc-95db-0447f3b111bf,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:37.207394 containerd[1544]: time="2024-12-13T01:17:37.207345120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4tvgk,Uid:79ebdaa2-6d60-46f6-b735-3b02d42fe04e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c12189639e0475ce4d73ab895d63a385530ceeae26ab95a0be42ce87b16f4d2\"" Dec 13 01:17:37.208925 kubelet[2714]: E1213 01:17:37.208903 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:37.216255 containerd[1544]: time="2024-12-13T01:17:37.216183594Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:17:37.232870 containerd[1544]: time="2024-12-13T01:17:37.232712957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:37.232870 containerd[1544]: time="2024-12-13T01:17:37.232787356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:37.232870 containerd[1544]: time="2024-12-13T01:17:37.232798996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:37.233082 containerd[1544]: time="2024-12-13T01:17:37.232897794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:37.244905 containerd[1544]: time="2024-12-13T01:17:37.244812744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:37.244905 containerd[1544]: time="2024-12-13T01:17:37.244864343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:37.244905 containerd[1544]: time="2024-12-13T01:17:37.244874863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:37.245106 containerd[1544]: time="2024-12-13T01:17:37.244960302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:37.268662 containerd[1544]: time="2024-12-13T01:17:37.268592283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gr7tf,Uid:3906de88-90ad-43fc-95db-0447f3b111bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\"" Dec 13 01:17:37.269263 kubelet[2714]: E1213 01:17:37.269241 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:37.280852 containerd[1544]: time="2024-12-13T01:17:37.280814788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ccnxp,Uid:08dac392-5276-42e9-8374-665a19ddbaed,Namespace:kube-system,Attempt:0,} returns sandbox id \"17821af03002b3d910e571bf4d2f93486bbb21f438f7bcbb61dc377c4a155f52\"" Dec 13 01:17:37.281614 kubelet[2714]: E1213 01:17:37.281592 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:37.284869 containerd[1544]: time="2024-12-13T01:17:37.284837490Z" level=info msg="CreateContainer within sandbox \"17821af03002b3d910e571bf4d2f93486bbb21f438f7bcbb61dc377c4a155f52\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:17:37.304754 containerd[1544]: time="2024-12-13T01:17:37.303545742Z" level=info msg="CreateContainer within sandbox \"17821af03002b3d910e571bf4d2f93486bbb21f438f7bcbb61dc377c4a155f52\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2224bd09eaeb58a5aa076869a1b2730a6a8ac41a50165e46fb343f1f49b4568\"" Dec 13 01:17:37.304754 containerd[1544]: time="2024-12-13T01:17:37.304064015Z" level=info msg="StartContainer for \"b2224bd09eaeb58a5aa076869a1b2730a6a8ac41a50165e46fb343f1f49b4568\"" Dec 13 01:17:37.355536 containerd[1544]: time="2024-12-13T01:17:37.355440399Z" level=info msg="StartContainer for \"b2224bd09eaeb58a5aa076869a1b2730a6a8ac41a50165e46fb343f1f49b4568\" returns successfully" Dec 13 01:17:38.314856 kubelet[2714]: E1213 01:17:38.313723 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:38.615754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount147597375.mount: Deactivated successfully. 
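Each of the three MountVolume.SetUp failures above is parked rather than retried immediately: the operation records an earliest-next-attempt time 500ms out ("No retries permitted until ..."), and once the apiserver publishes the kube-root-ca.crt configmap the next pass succeeds. A sketch of that per-operation gate, with the volume name taken from the log; the real bookkeeping lives in nestedpendingoperations.go and grows the backoff exponentially on repeated failures:

// Sketch of the retry gate behind "No retries permitted until ...
// (durationBeforeRetry 500ms)": each failed mount records when it may run
// again, and the reconciler skips it until then. Illustration only.
package main

import (
	"fmt"
	"time"
)

type retryGate struct {
	next map[string]time.Time // volume name -> earliest next attempt
}

func (g *retryGate) markFailed(vol string, backoff time.Duration) {
	g.next[vol] = time.Now().Add(backoff)
}

func (g *retryGate) allowed(vol string) bool {
	return time.Now().After(g.next[vol]) // zero time => unknown volume, allowed
}

func main() {
	g := &retryGate{next: map[string]time.Time{}}
	vol := "kube-api-access-9jt2s" // projected token volume from the log

	g.markFailed(vol, 500*time.Millisecond) // configmap "kube-root-ca.crt" not found
	fmt.Println("retry allowed now?", g.allowed(vol)) // false

	time.Sleep(600 * time.Millisecond)
	fmt.Println("retry allowed after backoff?", g.allowed(vol)) // true
}

In the log all three projected token volumes clear on the retry half a second later, after which the sandboxes for cilium-operator, kube-proxy, and cilium are created.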
Dec 13 01:17:39.315050 kubelet[2714]: E1213 01:17:39.315008 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:41.924261 containerd[1544]: time="2024-12-13T01:17:41.924025625Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:41.924674 containerd[1544]: time="2024-12-13T01:17:41.924505100Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137718"
Dec 13 01:17:41.928779 containerd[1544]: time="2024-12-13T01:17:41.928734609Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:41.934675 containerd[1544]: time="2024-12-13T01:17:41.934638698Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.718389385s"
Dec 13 01:17:41.934750 containerd[1544]: time="2024-12-13T01:17:41.934679698Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Dec 13 01:17:41.939871 containerd[1544]: time="2024-12-13T01:17:41.939843316Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 01:17:41.942532 containerd[1544]: time="2024-12-13T01:17:41.942026689Z" level=info msg="CreateContainer within sandbox \"6c12189639e0475ce4d73ab895d63a385530ceeae26ab95a0be42ce87b16f4d2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 01:17:41.952372 containerd[1544]: time="2024-12-13T01:17:41.952310686Z" level=info msg="CreateContainer within sandbox \"6c12189639e0475ce4d73ab895d63a385530ceeae26ab95a0be42ce87b16f4d2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\""
Dec 13 01:17:41.953103 containerd[1544]: time="2024-12-13T01:17:41.952738881Z" level=info msg="StartContainer for \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\""
Dec 13 01:17:41.997715 containerd[1544]: time="2024-12-13T01:17:41.997667342Z" level=info msg="StartContainer for \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\" returns successfully"
Dec 13 01:17:42.325079 kubelet[2714]: E1213 01:17:42.324452 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:42.358291 kubelet[2714]: I1213 01:17:42.355673 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-4tvgk" podStartSLOduration=1.629407345 podStartE2EDuration="6.355612863s" podCreationTimestamp="2024-12-13 01:17:36 +0000 UTC" firstStartedPulling="2024-12-13 01:17:37.210450716 +0000 UTC m=+15.046462804" lastFinishedPulling="2024-12-13 01:17:41.936656274 +0000 UTC m=+19.772668322" observedRunningTime="2024-12-13 01:17:42.355149468 +0000 UTC m=+20.191161556" watchObservedRunningTime="2024-12-13 01:17:42.355612863 +0000 UTC m=+20.191624951"
Dec 13 01:17:42.359484 kubelet[2714]: I1213 01:17:42.358731 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ccnxp" podStartSLOduration=6.355831981 podStartE2EDuration="6.355831981s" podCreationTimestamp="2024-12-13 01:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:38.329097742 +0000 UTC m=+16.165109830" watchObservedRunningTime="2024-12-13 01:17:42.355831981 +0000 UTC m=+20.191844149"
Dec 13 01:17:43.322593 kubelet[2714]: E1213 01:17:43.322565 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:45.883229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1765701637.mount: Deactivated successfully.
Dec 13 01:17:47.093139 containerd[1544]: time="2024-12-13T01:17:47.093094355Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:47.094131 containerd[1544]: time="2024-12-13T01:17:47.094092626Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651574"
Dec 13 01:17:47.094848 containerd[1544]: time="2024-12-13T01:17:47.094782779Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:47.096424 containerd[1544]: time="2024-12-13T01:17:47.096313005Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.15643545s"
Dec 13 01:17:47.096424 containerd[1544]: time="2024-12-13T01:17:47.096347164Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Dec 13 01:17:47.098109 containerd[1544]: time="2024-12-13T01:17:47.098083228Z" level=info msg="CreateContainer within sandbox \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:17:47.112640 containerd[1544]: time="2024-12-13T01:17:47.112600211Z" level=info msg="CreateContainer within sandbox \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730\""
Dec 13 01:17:47.113715 containerd[1544]: time="2024-12-13T01:17:47.113673240Z" level=info msg="StartContainer for \"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730\""
Dec 13 01:17:47.302113 containerd[1544]: time="2024-12-13T01:17:47.299853960Z" level=info msg="StartContainer for \"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730\" returns successfully"
Dec 13 01:17:47.331056 kubelet[2714]: E1213 01:17:47.329779 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:47.376500 containerd[1544]: time="2024-12-13T01:17:47.371370163Z" level=info msg="shim disconnected" id=701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730 namespace=k8s.io
Dec 13 01:17:47.376843 containerd[1544]: time="2024-12-13T01:17:47.376674513Z" level=warning msg="cleaning up after shim disconnected" id=701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730 namespace=k8s.io
Dec 13 01:17:47.376843 containerd[1544]: time="2024-12-13T01:17:47.376698713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:48.106088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730-rootfs.mount: Deactivated successfully.
Dec 13 01:17:48.333089 kubelet[2714]: E1213 01:17:48.332707 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:48.336766 containerd[1544]: time="2024-12-13T01:17:48.336728706Z" level=info msg="CreateContainer within sandbox \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:17:48.348474 containerd[1544]: time="2024-12-13T01:17:48.348426600Z" level=info msg="CreateContainer within sandbox \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f\""
Dec 13 01:17:48.349483 containerd[1544]: time="2024-12-13T01:17:48.349282232Z" level=info msg="StartContainer for \"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f\""
Dec 13 01:17:48.394386 containerd[1544]: time="2024-12-13T01:17:48.394348581Z" level=info msg="StartContainer for \"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f\" returns successfully"
Dec 13 01:17:48.413974 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:17:48.414898 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:17:48.415129 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:17:48.420799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:17:48.435911 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:17:48.438831 containerd[1544]: time="2024-12-13T01:17:48.438771856Z" level=info msg="shim disconnected" id=a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f namespace=k8s.io
Dec 13 01:17:48.438831 containerd[1544]: time="2024-12-13T01:17:48.438830175Z" level=warning msg="cleaning up after shim disconnected" id=a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f namespace=k8s.io
Dec 13 01:17:48.438958 containerd[1544]: time="2024-12-13T01:17:48.438840295Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:49.106537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f-rootfs.mount: Deactivated successfully.
Dec 13 01:17:49.336343 kubelet[2714]: E1213 01:17:49.336041 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:49.339134 containerd[1544]: time="2024-12-13T01:17:49.338685793Z" level=info msg="CreateContainer within sandbox \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:17:49.374007 containerd[1544]: time="2024-12-13T01:17:49.373836884Z" level=info msg="CreateContainer within sandbox \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118\""
Dec 13 01:17:49.374344 containerd[1544]: time="2024-12-13T01:17:49.374317319Z" level=info msg="StartContainer for \"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118\""
Dec 13 01:17:49.428314 containerd[1544]: time="2024-12-13T01:17:49.428256284Z" level=info msg="StartContainer for \"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118\" returns successfully"
Dec 13 01:17:49.463157 containerd[1544]: time="2024-12-13T01:17:49.463091098Z" level=info msg="shim disconnected" id=0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118 namespace=k8s.io
Dec 13 01:17:49.463365 containerd[1544]: time="2024-12-13T01:17:49.463154137Z" level=warning msg="cleaning up after shim disconnected" id=0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118 namespace=k8s.io
Dec 13 01:17:49.463365 containerd[1544]: time="2024-12-13T01:17:49.463176497Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:50.106283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118-rootfs.mount: Deactivated successfully.
Dec 13 01:17:50.339671 kubelet[2714]: E1213 01:17:50.339316 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:50.342367 containerd[1544]: time="2024-12-13T01:17:50.342208536Z" level=info msg="CreateContainer within sandbox \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:17:50.385177 containerd[1544]: time="2024-12-13T01:17:50.385061651Z" level=info msg="CreateContainer within sandbox \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3\""
Dec 13 01:17:50.386737 containerd[1544]: time="2024-12-13T01:17:50.386690277Z" level=info msg="StartContainer for \"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3\""
Dec 13 01:17:50.432102 containerd[1544]: time="2024-12-13T01:17:50.431964172Z" level=info msg="StartContainer for \"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3\" returns successfully"
Dec 13 01:17:50.450613 containerd[1544]: time="2024-12-13T01:17:50.450315055Z" level=info msg="shim disconnected" id=c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3 namespace=k8s.io
Dec 13 01:17:50.450613 containerd[1544]: time="2024-12-13T01:17:50.450535693Z" level=warning msg="cleaning up after shim disconnected" id=c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3 namespace=k8s.io
Dec 13 01:17:50.450613 containerd[1544]: time="2024-12-13T01:17:50.450549333Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:51.106361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3-rootfs.mount: Deactivated successfully.
Dec 13 01:17:51.343405 kubelet[2714]: E1213 01:17:51.343340 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:51.348190 containerd[1544]: time="2024-12-13T01:17:51.348151709Z" level=info msg="CreateContainer within sandbox \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:17:51.363747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1735011134.mount: Deactivated successfully.
Dec 13 01:17:51.364158 containerd[1544]: time="2024-12-13T01:17:51.363943139Z" level=info msg="CreateContainer within sandbox \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\""
Dec 13 01:17:51.364821 containerd[1544]: time="2024-12-13T01:17:51.364794932Z" level=info msg="StartContainer for \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\""
Dec 13 01:17:51.412537 containerd[1544]: time="2024-12-13T01:17:51.412496899Z" level=info msg="StartContainer for \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\" returns successfully"
Dec 13 01:17:51.502867 kubelet[2714]: I1213 01:17:51.502836 2714 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:17:51.536954 kubelet[2714]: I1213 01:17:51.536877 2714 topology_manager.go:215] "Topology Admit Handler" podUID="4b566d72-3177-42fe-a10f-e43b0b4761a0" podNamespace="kube-system" podName="coredns-76f75df574-8j2k7"
Dec 13 01:17:51.546161 kubelet[2714]: I1213 01:17:51.541066 2714 topology_manager.go:215] "Topology Admit Handler" podUID="1e19958d-a1a7-41b4-bf24-1694335ab59e" podNamespace="kube-system" podName="coredns-76f75df574-j92gl"
Dec 13 01:17:51.558806 kubelet[2714]: I1213 01:17:51.558769 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e19958d-a1a7-41b4-bf24-1694335ab59e-config-volume\") pod \"coredns-76f75df574-j92gl\" (UID: \"1e19958d-a1a7-41b4-bf24-1694335ab59e\") " pod="kube-system/coredns-76f75df574-j92gl"
Dec 13 01:17:51.558806 kubelet[2714]: I1213 01:17:51.558814 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgdrc\" (UniqueName: \"kubernetes.io/projected/1e19958d-a1a7-41b4-bf24-1694335ab59e-kube-api-access-pgdrc\") pod \"coredns-76f75df574-j92gl\" (UID: \"1e19958d-a1a7-41b4-bf24-1694335ab59e\") " pod="kube-system/coredns-76f75df574-j92gl"
Dec 13 01:17:51.558954 kubelet[2714]: I1213 01:17:51.558836 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b566d72-3177-42fe-a10f-e43b0b4761a0-config-volume\") pod \"coredns-76f75df574-8j2k7\" (UID: \"4b566d72-3177-42fe-a10f-e43b0b4761a0\") " pod="kube-system/coredns-76f75df574-8j2k7"
Dec 13 01:17:51.558954 kubelet[2714]: I1213 01:17:51.558910 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dr2l\" (UniqueName: \"kubernetes.io/projected/4b566d72-3177-42fe-a10f-e43b0b4761a0-kube-api-access-5dr2l\") pod \"coredns-76f75df574-8j2k7\" (UID: \"4b566d72-3177-42fe-a10f-e43b0b4761a0\") " pod="kube-system/coredns-76f75df574-8j2k7"
Dec 13 01:17:51.851906 kubelet[2714]: E1213 01:17:51.851869 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:51.853713 containerd[1544]: time="2024-12-13T01:17:51.853668866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8j2k7,Uid:4b566d72-3177-42fe-a10f-e43b0b4761a0,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:51.854808 kubelet[2714]: E1213 01:17:51.854717 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:51.856121 containerd[1544]: time="2024-12-13T01:17:51.855552850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j92gl,Uid:1e19958d-a1a7-41b4-bf24-1694335ab59e,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:52.347969 kubelet[2714]: E1213 01:17:52.347929 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:52.362965 kubelet[2714]: I1213 01:17:52.362915 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gr7tf" podStartSLOduration=6.536084669 podStartE2EDuration="16.362873926s" podCreationTimestamp="2024-12-13 01:17:36 +0000 UTC" firstStartedPulling="2024-12-13 01:17:37.269834505 +0000 UTC m=+15.105846553" lastFinishedPulling="2024-12-13 01:17:47.096623722 +0000 UTC m=+24.932635810" observedRunningTime="2024-12-13 01:17:52.361017141 +0000 UTC m=+30.197029229" watchObservedRunningTime="2024-12-13 01:17:52.362873926 +0000 UTC m=+30.198886014"
Dec 13 01:17:52.849742 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:39762.service - OpenSSH per-connection server daemon (10.0.0.1:39762).
Dec 13 01:17:52.885781 sshd[3568]: Accepted publickey for core from 10.0.0.1 port 39762 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:52.887058 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:52.891531 systemd-logind[1523]: New session 8 of user core.
Dec 13 01:17:52.900707 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:17:53.041763 sshd[3568]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:53.046450 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:39762.service: Deactivated successfully.
Dec 13 01:17:53.048577 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:17:53.048593 systemd-logind[1523]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:17:53.050634 systemd-logind[1523]: Removed session 8.
Dec 13 01:17:53.362185 kubelet[2714]: E1213 01:17:53.361811 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:53.678164 systemd-networkd[1230]: cilium_host: Link UP
Dec 13 01:17:53.679553 systemd-networkd[1230]: cilium_net: Link UP
Dec 13 01:17:53.679556 systemd-networkd[1230]: cilium_net: Gained carrier
Dec 13 01:17:53.679835 systemd-networkd[1230]: cilium_host: Gained carrier
Dec 13 01:17:53.761153 systemd-networkd[1230]: cilium_vxlan: Link UP
Dec 13 01:17:53.761159 systemd-networkd[1230]: cilium_vxlan: Gained carrier
Dec 13 01:17:53.927662 systemd-networkd[1230]: cilium_host: Gained IPv6LL
Dec 13 01:17:54.084489 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:17:54.142742 systemd-networkd[1230]: cilium_net: Gained IPv6LL
Dec 13 01:17:54.364916 kubelet[2714]: E1213 01:17:54.364878 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:54.704293 systemd-networkd[1230]: lxc_health: Link UP
Dec 13 01:17:54.709414 systemd-networkd[1230]: lxc_health: Gained carrier
Dec 13 01:17:54.767586 systemd-networkd[1230]: cilium_vxlan: Gained IPv6LL
Dec 13 01:17:55.026294 systemd-networkd[1230]: lxc2eb03175615b: Link UP
Dec 13 01:17:55.036807 systemd-networkd[1230]: lxcfc41461cef73: Link UP
Dec 13 01:17:55.049707 kernel: eth0: renamed from tmpf8735
Dec 13 01:17:55.053018 systemd-networkd[1230]: lxc2eb03175615b: Gained carrier
Dec 13 01:17:55.054552 kernel: eth0: renamed from tmpdaac6
Dec 13 01:17:55.060273 systemd-networkd[1230]: lxcfc41461cef73: Gained carrier
Dec 13 01:17:55.369089 kubelet[2714]: E1213 01:17:55.368896 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:56.303598 systemd-networkd[1230]: lxc_health: Gained IPv6LL
Dec 13 01:17:56.559571 systemd-networkd[1230]: lxc2eb03175615b: Gained IPv6LL
Dec 13 01:17:56.814617 systemd-networkd[1230]: lxcfc41461cef73: Gained IPv6LL
Dec 13 01:17:58.055945 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:39778.service - OpenSSH per-connection server daemon (10.0.0.1:39778).
Dec 13 01:17:58.091766 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 39778 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:58.093178 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:58.097120 systemd-logind[1523]: New session 9 of user core.
Dec 13 01:17:58.111735 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:17:58.234649 sshd[3959]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:58.238133 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:39778.service: Deactivated successfully.
Dec 13 01:17:58.241846 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:17:58.242666 systemd-logind[1523]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:17:58.243812 systemd-logind[1523]: Removed session 9.
Dec 13 01:17:58.617559 containerd[1544]: time="2024-12-13T01:17:58.617415082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:58.617559 containerd[1544]: time="2024-12-13T01:17:58.617489202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:58.617559 containerd[1544]: time="2024-12-13T01:17:58.617507602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:58.618083 containerd[1544]: time="2024-12-13T01:17:58.617600921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:58.624980 containerd[1544]: time="2024-12-13T01:17:58.624522995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:58.624980 containerd[1544]: time="2024-12-13T01:17:58.624615234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:58.624980 containerd[1544]: time="2024-12-13T01:17:58.624631834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:58.624980 containerd[1544]: time="2024-12-13T01:17:58.624750433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:58.641841 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:17:58.645212 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:17:58.663475 containerd[1544]: time="2024-12-13T01:17:58.663038816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j92gl,Uid:1e19958d-a1a7-41b4-bf24-1694335ab59e,Namespace:kube-system,Attempt:0,} returns sandbox id \"daac647d0ab87c2f7464c8760337e346be3d74549cbe8dafb52d9053f1cde155\""
Dec 13 01:17:58.664268 kubelet[2714]: E1213 01:17:58.664244 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:58.667647 containerd[1544]: time="2024-12-13T01:17:58.667609345Z" level=info msg="CreateContainer within sandbox \"daac647d0ab87c2f7464c8760337e346be3d74549cbe8dafb52d9053f1cde155\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:17:58.668811 containerd[1544]: time="2024-12-13T01:17:58.668772137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8j2k7,Uid:4b566d72-3177-42fe-a10f-e43b0b4761a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f87350812ed2aef34e3a68da80039a6f00960711a46cd18dca8e1a9ed03f2a86\""
Dec 13 01:17:58.669557 kubelet[2714]: E1213 01:17:58.669363 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:58.671172 containerd[1544]: time="2024-12-13T01:17:58.671136041Z" level=info msg="CreateContainer within sandbox \"f87350812ed2aef34e3a68da80039a6f00960711a46cd18dca8e1a9ed03f2a86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:17:58.706070 containerd[1544]: time="2024-12-13T01:17:58.705930127Z" level=info msg="CreateContainer within sandbox \"f87350812ed2aef34e3a68da80039a6f00960711a46cd18dca8e1a9ed03f2a86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f390459b32a16160a1a08cbffb05f4f862eb5ec854866ce1aa3f0bd25c767c11\""
Dec 13 01:17:58.707348 containerd[1544]: time="2024-12-13T01:17:58.706606002Z" level=info msg="StartContainer for \"f390459b32a16160a1a08cbffb05f4f862eb5ec854866ce1aa3f0bd25c767c11\""
Dec 13 01:17:58.708746 containerd[1544]: time="2024-12-13T01:17:58.707685115Z" level=info msg="CreateContainer within sandbox \"daac647d0ab87c2f7464c8760337e346be3d74549cbe8dafb52d9053f1cde155\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9558e8e78ec8ac71e4a970ef68f5475f582e72197ac96408ba499d29cb57ea97\""
Dec 13 01:17:58.708746 containerd[1544]: time="2024-12-13T01:17:58.708059153Z" level=info msg="StartContainer for \"9558e8e78ec8ac71e4a970ef68f5475f582e72197ac96408ba499d29cb57ea97\""
Dec 13 01:17:58.763532 containerd[1544]: time="2024-12-13T01:17:58.763174662Z" level=info msg="StartContainer for \"f390459b32a16160a1a08cbffb05f4f862eb5ec854866ce1aa3f0bd25c767c11\" returns successfully"
Dec 13 01:17:58.763532 containerd[1544]: time="2024-12-13T01:17:58.763174742Z" level=info msg="StartContainer for \"9558e8e78ec8ac71e4a970ef68f5475f582e72197ac96408ba499d29cb57ea97\" returns successfully"
Dec 13 01:17:59.378172 kubelet[2714]: E1213 01:17:59.378021 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:59.379486 kubelet[2714]: E1213 01:17:59.379390 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:59.390181 kubelet[2714]: I1213 01:17:59.389987 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-j92gl" podStartSLOduration=23.389941909 podStartE2EDuration="23.389941909s" podCreationTimestamp="2024-12-13 01:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:59.387725564 +0000 UTC m=+37.223737652" watchObservedRunningTime="2024-12-13 01:17:59.389941909 +0000 UTC m=+37.225953957"
Dec 13 01:18:00.383082 kubelet[2714]: E1213 01:18:00.381529 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:01.383308 kubelet[2714]: E1213 01:18:01.383266 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:01.855919 kubelet[2714]: E1213 01:18:01.855051 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:01.871362 kubelet[2714]: I1213 01:18:01.870435 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8j2k7" podStartSLOduration=25.870378565 podStartE2EDuration="25.870378565s" podCreationTimestamp="2024-12-13 01:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:59.399830004 +0000 UTC m=+37.235842092" watchObservedRunningTime="2024-12-13 01:18:01.870378565 +0000 UTC m=+39.706390693"
Dec 13 01:18:02.385127 kubelet[2714]: E1213 01:18:02.385084 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:03.244689 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:56528.service - OpenSSH per-connection server daemon (10.0.0.1:56528).
Dec 13 01:18:03.287834 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 56528 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:03.289640 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:03.298440 systemd-logind[1523]: New session 10 of user core.
Dec 13 01:18:03.311814 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:18:03.435558 sshd[4147]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:03.439933 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:56528.service: Deactivated successfully.
Dec 13 01:18:03.442744 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:18:03.443689 systemd-logind[1523]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:18:03.444590 systemd-logind[1523]: Removed session 10.
Dec 13 01:18:05.309425 kubelet[2714]: I1213 01:18:05.309350 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:18:05.310486 kubelet[2714]: E1213 01:18:05.310401 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:05.396955 kubelet[2714]: E1213 01:18:05.396761 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:08.451738 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:56540.service - OpenSSH per-connection server daemon (10.0.0.1:56540).
Dec 13 01:18:08.484084 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 56540 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:08.485496 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:08.492374 systemd-logind[1523]: New session 11 of user core.
Dec 13 01:18:08.505726 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:18:08.630791 sshd[4166]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:08.641716 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:56542.service - OpenSSH per-connection server daemon (10.0.0.1:56542).
Dec 13 01:18:08.642100 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:56540.service: Deactivated successfully.
Dec 13 01:18:08.644657 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:18:08.645961 systemd-logind[1523]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:18:08.647125 systemd-logind[1523]: Removed session 11.
Dec 13 01:18:08.674937 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 56542 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:08.676511 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:08.680989 systemd-logind[1523]: New session 12 of user core.
Dec 13 01:18:08.694738 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:18:08.867772 sshd[4179]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:08.880868 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:56554.service - OpenSSH per-connection server daemon (10.0.0.1:56554).
Dec 13 01:18:08.883185 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:56542.service: Deactivated successfully.
Dec 13 01:18:08.892586 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:18:08.894376 systemd-logind[1523]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:18:08.897452 systemd-logind[1523]: Removed session 12.
Dec 13 01:18:08.926675 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 56554 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:08.928267 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:08.932564 systemd-logind[1523]: New session 13 of user core.
Dec 13 01:18:08.944746 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:18:09.070749 sshd[4192]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:09.073946 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:56554.service: Deactivated successfully.
Dec 13 01:18:09.076720 systemd-logind[1523]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:18:09.077783 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:18:09.079056 systemd-logind[1523]: Removed session 13.
Dec 13 01:18:14.078847 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:60446.service - OpenSSH per-connection server daemon (10.0.0.1:60446).
Dec 13 01:18:14.108237 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 60446 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:14.109652 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:14.113845 systemd-logind[1523]: New session 14 of user core.
Dec 13 01:18:14.124771 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:18:14.244057 sshd[4210]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:14.247969 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:60446.service: Deactivated successfully.
Dec 13 01:18:14.250127 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:18:14.250128 systemd-logind[1523]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:18:14.251298 systemd-logind[1523]: Removed session 14.
Dec 13 01:18:19.258693 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:60460.service - OpenSSH per-connection server daemon (10.0.0.1:60460).
Dec 13 01:18:19.292111 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 60460 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:19.293405 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:19.297509 systemd-logind[1523]: New session 15 of user core.
Dec 13 01:18:19.306814 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:18:19.415693 sshd[4225]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:19.427720 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:60464.service - OpenSSH per-connection server daemon (10.0.0.1:60464).
Dec 13 01:18:19.428196 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:60460.service: Deactivated successfully.
Dec 13 01:18:19.430372 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:18:19.433041 systemd-logind[1523]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:18:19.434605 systemd-logind[1523]: Removed session 15.
Dec 13 01:18:19.457484 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 60464 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:19.458533 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:19.462568 systemd-logind[1523]: New session 16 of user core.
Dec 13 01:18:19.471941 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:18:19.707033 sshd[4238]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:19.717199 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:60478.service - OpenSSH per-connection server daemon (10.0.0.1:60478).
Dec 13 01:18:19.717800 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:60464.service: Deactivated successfully.
Dec 13 01:18:19.720383 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:18:19.721779 systemd-logind[1523]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:18:19.722736 systemd-logind[1523]: Removed session 16.
Dec 13 01:18:19.751474 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 60478 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:19.752779 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:19.756589 systemd-logind[1523]: New session 17 of user core.
Dec 13 01:18:19.772745 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:18:21.119419 sshd[4251]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:21.128447 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:60482.service - OpenSSH per-connection server daemon (10.0.0.1:60482).
Dec 13 01:18:21.129503 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:60478.service: Deactivated successfully.
Dec 13 01:18:21.136116 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:18:21.142281 systemd-logind[1523]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:18:21.148945 systemd-logind[1523]: Removed session 17.
Dec 13 01:18:21.184483 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 60482 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:21.185477 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:21.190017 systemd-logind[1523]: New session 18 of user core.
Dec 13 01:18:21.196709 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:18:21.410955 sshd[4270]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:21.424726 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:60488.service - OpenSSH per-connection server daemon (10.0.0.1:60488).
Dec 13 01:18:21.425192 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:60482.service: Deactivated successfully.
Dec 13 01:18:21.429198 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:18:21.431332 systemd-logind[1523]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:18:21.432743 systemd-logind[1523]: Removed session 18.
Dec 13 01:18:21.460025 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 60488 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:21.461367 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:21.465193 systemd-logind[1523]: New session 19 of user core.
Dec 13 01:18:21.478120 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:18:21.592435 sshd[4286]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:21.594992 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:60488.service: Deactivated successfully.
Dec 13 01:18:21.600031 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:18:21.600685 systemd-logind[1523]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:18:21.603488 systemd-logind[1523]: Removed session 19.
Dec 13 01:18:26.607775 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:36826.service - OpenSSH per-connection server daemon (10.0.0.1:36826).
Dec 13 01:18:26.637504 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 36826 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:26.638044 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:26.644019 systemd-logind[1523]: New session 20 of user core.
Dec 13 01:18:26.650728 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:18:26.773734 sshd[4309]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:26.778545 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:36826.service: Deactivated successfully.
Dec 13 01:18:26.782873 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:18:26.785444 systemd-logind[1523]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:18:26.786820 systemd-logind[1523]: Removed session 20.
Dec 13 01:18:31.788915 systemd[1]: Started sshd@20-10.0.0.7:22-10.0.0.1:36832.service - OpenSSH per-connection server daemon (10.0.0.1:36832).
Dec 13 01:18:31.818428 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 36832 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:31.819741 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:31.826596 systemd-logind[1523]: New session 21 of user core.
Dec 13 01:18:31.836817 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:18:31.956172 sshd[4324]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:31.960312 systemd[1]: sshd@20-10.0.0.7:22-10.0.0.1:36832.service: Deactivated successfully.
Dec 13 01:18:31.962830 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:18:31.962868 systemd-logind[1523]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:18:31.964261 systemd-logind[1523]: Removed session 21.
Dec 13 01:18:36.966747 systemd[1]: Started sshd@21-10.0.0.7:22-10.0.0.1:34140.service - OpenSSH per-connection server daemon (10.0.0.1:34140).
Dec 13 01:18:36.998162 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 34140 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:36.999342 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:37.003006 systemd-logind[1523]: New session 22 of user core.
Dec 13 01:18:37.009688 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:18:37.115761 sshd[4339]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:37.118104 systemd[1]: sshd@21-10.0.0.7:22-10.0.0.1:34140.service: Deactivated successfully.
Dec 13 01:18:37.121257 systemd-logind[1523]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:18:37.128907 systemd[1]: Started sshd@22-10.0.0.7:22-10.0.0.1:34152.service - OpenSSH per-connection server daemon (10.0.0.1:34152).
Dec 13 01:18:37.129324 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:18:37.132665 systemd-logind[1523]: Removed session 22.
Dec 13 01:18:37.158001 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 34152 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:37.159237 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:37.163266 systemd-logind[1523]: New session 23 of user core.
Dec 13 01:18:37.183715 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:18:38.880098 containerd[1544]: time="2024-12-13T01:18:38.879853269Z" level=info msg="StopContainer for \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\" with timeout 30 (s)"
Dec 13 01:18:38.880511 containerd[1544]: time="2024-12-13T01:18:38.880445835Z" level=info msg="Stop container \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\" with signal terminated"
Dec 13 01:18:38.914559 containerd[1544]: time="2024-12-13T01:18:38.914514325Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:18:38.924987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c-rootfs.mount: Deactivated successfully.
Dec 13 01:18:38.936129 containerd[1544]: time="2024-12-13T01:18:38.936064838Z" level=info msg="shim disconnected" id=0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c namespace=k8s.io
Dec 13 01:18:38.936129 containerd[1544]: time="2024-12-13T01:18:38.936124039Z" level=warning msg="cleaning up after shim disconnected" id=0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c namespace=k8s.io
Dec 13 01:18:38.936129 containerd[1544]: time="2024-12-13T01:18:38.936132839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:38.944127 containerd[1544]: time="2024-12-13T01:18:38.944002484Z" level=info msg="StopContainer for \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\" with timeout 2 (s)"
Dec 13 01:18:38.944380 containerd[1544]: time="2024-12-13T01:18:38.944355048Z" level=info msg="Stop container \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\" with signal terminated"
Dec 13 01:18:38.951251 systemd-networkd[1230]: lxc_health: Link DOWN
Dec 13 01:18:38.951256 systemd-networkd[1230]: lxc_health: Lost carrier
Dec 13 01:18:38.980636 containerd[1544]: time="2024-12-13T01:18:38.980566081Z" level=info msg="StopContainer for \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\" returns successfully"
Dec 13 01:18:38.981298 containerd[1544]: time="2024-12-13T01:18:38.981270928Z" level=info msg="StopPodSandbox for \"6c12189639e0475ce4d73ab895d63a385530ceeae26ab95a0be42ce87b16f4d2\""
Dec 13 01:18:38.981547 containerd[1544]: time="2024-12-13T01:18:38.981431690Z" level=info msg="Container to stop \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:38.983196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c12189639e0475ce4d73ab895d63a385530ceeae26ab95a0be42ce87b16f4d2-shm.mount: Deactivated successfully.
Dec 13 01:18:38.996245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6-rootfs.mount: Deactivated successfully.
Dec 13 01:18:38.998261 containerd[1544]: time="2024-12-13T01:18:38.997958869Z" level=info msg="shim disconnected" id=2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6 namespace=k8s.io
Dec 13 01:18:38.998261 containerd[1544]: time="2024-12-13T01:18:38.998041230Z" level=warning msg="cleaning up after shim disconnected" id=2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6 namespace=k8s.io
Dec 13 01:18:38.998261 containerd[1544]: time="2024-12-13T01:18:38.998049990Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:39.009274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c12189639e0475ce4d73ab895d63a385530ceeae26ab95a0be42ce87b16f4d2-rootfs.mount: Deactivated successfully.
Dec 13 01:18:39.014105 containerd[1544]: time="2024-12-13T01:18:39.014060478Z" level=info msg="StopContainer for \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\" returns successfully"
Dec 13 01:18:39.014862 containerd[1544]: time="2024-12-13T01:18:39.014650964Z" level=info msg="StopPodSandbox for \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\""
Dec 13 01:18:39.014862 containerd[1544]: time="2024-12-13T01:18:39.014682765Z" level=info msg="Container to stop \"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:39.014862 containerd[1544]: time="2024-12-13T01:18:39.014695165Z" level=info msg="Container to stop \"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:39.014862 containerd[1544]: time="2024-12-13T01:18:39.014704365Z" level=info msg="Container to stop \"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:39.014862 containerd[1544]: time="2024-12-13T01:18:39.014713325Z" level=info msg="Container to stop \"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:39.014862 containerd[1544]: time="2024-12-13T01:18:39.014722605Z" level=info msg="Container to stop \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:39.014862 containerd[1544]: time="2024-12-13T01:18:39.014659084Z" level=info msg="shim disconnected" id=6c12189639e0475ce4d73ab895d63a385530ceeae26ab95a0be42ce87b16f4d2 namespace=k8s.io
Dec 13 01:18:39.014862 containerd[1544]: time="2024-12-13T01:18:39.014834726Z" level=warning msg="cleaning up after shim disconnected" id=6c12189639e0475ce4d73ab895d63a385530ceeae26ab95a0be42ce87b16f4d2 namespace=k8s.io
Dec 13 01:18:39.014862 containerd[1544]: time="2024-12-13T01:18:39.014843486Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:39.028060 containerd[1544]: time="2024-12-13T01:18:39.028001344Z" level=info msg="TearDown network for sandbox \"6c12189639e0475ce4d73ab895d63a385530ceeae26ab95a0be42ce87b16f4d2\" successfully"
Dec 13 01:18:39.028060 containerd[1544]: time="2024-12-13T01:18:39.028044304Z" level=info msg="StopPodSandbox for \"6c12189639e0475ce4d73ab895d63a385530ceeae26ab95a0be42ce87b16f4d2\" returns successfully"
Dec 13 01:18:39.048998 containerd[1544]: time="2024-12-13T01:18:39.048813641Z" level=info msg="shim disconnected" id=17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3 namespace=k8s.io
Dec 13 01:18:39.048998 containerd[1544]: time="2024-12-13T01:18:39.048864802Z" level=warning msg="cleaning up after shim disconnected" id=17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3 namespace=k8s.io
Dec 13 01:18:39.048998 containerd[1544]: time="2024-12-13T01:18:39.048872842Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:39.059911 containerd[1544]: time="2024-12-13T01:18:39.059864117Z" level=info msg="TearDown network for sandbox \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" successfully"
Dec 13 01:18:39.059911 containerd[1544]: time="2024-12-13T01:18:39.059908837Z" level=info msg="StopPodSandbox for \"17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3\" returns successfully"
Dec 13 01:18:39.149544 kubelet[2714]: I1213 01:18:39.149503 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-run\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.150598 kubelet[2714]: I1213 01:18:39.150189 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cni-path\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.150598 kubelet[2714]: I1213 01:18:39.150240 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9bmt\" (UniqueName: \"kubernetes.io/projected/3906de88-90ad-43fc-95db-0447f3b111bf-kube-api-access-m9bmt\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.150598 kubelet[2714]: I1213 01:18:39.150260 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-lib-modules\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.150598 kubelet[2714]: I1213 01:18:39.150277 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-host-proc-sys-kernel\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.150598 kubelet[2714]: I1213 01:18:39.150327 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-config-path\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.150598 kubelet[2714]: I1213 01:18:39.150349 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-etc-cni-netd\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.150757 kubelet[2714]: I1213 01:18:39.150366 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-cgroup\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.150757 kubelet[2714]: I1213 01:18:39.150386 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jt2s\" (UniqueName: \"kubernetes.io/projected/79ebdaa2-6d60-46f6-b735-3b02d42fe04e-kube-api-access-9jt2s\") pod \"79ebdaa2-6d60-46f6-b735-3b02d42fe04e\" (UID: \"79ebdaa2-6d60-46f6-b735-3b02d42fe04e\") "
Dec 13 01:18:39.150757 kubelet[2714]: I1213 01:18:39.150409 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3906de88-90ad-43fc-95db-0447f3b111bf-clustermesh-secrets\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.150757 kubelet[2714]: I1213 01:18:39.150431 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79ebdaa2-6d60-46f6-b735-3b02d42fe04e-cilium-config-path\") pod \"79ebdaa2-6d60-46f6-b735-3b02d42fe04e\" (UID: \"79ebdaa2-6d60-46f6-b735-3b02d42fe04e\") "
Dec 13 01:18:39.150757 kubelet[2714]: I1213 01:18:39.150448 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-hostproc\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.150757 kubelet[2714]: I1213 01:18:39.150479 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-bpf-maps\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.155003 kubelet[2714]: I1213 01:18:39.154971 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:39.155124 kubelet[2714]: I1213 01:18:39.155066 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cni-path" (OuterVolumeSpecName: "cni-path") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:39.155613 kubelet[2714]: I1213 01:18:39.155367 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:39.155693 kubelet[2714]: I1213 01:18:39.155636 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:39.155726 kubelet[2714]: I1213 01:18:39.155711 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:39.160129 kubelet[2714]: I1213 01:18:39.159850 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3906de88-90ad-43fc-95db-0447f3b111bf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:18:39.160129 kubelet[2714]: I1213 01:18:39.159903 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:39.163696 kubelet[2714]: I1213 01:18:39.161577 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-xtables-lock\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.163696 kubelet[2714]: I1213 01:18:39.161621 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3906de88-90ad-43fc-95db-0447f3b111bf-hubble-tls\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.163696 kubelet[2714]: I1213 01:18:39.161639 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-host-proc-sys-net\") pod \"3906de88-90ad-43fc-95db-0447f3b111bf\" (UID: \"3906de88-90ad-43fc-95db-0447f3b111bf\") "
Dec 13 01:18:39.163696 kubelet[2714]: I1213 01:18:39.161646 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ebdaa2-6d60-46f6-b735-3b02d42fe04e-kube-api-access-9jt2s" (OuterVolumeSpecName: "kube-api-access-9jt2s") pod "79ebdaa2-6d60-46f6-b735-3b02d42fe04e" (UID: "79ebdaa2-6d60-46f6-b735-3b02d42fe04e"). InnerVolumeSpecName "kube-api-access-9jt2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:18:39.163696 kubelet[2714]: I1213 01:18:39.161681 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:39.163696 kubelet[2714]: I1213 01:18:39.161691 2714 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:39.163869 kubelet[2714]: I1213 01:18:39.161704 2714 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:39.163869 kubelet[2714]: I1213 01:18:39.161715 2714 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:39.163869 kubelet[2714]: I1213 01:18:39.161737 2714 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:39.163869 kubelet[2714]: I1213 01:18:39.161747 2714 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:39.163869 kubelet[2714]: I1213 01:18:39.161756 2714 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:39.163869 kubelet[2714]: I1213 01:18:39.161766 2714 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3906de88-90ad-43fc-95db-0447f3b111bf-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:39.163869 kubelet[2714]: I1213 01:18:39.161791 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:39.164029 kubelet[2714]: I1213 01:18:39.163631 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3906de88-90ad-43fc-95db-0447f3b111bf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:18:39.164029 kubelet[2714]: I1213 01:18:39.163635 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ebdaa2-6d60-46f6-b735-3b02d42fe04e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "79ebdaa2-6d60-46f6-b735-3b02d42fe04e" (UID: "79ebdaa2-6d60-46f6-b735-3b02d42fe04e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:18:39.164029 kubelet[2714]: I1213 01:18:39.163659 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-hostproc" (OuterVolumeSpecName: "hostproc") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:39.164029 kubelet[2714]: I1213 01:18:39.163687 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:39.164224 kubelet[2714]: I1213 01:18:39.164198 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3906de88-90ad-43fc-95db-0447f3b111bf-kube-api-access-m9bmt" (OuterVolumeSpecName: "kube-api-access-m9bmt") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "kube-api-access-m9bmt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:18:39.164317 kubelet[2714]: I1213 01:18:39.164287 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3906de88-90ad-43fc-95db-0447f3b111bf" (UID: "3906de88-90ad-43fc-95db-0447f3b111bf"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:18:39.262741 kubelet[2714]: I1213 01:18:39.262693 2714 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9jt2s\" (UniqueName: \"kubernetes.io/projected/79ebdaa2-6d60-46f6-b735-3b02d42fe04e-kube-api-access-9jt2s\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:39.262741 kubelet[2714]: I1213 01:18:39.262730 2714 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79ebdaa2-6d60-46f6-b735-3b02d42fe04e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:39.262741 kubelet[2714]: I1213 01:18:39.262744 2714 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:39.262741 kubelet[2714]: I1213 01:18:39.262754 2714 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:39.262930 kubelet[2714]: I1213 01:18:39.262763 2714 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:39.262930 kubelet[2714]: I1213 01:18:39.262773 2714 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3906de88-90ad-43fc-95db-0447f3b111bf-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:39.262930 kubelet[2714]: I1213 01:18:39.262782 2714 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3906de88-90ad-43fc-95db-0447f3b111bf-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:39.262930 kubelet[2714]: I1213 01:18:39.262793 2714 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-m9bmt\" (UniqueName: \"kubernetes.io/projected/3906de88-90ad-43fc-95db-0447f3b111bf-kube-api-access-m9bmt\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:39.262930 kubelet[2714]: I1213 01:18:39.262803 2714 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3906de88-90ad-43fc-95db-0447f3b111bf-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:39.474872 kubelet[2714]: I1213 01:18:39.474786 2714 scope.go:117] "RemoveContainer" containerID="2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6" Dec 13 01:18:39.477745 containerd[1544]: time="2024-12-13T01:18:39.477420120Z" level=info msg="RemoveContainer for \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\"" Dec 13 01:18:39.480562 containerd[1544]: time="2024-12-13T01:18:39.480509312Z" level=info msg="RemoveContainer for \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\" returns successfully" Dec 13 01:18:39.481266 kubelet[2714]: I1213 01:18:39.480835 2714 scope.go:117] "RemoveContainer" containerID="c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3" Dec 13 01:18:39.482958 containerd[1544]: time="2024-12-13T01:18:39.482916177Z" level=info msg="RemoveContainer for \"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3\"" Dec 13 01:18:39.485148 containerd[1544]: time="2024-12-13T01:18:39.485105000Z" level=info msg="RemoveContainer for 
\"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3\" returns successfully" Dec 13 01:18:39.485291 kubelet[2714]: I1213 01:18:39.485256 2714 scope.go:117] "RemoveContainer" containerID="0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118" Dec 13 01:18:39.489050 containerd[1544]: time="2024-12-13T01:18:39.488959600Z" level=info msg="RemoveContainer for \"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118\"" Dec 13 01:18:39.493041 containerd[1544]: time="2024-12-13T01:18:39.492981643Z" level=info msg="RemoveContainer for \"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118\" returns successfully" Dec 13 01:18:39.493476 kubelet[2714]: I1213 01:18:39.493307 2714 scope.go:117] "RemoveContainer" containerID="a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f" Dec 13 01:18:39.495078 containerd[1544]: time="2024-12-13T01:18:39.495043464Z" level=info msg="RemoveContainer for \"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f\"" Dec 13 01:18:39.498403 containerd[1544]: time="2024-12-13T01:18:39.498275778Z" level=info msg="RemoveContainer for \"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f\" returns successfully" Dec 13 01:18:39.499011 kubelet[2714]: I1213 01:18:39.498538 2714 scope.go:117] "RemoveContainer" containerID="701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730" Dec 13 01:18:39.499895 containerd[1544]: time="2024-12-13T01:18:39.499850034Z" level=info msg="RemoveContainer for \"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730\"" Dec 13 01:18:39.501793 containerd[1544]: time="2024-12-13T01:18:39.501766694Z" level=info msg="RemoveContainer for \"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730\" returns successfully" Dec 13 01:18:39.502078 kubelet[2714]: I1213 01:18:39.502017 2714 scope.go:117] "RemoveContainer" containerID="2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6" Dec 13 01:18:39.502473 containerd[1544]: time="2024-12-13T01:18:39.502359580Z" level=error msg="ContainerStatus for \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\": not found" Dec 13 01:18:39.502661 kubelet[2714]: E1213 01:18:39.502643 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\": not found" containerID="2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6" Dec 13 01:18:39.505599 kubelet[2714]: I1213 01:18:39.505578 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6"} err="failed to get container status \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f0eb563ef8c97d35ed1cbdcf8c3d393411202f618fbdd24284958840fc720a6\": not found" Dec 13 01:18:39.505668 kubelet[2714]: I1213 01:18:39.505607 2714 scope.go:117] "RemoveContainer" containerID="c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3" Dec 13 01:18:39.505892 containerd[1544]: time="2024-12-13T01:18:39.505814777Z" level=error msg="ContainerStatus for 
\"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3\": not found" Dec 13 01:18:39.505997 kubelet[2714]: E1213 01:18:39.505941 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3\": not found" containerID="c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3" Dec 13 01:18:39.505997 kubelet[2714]: I1213 01:18:39.505974 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3"} err="failed to get container status \"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c49708d6d936d19d61dad9f224a5dc83f2075c889fa8115c5015a02d97ed28b3\": not found" Dec 13 01:18:39.505997 kubelet[2714]: I1213 01:18:39.505985 2714 scope.go:117] "RemoveContainer" containerID="0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118" Dec 13 01:18:39.506375 containerd[1544]: time="2024-12-13T01:18:39.506309342Z" level=error msg="ContainerStatus for \"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118\": not found" Dec 13 01:18:39.506436 kubelet[2714]: E1213 01:18:39.506425 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118\": not found" containerID="0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118" Dec 13 01:18:39.506546 kubelet[2714]: I1213 01:18:39.506465 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118"} err="failed to get container status \"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118\": rpc error: code = NotFound desc = an error occurred when try to find container \"0019eb6ccc93af5927b3131a0891e137ad54bae8a10672f0494ce67f9c404118\": not found" Dec 13 01:18:39.506546 kubelet[2714]: I1213 01:18:39.506476 2714 scope.go:117] "RemoveContainer" containerID="a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f" Dec 13 01:18:39.506881 containerd[1544]: time="2024-12-13T01:18:39.506822107Z" level=error msg="ContainerStatus for \"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f\": not found" Dec 13 01:18:39.506950 kubelet[2714]: E1213 01:18:39.506936 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f\": not found" containerID="a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f" Dec 13 01:18:39.506991 kubelet[2714]: I1213 01:18:39.506966 2714 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f"} err="failed to get container status \"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a86fc2d0d9c33233af302a2cdbd7f84c947f63b93ef860bf649304335568a97f\": not found" Dec 13 01:18:39.506991 kubelet[2714]: I1213 01:18:39.506977 2714 scope.go:117] "RemoveContainer" containerID="701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730" Dec 13 01:18:39.507142 containerd[1544]: time="2024-12-13T01:18:39.507113750Z" level=error msg="ContainerStatus for \"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730\": not found" Dec 13 01:18:39.507237 kubelet[2714]: E1213 01:18:39.507222 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730\": not found" containerID="701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730" Dec 13 01:18:39.507276 kubelet[2714]: I1213 01:18:39.507248 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730"} err="failed to get container status \"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730\": rpc error: code = NotFound desc = an error occurred when try to find container \"701ee08c324bfc014d2ab3def48d16f1e5c0f27ddff8015b9f6fee5f25726730\": not found" Dec 13 01:18:39.507276 kubelet[2714]: I1213 01:18:39.507258 2714 scope.go:117] "RemoveContainer" containerID="0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c" Dec 13 01:18:39.508090 containerd[1544]: time="2024-12-13T01:18:39.508063480Z" level=info msg="RemoveContainer for \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\"" Dec 13 01:18:39.510111 containerd[1544]: time="2024-12-13T01:18:39.510085621Z" level=info msg="RemoveContainer for \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\" returns successfully" Dec 13 01:18:39.510524 kubelet[2714]: I1213 01:18:39.510243 2714 scope.go:117] "RemoveContainer" containerID="0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c" Dec 13 01:18:39.510588 containerd[1544]: time="2024-12-13T01:18:39.510441865Z" level=error msg="ContainerStatus for \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\": not found" Dec 13 01:18:39.510634 kubelet[2714]: E1213 01:18:39.510583 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\": not found" containerID="0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c" Dec 13 01:18:39.510634 kubelet[2714]: I1213 01:18:39.510610 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c"} err="failed to get 
container status \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0401a1ab3c3832a15c9251936e36fb9d2be1b5f14f47afb04e38f9c538a73c7c\": not found" Dec 13 01:18:39.901051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3-rootfs.mount: Deactivated successfully. Dec 13 01:18:39.901197 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17cbd77915d5894f1008e7303b6fd953019d5c817a4b459d498b9a3f4f9190a3-shm.mount: Deactivated successfully. Dec 13 01:18:39.901282 systemd[1]: var-lib-kubelet-pods-3906de88\x2d90ad\x2d43fc\x2d95db\x2d0447f3b111bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm9bmt.mount: Deactivated successfully. Dec 13 01:18:39.901371 systemd[1]: var-lib-kubelet-pods-79ebdaa2\x2d6d60\x2d46f6\x2db735\x2d3b02d42fe04e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9jt2s.mount: Deactivated successfully. Dec 13 01:18:39.901449 systemd[1]: var-lib-kubelet-pods-3906de88\x2d90ad\x2d43fc\x2d95db\x2d0447f3b111bf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:18:39.901541 systemd[1]: var-lib-kubelet-pods-3906de88\x2d90ad\x2d43fc\x2d95db\x2d0447f3b111bf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:18:40.280396 kubelet[2714]: I1213 01:18:40.280296 2714 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3906de88-90ad-43fc-95db-0447f3b111bf" path="/var/lib/kubelet/pods/3906de88-90ad-43fc-95db-0447f3b111bf/volumes" Dec 13 01:18:40.280991 kubelet[2714]: I1213 01:18:40.280889 2714 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="79ebdaa2-6d60-46f6-b735-3b02d42fe04e" path="/var/lib/kubelet/pods/79ebdaa2-6d60-46f6-b735-3b02d42fe04e/volumes" Dec 13 01:18:40.831994 sshd[4355]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:40.840959 systemd[1]: Started sshd@23-10.0.0.7:22-10.0.0.1:34156.service - OpenSSH per-connection server daemon (10.0.0.1:34156). Dec 13 01:18:40.841796 systemd[1]: sshd@22-10.0.0.7:22-10.0.0.1:34152.service: Deactivated successfully. Dec 13 01:18:40.844624 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:18:40.845712 systemd-logind[1523]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:18:40.846551 systemd-logind[1523]: Removed session 23. Dec 13 01:18:40.872203 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 34156 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:40.873491 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:40.877622 systemd-logind[1523]: New session 24 of user core. Dec 13 01:18:40.890699 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 13 01:18:41.276286 kubelet[2714]: E1213 01:18:41.275922 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:42.275654 kubelet[2714]: E1213 01:18:42.275616 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:42.331100 kubelet[2714]: E1213 01:18:42.331039 2714 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:18:42.411108 sshd[4521]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:42.420135 systemd[1]: Started sshd@24-10.0.0.7:22-10.0.0.1:34162.service - OpenSSH per-connection server daemon (10.0.0.1:34162). Dec 13 01:18:42.421204 systemd[1]: sshd@23-10.0.0.7:22-10.0.0.1:34156.service: Deactivated successfully. Dec 13 01:18:42.424757 systemd-logind[1523]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:18:42.426309 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:18:42.431541 kubelet[2714]: I1213 01:18:42.431419 2714 topology_manager.go:215] "Topology Admit Handler" podUID="01f7e9bd-6525-49e2-9192-99785cf36025" podNamespace="kube-system" podName="cilium-dsc45" Dec 13 01:18:42.431541 kubelet[2714]: E1213 01:18:42.431509 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79ebdaa2-6d60-46f6-b735-3b02d42fe04e" containerName="cilium-operator" Dec 13 01:18:42.431541 kubelet[2714]: E1213 01:18:42.431522 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3906de88-90ad-43fc-95db-0447f3b111bf" containerName="mount-cgroup" Dec 13 01:18:42.431541 kubelet[2714]: E1213 01:18:42.431530 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3906de88-90ad-43fc-95db-0447f3b111bf" containerName="mount-bpf-fs" Dec 13 01:18:42.431541 kubelet[2714]: E1213 01:18:42.431537 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3906de88-90ad-43fc-95db-0447f3b111bf" containerName="cilium-agent" Dec 13 01:18:42.431541 kubelet[2714]: E1213 01:18:42.431544 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3906de88-90ad-43fc-95db-0447f3b111bf" containerName="apply-sysctl-overwrites" Dec 13 01:18:42.431541 kubelet[2714]: E1213 01:18:42.431551 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3906de88-90ad-43fc-95db-0447f3b111bf" containerName="clean-cilium-state" Dec 13 01:18:42.432781 systemd-logind[1523]: Removed session 24. Dec 13 01:18:42.440494 kubelet[2714]: I1213 01:18:42.438913 2714 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ebdaa2-6d60-46f6-b735-3b02d42fe04e" containerName="cilium-operator" Dec 13 01:18:42.440494 kubelet[2714]: I1213 01:18:42.438962 2714 memory_manager.go:354] "RemoveStaleState removing state" podUID="3906de88-90ad-43fc-95db-0447f3b111bf" containerName="cilium-agent" Dec 13 01:18:42.455362 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 34162 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:42.456846 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:42.467284 systemd-logind[1523]: New session 25 of user core. Dec 13 01:18:42.472857 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 13 01:18:42.480371 kubelet[2714]: I1213 01:18:42.479887 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmmsr\" (UniqueName: \"kubernetes.io/projected/01f7e9bd-6525-49e2-9192-99785cf36025-kube-api-access-fmmsr\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480371 kubelet[2714]: I1213 01:18:42.479931 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01f7e9bd-6525-49e2-9192-99785cf36025-bpf-maps\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480371 kubelet[2714]: I1213 01:18:42.479955 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01f7e9bd-6525-49e2-9192-99785cf36025-cilium-ipsec-secrets\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480371 kubelet[2714]: I1213 01:18:42.479974 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01f7e9bd-6525-49e2-9192-99785cf36025-cilium-cgroup\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480371 kubelet[2714]: I1213 01:18:42.480001 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01f7e9bd-6525-49e2-9192-99785cf36025-cni-path\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480371 kubelet[2714]: I1213 01:18:42.480019 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01f7e9bd-6525-49e2-9192-99785cf36025-cilium-run\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480662 kubelet[2714]: I1213 01:18:42.480040 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01f7e9bd-6525-49e2-9192-99785cf36025-clustermesh-secrets\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480662 kubelet[2714]: I1213 01:18:42.480070 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01f7e9bd-6525-49e2-9192-99785cf36025-host-proc-sys-kernel\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480662 kubelet[2714]: I1213 01:18:42.480092 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01f7e9bd-6525-49e2-9192-99785cf36025-lib-modules\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480662 kubelet[2714]: I1213 01:18:42.480112 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/01f7e9bd-6525-49e2-9192-99785cf36025-hubble-tls\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480662 kubelet[2714]: I1213 01:18:42.480133 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01f7e9bd-6525-49e2-9192-99785cf36025-etc-cni-netd\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480662 kubelet[2714]: I1213 01:18:42.480152 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01f7e9bd-6525-49e2-9192-99785cf36025-host-proc-sys-net\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480784 kubelet[2714]: I1213 01:18:42.480171 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01f7e9bd-6525-49e2-9192-99785cf36025-hostproc\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480784 kubelet[2714]: I1213 01:18:42.480192 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01f7e9bd-6525-49e2-9192-99785cf36025-xtables-lock\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.480784 kubelet[2714]: I1213 01:18:42.480210 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01f7e9bd-6525-49e2-9192-99785cf36025-cilium-config-path\") pod \"cilium-dsc45\" (UID: \"01f7e9bd-6525-49e2-9192-99785cf36025\") " pod="kube-system/cilium-dsc45" Dec 13 01:18:42.522006 sshd[4535]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:42.526286 systemd[1]: sshd@24-10.0.0.7:22-10.0.0.1:34162.service: Deactivated successfully. Dec 13 01:18:42.529327 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:18:42.530547 systemd-logind[1523]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:18:42.542911 systemd[1]: Started sshd@25-10.0.0.7:22-10.0.0.1:34104.service - OpenSSH per-connection server daemon (10.0.0.1:34104). Dec 13 01:18:42.543808 systemd-logind[1523]: Removed session 25. Dec 13 01:18:42.575367 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 34104 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:42.576709 sshd[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:42.580714 systemd-logind[1523]: New session 26 of user core. Dec 13 01:18:42.591662 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 13 01:18:42.744118 kubelet[2714]: E1213 01:18:42.744075 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:42.744790 containerd[1544]: time="2024-12-13T01:18:42.744737971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dsc45,Uid:01f7e9bd-6525-49e2-9192-99785cf36025,Namespace:kube-system,Attempt:0,}" Dec 13 01:18:42.761830 containerd[1544]: time="2024-12-13T01:18:42.761370326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:18:42.761830 containerd[1544]: time="2024-12-13T01:18:42.761793010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:18:42.761830 containerd[1544]: time="2024-12-13T01:18:42.761806530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:42.762212 containerd[1544]: time="2024-12-13T01:18:42.761906091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:42.791694 containerd[1544]: time="2024-12-13T01:18:42.791535809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dsc45,Uid:01f7e9bd-6525-49e2-9192-99785cf36025,Namespace:kube-system,Attempt:0,} returns sandbox id \"0840e35a07620d97c288459880df80896e426e109288c3e4cae8ba32c4977185\"" Dec 13 01:18:42.792124 kubelet[2714]: E1213 01:18:42.792099 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:42.794060 containerd[1544]: time="2024-12-13T01:18:42.794017592Z" level=info msg="CreateContainer within sandbox \"0840e35a07620d97c288459880df80896e426e109288c3e4cae8ba32c4977185\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:18:42.803712 containerd[1544]: time="2024-12-13T01:18:42.803665162Z" level=info msg="CreateContainer within sandbox \"0840e35a07620d97c288459880df80896e426e109288c3e4cae8ba32c4977185\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c4d31aa2e6eedb56fc01a9004d6e6a897acdb9b8588c81474f488883847ec678\"" Dec 13 01:18:42.804496 containerd[1544]: time="2024-12-13T01:18:42.804119326Z" level=info msg="StartContainer for \"c4d31aa2e6eedb56fc01a9004d6e6a897acdb9b8588c81474f488883847ec678\"" Dec 13 01:18:42.863102 containerd[1544]: time="2024-12-13T01:18:42.863052998Z" level=info msg="StartContainer for \"c4d31aa2e6eedb56fc01a9004d6e6a897acdb9b8588c81474f488883847ec678\" returns successfully" Dec 13 01:18:42.907650 containerd[1544]: time="2024-12-13T01:18:42.907554214Z" level=info msg="shim disconnected" id=c4d31aa2e6eedb56fc01a9004d6e6a897acdb9b8588c81474f488883847ec678 namespace=k8s.io Dec 13 01:18:42.907650 containerd[1544]: time="2024-12-13T01:18:42.907639615Z" level=warning msg="cleaning up after shim disconnected" id=c4d31aa2e6eedb56fc01a9004d6e6a897acdb9b8588c81474f488883847ec678 namespace=k8s.io Dec 13 01:18:42.907650 containerd[1544]: time="2024-12-13T01:18:42.907658655Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:43.479779 kubelet[2714]: E1213 01:18:43.479749 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:43.482831 containerd[1544]: time="2024-12-13T01:18:43.482793191Z" level=info msg="CreateContainer within sandbox \"0840e35a07620d97c288459880df80896e426e109288c3e4cae8ba32c4977185\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:18:43.501697 containerd[1544]: time="2024-12-13T01:18:43.501647481Z" level=info msg="CreateContainer within sandbox \"0840e35a07620d97c288459880df80896e426e109288c3e4cae8ba32c4977185\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a1632ac586483060da84307d7e16caea3f4245bef51a434ed1febfdead05ed4e\"" Dec 13 01:18:43.503418 containerd[1544]: time="2024-12-13T01:18:43.502974813Z" level=info msg="StartContainer for \"a1632ac586483060da84307d7e16caea3f4245bef51a434ed1febfdead05ed4e\"" Dec 13 01:18:43.544916 containerd[1544]: time="2024-12-13T01:18:43.544865150Z" level=info msg="StartContainer for \"a1632ac586483060da84307d7e16caea3f4245bef51a434ed1febfdead05ed4e\" returns successfully" Dec 13 01:18:43.562298 containerd[1544]: time="2024-12-13T01:18:43.562244027Z" level=info msg="shim disconnected" id=a1632ac586483060da84307d7e16caea3f4245bef51a434ed1febfdead05ed4e namespace=k8s.io Dec 13 01:18:43.562298 containerd[1544]: time="2024-12-13T01:18:43.562296268Z" level=warning msg="cleaning up after shim disconnected" id=a1632ac586483060da84307d7e16caea3f4245bef51a434ed1febfdead05ed4e namespace=k8s.io Dec 13 01:18:43.562298 containerd[1544]: time="2024-12-13T01:18:43.562305548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:43.571152 containerd[1544]: time="2024-12-13T01:18:43.571112467Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:18:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:18:43.593407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097160564.mount: Deactivated successfully. 
Dec 13 01:18:44.134335 kubelet[2714]: I1213 01:18:44.134037 2714 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:18:44Z","lastTransitionTime":"2024-12-13T01:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 01:18:44.484221 kubelet[2714]: E1213 01:18:44.484179 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:44.487848 containerd[1544]: time="2024-12-13T01:18:44.487812609Z" level=info msg="CreateContainer within sandbox \"0840e35a07620d97c288459880df80896e426e109288c3e4cae8ba32c4977185\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:18:44.506159 containerd[1544]: time="2024-12-13T01:18:44.506105327Z" level=info msg="CreateContainer within sandbox \"0840e35a07620d97c288459880df80896e426e109288c3e4cae8ba32c4977185\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc04094640ce31062477ec3d1119d1eed269b519a99a68de7413edf07c9daa33\"" Dec 13 01:18:44.506829 containerd[1544]: time="2024-12-13T01:18:44.506796413Z" level=info msg="StartContainer for \"bc04094640ce31062477ec3d1119d1eed269b519a99a68de7413edf07c9daa33\"" Dec 13 01:18:44.550718 containerd[1544]: time="2024-12-13T01:18:44.550677754Z" level=info msg="StartContainer for \"bc04094640ce31062477ec3d1119d1eed269b519a99a68de7413edf07c9daa33\" returns successfully" Dec 13 01:18:44.572387 containerd[1544]: time="2024-12-13T01:18:44.572336822Z" level=info msg="shim disconnected" id=bc04094640ce31062477ec3d1119d1eed269b519a99a68de7413edf07c9daa33 namespace=k8s.io Dec 13 01:18:44.572723 containerd[1544]: time="2024-12-13T01:18:44.572486024Z" level=warning msg="cleaning up after shim disconnected" id=bc04094640ce31062477ec3d1119d1eed269b519a99a68de7413edf07c9daa33 namespace=k8s.io Dec 13 01:18:44.572723 containerd[1544]: time="2024-12-13T01:18:44.572499544Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:44.593478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc04094640ce31062477ec3d1119d1eed269b519a99a68de7413edf07c9daa33-rootfs.mount: Deactivated successfully. 
Dec 13 01:18:45.490523 kubelet[2714]: E1213 01:18:45.488918 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:45.496267 containerd[1544]: time="2024-12-13T01:18:45.495367558Z" level=info msg="CreateContainer within sandbox \"0840e35a07620d97c288459880df80896e426e109288c3e4cae8ba32c4977185\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:18:45.506645 containerd[1544]: time="2024-12-13T01:18:45.506205689Z" level=info msg="CreateContainer within sandbox \"0840e35a07620d97c288459880df80896e426e109288c3e4cae8ba32c4977185\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac66e06c0a048a61661ee115725fee73fdabd7953d52fc496cfa86331e46db97\"" Dec 13 01:18:45.507936 containerd[1544]: time="2024-12-13T01:18:45.507654301Z" level=info msg="StartContainer for \"ac66e06c0a048a61661ee115725fee73fdabd7953d52fc496cfa86331e46db97\"" Dec 13 01:18:45.558374 containerd[1544]: time="2024-12-13T01:18:45.558331924Z" level=info msg="StartContainer for \"ac66e06c0a048a61661ee115725fee73fdabd7953d52fc496cfa86331e46db97\" returns successfully" Dec 13 01:18:45.573100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac66e06c0a048a61661ee115725fee73fdabd7953d52fc496cfa86331e46db97-rootfs.mount: Deactivated successfully. Dec 13 01:18:45.577186 containerd[1544]: time="2024-12-13T01:18:45.577132682Z" level=info msg="shim disconnected" id=ac66e06c0a048a61661ee115725fee73fdabd7953d52fc496cfa86331e46db97 namespace=k8s.io Dec 13 01:18:45.577186 containerd[1544]: time="2024-12-13T01:18:45.577183722Z" level=warning msg="cleaning up after shim disconnected" id=ac66e06c0a048a61661ee115725fee73fdabd7953d52fc496cfa86331e46db97 namespace=k8s.io Dec 13 01:18:45.577186 containerd[1544]: time="2024-12-13T01:18:45.577192042Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:46.493510 kubelet[2714]: E1213 01:18:46.492736 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:46.496340 containerd[1544]: time="2024-12-13T01:18:46.496292893Z" level=info msg="CreateContainer within sandbox \"0840e35a07620d97c288459880df80896e426e109288c3e4cae8ba32c4977185\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:18:46.521914 containerd[1544]: time="2024-12-13T01:18:46.521783938Z" level=info msg="CreateContainer within sandbox \"0840e35a07620d97c288459880df80896e426e109288c3e4cae8ba32c4977185\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d1662a59f52e26eff1f3246b75cac8c19083fad24feb93d2f718ac07dee04179\"" Dec 13 01:18:46.522951 containerd[1544]: time="2024-12-13T01:18:46.522909667Z" level=info msg="StartContainer for \"d1662a59f52e26eff1f3246b75cac8c19083fad24feb93d2f718ac07dee04179\"" Dec 13 01:18:46.571955 containerd[1544]: time="2024-12-13T01:18:46.571816741Z" level=info msg="StartContainer for \"d1662a59f52e26eff1f3246b75cac8c19083fad24feb93d2f718ac07dee04179\" returns successfully" Dec 13 01:18:46.848526 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Dec 13 01:18:47.499441 kubelet[2714]: E1213 01:18:47.499409 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
01:18:47.520992 kubelet[2714]: I1213 01:18:47.520940 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dsc45" podStartSLOduration=5.520902185 podStartE2EDuration="5.520902185s" podCreationTimestamp="2024-12-13 01:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:47.520116019 +0000 UTC m=+85.356128147" watchObservedRunningTime="2024-12-13 01:18:47.520902185 +0000 UTC m=+85.356914273" Dec 13 01:18:48.746783 kubelet[2714]: E1213 01:18:48.745661 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:49.593453 systemd-networkd[1230]: lxc_health: Link UP Dec 13 01:18:49.599501 systemd-networkd[1230]: lxc_health: Gained carrier Dec 13 01:18:50.638588 systemd-networkd[1230]: lxc_health: Gained IPv6LL Dec 13 01:18:50.747960 kubelet[2714]: E1213 01:18:50.747916 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:51.506293 kubelet[2714]: E1213 01:18:51.506251 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:52.507920 kubelet[2714]: E1213 01:18:52.507873 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:53.275867 kubelet[2714]: E1213 01:18:53.275830 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:55.276280 kubelet[2714]: E1213 01:18:55.276250 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:55.401734 sshd[4547]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:55.405491 systemd[1]: sshd@25-10.0.0.7:22-10.0.0.1:34104.service: Deactivated successfully. Dec 13 01:18:55.407994 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:18:55.410024 systemd-logind[1523]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:18:55.411061 systemd-logind[1523]: Removed session 26.