Dec 13 01:16:49.923106 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 01:16:49.923127 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:16:49.923136 kernel: KASLR enabled
Dec 13 01:16:49.923142 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:16:49.923148 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Dec 13 01:16:49.923153 kernel: random: crng init done
Dec 13 01:16:49.923160 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:16:49.923166 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Dec 13 01:16:49.923172 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:16:49.923179 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:49.923186 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:49.923191 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:49.923197 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:49.923203 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:49.923211 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:49.923218 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:49.923225 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:49.923231 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:49.923238 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 01:16:49.923244 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:16:49.923250 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:16:49.923257 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Dec 13 01:16:49.923263 kernel: Zone ranges:
Dec 13 01:16:49.923270 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:16:49.923276 kernel: DMA32 empty
Dec 13 01:16:49.923283 kernel: Normal empty
Dec 13 01:16:49.923289 kernel: Movable zone start for each node
Dec 13 01:16:49.923295 kernel: Early memory node ranges
Dec 13 01:16:49.923301 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Dec 13 01:16:49.923308 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Dec 13 01:16:49.923314 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Dec 13 01:16:49.923321 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 01:16:49.923327 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 01:16:49.923333 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 01:16:49.923340 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 01:16:49.923353 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 01:16:49.923359 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 01:16:49.923367 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:16:49.923373 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 01:16:49.923379 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:16:49.923388 kernel: psci: Trusted OS migration not required
Dec 13 01:16:49.923395 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:16:49.923402 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 01:16:49.923410 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:16:49.923417 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:16:49.923424 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 01:16:49.923434 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:16:49.923440 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:16:49.923447 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 01:16:49.923454 kernel: CPU features: detected: Spectre-v4
Dec 13 01:16:49.923460 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:16:49.923467 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 01:16:49.923474 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 01:16:49.923482 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 01:16:49.923489 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 01:16:49.923496 kernel: alternatives: applying boot alternatives
Dec 13 01:16:49.923503 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:16:49.923511 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:16:49.923517 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:16:49.923535 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:16:49.923542 kernel: Fallback order for Node 0: 0
Dec 13 01:16:49.923549 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 01:16:49.923556 kernel: Policy zone: DMA
Dec 13 01:16:49.923563 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:16:49.923571 kernel: software IO TLB: area num 4.
Dec 13 01:16:49.923577 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Dec 13 01:16:49.923585 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved)
Dec 13 01:16:49.923592 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:16:49.923598 kernel: trace event string verifier disabled
Dec 13 01:16:49.923605 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:16:49.923613 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:16:49.923620 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:16:49.923627 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:16:49.923634 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:16:49.923641 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:16:49.923648 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:16:49.923656 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:16:49.923663 kernel: GICv3: 256 SPIs implemented
Dec 13 01:16:49.923669 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:16:49.923676 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:16:49.923683 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 01:16:49.923689 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 01:16:49.923696 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 01:16:49.923711 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:16:49.923718 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:16:49.923725 kernel: GICv3: using LPI property table @0x00000000400f0000
Dec 13 01:16:49.923732 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Dec 13 01:16:49.923740 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:16:49.923746 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:49.923753 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 01:16:49.923760 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 01:16:49.923767 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 01:16:49.923773 kernel: arm-pv: using stolen time PV
Dec 13 01:16:49.923780 kernel: Console: colour dummy device 80x25
Dec 13 01:16:49.923787 kernel: ACPI: Core revision 20230628
Dec 13 01:16:49.923794 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 01:16:49.923800 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:16:49.923809 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:16:49.923816 kernel: landlock: Up and running.
Dec 13 01:16:49.923823 kernel: SELinux: Initializing.
Dec 13 01:16:49.923829 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:16:49.923836 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:16:49.923843 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:49.923850 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:49.923857 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:16:49.923864 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:16:49.923874 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 01:16:49.923882 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 01:16:49.923888 kernel: Remapping and enabling EFI services.
Dec 13 01:16:49.923895 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:16:49.923902 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:16:49.923909 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 01:16:49.923916 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Dec 13 01:16:49.923923 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:49.923929 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 01:16:49.923936 kernel: Detected PIPT I-cache on CPU2
Dec 13 01:16:49.923944 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 01:16:49.923952 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Dec 13 01:16:49.923963 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:49.923971 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 01:16:49.923978 kernel: Detected PIPT I-cache on CPU3
Dec 13 01:16:49.923985 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 01:16:49.923993 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Dec 13 01:16:49.924000 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:16:49.924007 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 01:16:49.924016 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:16:49.924023 kernel: SMP: Total of 4 processors activated.
Dec 13 01:16:49.924030 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:16:49.924037 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 01:16:49.924044 kernel: CPU features: detected: Common not Private translations
Dec 13 01:16:49.924052 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:16:49.924059 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 01:16:49.924066 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 01:16:49.924074 kernel: CPU features: detected: LSE atomic instructions
Dec 13 01:16:49.924082 kernel: CPU features: detected: Privileged Access Never
Dec 13 01:16:49.924089 kernel: CPU features: detected: RAS Extension Support
Dec 13 01:16:49.924096 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 01:16:49.924103 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:16:49.924111 kernel: alternatives: applying system-wide alternatives
Dec 13 01:16:49.924118 kernel: devtmpfs: initialized
Dec 13 01:16:49.924125 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:16:49.924132 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:16:49.924141 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:16:49.924148 kernel: SMBIOS 3.0.0 present.
Dec 13 01:16:49.924156 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Dec 13 01:16:49.924163 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:16:49.924170 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:16:49.924178 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:16:49.924185 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:16:49.924192 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:16:49.924199 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Dec 13 01:16:49.924207 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:16:49.924215 kernel: cpuidle: using governor menu
Dec 13 01:16:49.924222 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:16:49.924229 kernel: ASID allocator initialised with 32768 entries
Dec 13 01:16:49.924236 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:16:49.924248 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:16:49.924255 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 01:16:49.924262 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 01:16:49.924269 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:16:49.924278 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:16:49.924285 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:16:49.924292 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:16:49.924300 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:16:49.924307 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:16:49.924314 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:16:49.924321 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:16:49.924328 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:16:49.924335 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:16:49.924347 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:16:49.924355 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:16:49.924362 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:16:49.924369 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:16:49.924377 kernel: ACPI: Interpreter enabled
Dec 13 01:16:49.924384 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:16:49.924391 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:16:49.924398 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 01:16:49.924405 kernel: printk: console [ttyAMA0] enabled
Dec 13 01:16:49.924415 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:16:49.924540 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:16:49.924616 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:16:49.924684 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:16:49.924763 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 01:16:49.924844 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 01:16:49.924854 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 01:16:49.924864 kernel: PCI host bridge to bus 0000:00
Dec 13 01:16:49.924963 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 01:16:49.925029 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:16:49.925090 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 01:16:49.925148 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:16:49.925228 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 01:16:49.925304 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:16:49.925396 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 01:16:49.925474 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 01:16:49.925542 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:16:49.925609 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:16:49.925675 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 01:16:49.925763 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 01:16:49.925825 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 01:16:49.925888 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:16:49.925948 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 01:16:49.925957 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:16:49.925965 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:16:49.925972 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:16:49.925979 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:16:49.925987 kernel: iommu: Default domain type: Translated
Dec 13 01:16:49.925994 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:16:49.926003 kernel: efivars: Registered efivars operations
Dec 13 01:16:49.926010 kernel: vgaarb: loaded
Dec 13 01:16:49.926018 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:16:49.926025 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:16:49.926032 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:16:49.926039 kernel: pnp: PnP ACPI init
Dec 13 01:16:49.926116 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 01:16:49.926126 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:16:49.926136 kernel: NET: Registered PF_INET protocol family
Dec 13 01:16:49.926143 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:16:49.926150 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:16:49.926158 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:16:49.926165 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:16:49.926172 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:16:49.926180 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:16:49.926187 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:16:49.926194 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:16:49.926203 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:16:49.926210 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:16:49.926217 kernel: kvm [1]: HYP mode not available
Dec 13 01:16:49.926225 kernel: Initialise system trusted keyrings
Dec 13 01:16:49.926232 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:16:49.926239 kernel: Key type asymmetric registered
Dec 13 01:16:49.926246 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:16:49.926253 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:16:49.926260 kernel: io scheduler mq-deadline registered
Dec 13 01:16:49.926269 kernel: io scheduler kyber registered
Dec 13 01:16:49.926276 kernel: io scheduler bfq registered
Dec 13 01:16:49.926283 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:16:49.926290 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:16:49.926298 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:16:49.926377 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 01:16:49.926388 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:16:49.926395 kernel: thunder_xcv, ver 1.0
Dec 13 01:16:49.926402 kernel: thunder_bgx, ver 1.0
Dec 13 01:16:49.926411 kernel: nicpf, ver 1.0
Dec 13 01:16:49.926418 kernel: nicvf, ver 1.0
Dec 13 01:16:49.926493 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:16:49.926557 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:16:49 UTC (1734052609)
Dec 13 01:16:49.926566 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:16:49.926574 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 01:16:49.926581 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:16:49.926588 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:16:49.926598 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:16:49.926605 kernel: Segment Routing with IPv6
Dec 13 01:16:49.926612 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:16:49.926619 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:16:49.926627 kernel: Key type dns_resolver registered
Dec 13 01:16:49.926634 kernel: registered taskstats version 1
Dec 13 01:16:49.926641 kernel: Loading compiled-in X.509 certificates
Dec 13 01:16:49.926648 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:16:49.926655 kernel: Key type .fscrypt registered
Dec 13 01:16:49.926663 kernel: Key type fscrypt-provisioning registered
Dec 13 01:16:49.926671 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:16:49.926678 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:16:49.926685 kernel: ima: No architecture policies found
Dec 13 01:16:49.926692 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:16:49.926712 kernel: clk: Disabling unused clocks
Dec 13 01:16:49.926721 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:16:49.926729 kernel: Run /init as init process
Dec 13 01:16:49.926736 kernel: with arguments:
Dec 13 01:16:49.926745 kernel: /init
Dec 13 01:16:49.926752 kernel: with environment:
Dec 13 01:16:49.926759 kernel: HOME=/
Dec 13 01:16:49.926766 kernel: TERM=linux
Dec 13 01:16:49.926773 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:16:49.926782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:16:49.926791 systemd[1]: Detected virtualization kvm.
Dec 13 01:16:49.926799 systemd[1]: Detected architecture arm64.
Dec 13 01:16:49.926808 systemd[1]: Running in initrd.
Dec 13 01:16:49.926815 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:16:49.926822 systemd[1]: Hostname set to .
Dec 13 01:16:49.926830 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:16:49.926838 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:16:49.926846 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:49.926853 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:49.926861 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:16:49.926870 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:16:49.926878 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:16:49.926886 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:16:49.926895 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:16:49.926903 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:16:49.926911 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:49.926919 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:16:49.926928 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:16:49.926936 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:16:49.926943 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:16:49.926951 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:16:49.926959 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:16:49.926966 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:16:49.926974 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:16:49.926982 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:16:49.926991 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:49.926999 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:49.927006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:49.927014 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:16:49.927022 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:16:49.927030 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:16:49.927037 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:16:49.927045 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:16:49.927053 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:16:49.927062 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:16:49.927069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:49.927077 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:16:49.927085 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:49.927093 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:16:49.927117 systemd-journald[237]: Collecting audit messages is disabled.
Dec 13 01:16:49.927137 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:16:49.927145 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:16:49.927154 kernel: Bridge firewalling registered
Dec 13 01:16:49.927162 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:49.927170 systemd-journald[237]: Journal started
Dec 13 01:16:49.927189 systemd-journald[237]: Runtime Journal (/run/log/journal/b8e71cd4055c4d1fb4888b8641a9a61a) is 5.9M, max 47.3M, 41.4M free.
Dec 13 01:16:49.911282 systemd-modules-load[238]: Inserted module 'overlay'
Dec 13 01:16:49.925392 systemd-modules-load[238]: Inserted module 'br_netfilter'
Dec 13 01:16:49.931901 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:16:49.931362 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:49.932996 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:16:49.936264 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:49.937883 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:16:49.940332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:16:49.942856 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:16:49.950759 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:49.953628 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:49.956140 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:49.958522 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:49.975905 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:16:49.978087 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:49.988090 dracut-cmdline[277]: dracut-dracut-053
Dec 13 01:16:49.990463 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:16:50.003929 systemd-resolved[279]: Positive Trust Anchors:
Dec 13 01:16:50.003946 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:16:50.003976 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:16:50.008598 systemd-resolved[279]: Defaulting to hostname 'linux'.
Dec 13 01:16:50.011617 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:50.012785 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:50.053734 kernel: SCSI subsystem initialized
Dec 13 01:16:50.058722 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:16:50.065725 kernel: iscsi: registered transport (tcp)
Dec 13 01:16:50.078728 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:16:50.078749 kernel: QLogic iSCSI HBA Driver
Dec 13 01:16:50.121126 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:16:50.129910 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:16:50.148476 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:16:50.148515 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:16:50.149724 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:16:50.194735 kernel: raid6: neonx8 gen() 15755 MB/s
Dec 13 01:16:50.211740 kernel: raid6: neonx4 gen() 15635 MB/s
Dec 13 01:16:50.228732 kernel: raid6: neonx2 gen() 13233 MB/s
Dec 13 01:16:50.245735 kernel: raid6: neonx1 gen() 10448 MB/s
Dec 13 01:16:50.262735 kernel: raid6: int64x8 gen() 6949 MB/s
Dec 13 01:16:50.279734 kernel: raid6: int64x4 gen() 7318 MB/s
Dec 13 01:16:50.296732 kernel: raid6: int64x2 gen() 6115 MB/s
Dec 13 01:16:50.313782 kernel: raid6: int64x1 gen() 5044 MB/s
Dec 13 01:16:50.313797 kernel: raid6: using algorithm neonx8 gen() 15755 MB/s
Dec 13 01:16:50.331749 kernel: raid6: .... xor() 11915 MB/s, rmw enabled
Dec 13 01:16:50.331781 kernel: raid6: using neon recovery algorithm
Dec 13 01:16:50.337021 kernel: xor: measuring software checksum speed
Dec 13 01:16:50.337050 kernel: 8regs : 19759 MB/sec
Dec 13 01:16:50.337721 kernel: 32regs : 19655 MB/sec
Dec 13 01:16:50.338880 kernel: arm64_neon : 23078 MB/sec
Dec 13 01:16:50.338891 kernel: xor: using function: arm64_neon (23078 MB/sec)
Dec 13 01:16:50.389741 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:16:50.402744 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:16:50.415866 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:50.427040 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Dec 13 01:16:50.430138 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:16:50.436845 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:16:50.448214 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Dec 13 01:16:50.473228 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:16:50.484821 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:16:50.524770 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:50.531854 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:16:50.543068 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:16:50.545832 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:16:50.547324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:50.549384 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:16:50.560888 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:16:50.572010 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:16:50.579491 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 01:16:50.585872 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:16:50.585983 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:16:50.585995 kernel: GPT:9289727 != 19775487
Dec 13 01:16:50.586004 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:16:50.586021 kernel: GPT:9289727 != 19775487
Dec 13 01:16:50.586030 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:16:50.586039 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:50.582963 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:16:50.583088 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:50.586443 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:50.587903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:16:50.588041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:50.590939 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:50.602157 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:50.606770 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (507)
Dec 13 01:16:50.608738 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (508)
Dec 13 01:16:50.617747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:50.623562 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:16:50.628224 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:16:50.632831 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:16:50.636731 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:16:50.637884 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:16:50.645870 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:16:50.647648 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:50.652691 disk-uuid[549]: Primary Header is updated.
Dec 13 01:16:50.652691 disk-uuid[549]: Secondary Entries is updated.
Dec 13 01:16:50.652691 disk-uuid[549]: Secondary Header is updated.
Dec 13 01:16:50.656719 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:50.676215 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:51.669464 disk-uuid[550]: The operation has completed successfully.
Dec 13 01:16:51.670558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:51.690356 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:16:51.690457 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:16:51.717572 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:16:51.720320 sh[573]: Success
Dec 13 01:16:51.729744 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:16:51.765183 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:16:51.777983 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:16:51.779550 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:16:51.790333 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:16:51.790366 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:51.791442 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:16:51.791455 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:16:51.792837 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:16:51.796085 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:16:51.797372 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:16:51.805824 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:16:51.807267 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:16:51.814278 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:51.814309 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:51.814320 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:51.817725 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:51.824108 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:16:51.825942 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:51.831262 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:16:51.836860 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:16:51.907936 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:16:51.917850 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:16:51.936901 ignition[660]: Ignition 2.19.0
Dec 13 01:16:51.936911 ignition[660]: Stage: fetch-offline
Dec 13 01:16:51.936947 ignition[660]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:51.936956 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:51.937173 ignition[660]: parsed url from cmdline: ""
Dec 13 01:16:51.937176 ignition[660]: no config URL provided
Dec 13 01:16:51.937180 ignition[660]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:16:51.937188 ignition[660]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:16:51.937210 ignition[660]: op(1): [started] loading QEMU firmware config module
Dec 13 01:16:51.937215 ignition[660]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:16:51.944571 systemd-networkd[765]: lo: Link UP
Dec 13 01:16:51.944582 systemd-networkd[765]: lo: Gained carrier
Dec 13 01:16:51.945255 systemd-networkd[765]: Enumeration completed
Dec 13 01:16:51.945526 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:16:51.945691 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:51.952654 ignition[660]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:16:51.945694 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:16:51.946563 systemd-networkd[765]: eth0: Link UP
Dec 13 01:16:51.946566 systemd-networkd[765]: eth0: Gained carrier
Dec 13 01:16:51.946573 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:51.950911 systemd[1]: Reached target network.target - Network.
Dec 13 01:16:51.966757 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:16:51.996393 ignition[660]: parsing config with SHA512: 8883d982a2f5c6deb055ec8bb078cfe50bc87a987b88e87ecb65738b4f4d422f0247dc9b0ef24977dcd5f3b4894e0c4f0af0e8882fef43af7d6a06e6f24570ac
Dec 13 01:16:52.000443 unknown[660]: fetched base config from "system"
Dec 13 01:16:52.000452 unknown[660]: fetched user config from "qemu"
Dec 13 01:16:52.001467 systemd-resolved[279]: Detected conflict on linux IN A 10.0.0.10
Dec 13 01:16:52.002079 ignition[660]: fetch-offline: fetch-offline passed
Dec 13 01:16:52.001475 systemd-resolved[279]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Dec 13 01:16:52.002159 ignition[660]: Ignition finished successfully
Dec 13 01:16:52.005596 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:16:52.007890 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:16:52.016914 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:16:52.026970 ignition[772]: Ignition 2.19.0
Dec 13 01:16:52.026980 ignition[772]: Stage: kargs
Dec 13 01:16:52.027143 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:52.027152 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:52.028012 ignition[772]: kargs: kargs passed
Dec 13 01:16:52.030593 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:16:52.028057 ignition[772]: Ignition finished successfully
Dec 13 01:16:52.037877 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:16:52.047499 ignition[781]: Ignition 2.19.0
Dec 13 01:16:52.047508 ignition[781]: Stage: disks
Dec 13 01:16:52.047670 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:52.050105 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:16:52.047681 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:52.051602 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:16:52.048583 ignition[781]: disks: disks passed
Dec 13 01:16:52.053167 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:16:52.048628 ignition[781]: Ignition finished successfully
Dec 13 01:16:52.055064 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:16:52.056823 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:16:52.058217 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:16:52.073846 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:16:52.083534 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:16:52.086939 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:16:52.098805 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:16:52.139730 kernel: EXT4-fs (vda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:16:52.139733 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:16:52.140911 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:16:52.149801 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:52.151482 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:16:52.153742 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:16:52.153787 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:16:52.159235 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
Dec 13 01:16:52.153810 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:16:52.163510 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:52.163530 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:52.163540 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:52.158234 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:16:52.161832 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:16:52.167722 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:52.169319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:52.205524 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:16:52.209559 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:16:52.213721 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:16:52.217668 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:16:52.294776 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:16:52.307824 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:16:52.309422 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:16:52.315720 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:52.331600 ignition[914]: INFO : Ignition 2.19.0
Dec 13 01:16:52.331600 ignition[914]: INFO : Stage: mount
Dec 13 01:16:52.333847 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:52.333847 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:52.333847 ignition[914]: INFO : mount: mount passed
Dec 13 01:16:52.333847 ignition[914]: INFO : Ignition finished successfully
Dec 13 01:16:52.331619 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:16:52.333569 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:16:52.339794 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:16:52.789177 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:16:52.801886 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:52.810170 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925)
Dec 13 01:16:52.810203 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:16:52.810214 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:16:52.811717 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:52.813719 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:52.814963 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:52.834670 ignition[942]: INFO : Ignition 2.19.0
Dec 13 01:16:52.834670 ignition[942]: INFO : Stage: files
Dec 13 01:16:52.836347 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:52.836347 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:52.836347 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:16:52.839876 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:16:52.839876 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:16:52.839876 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:16:52.839876 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:16:52.839876 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:16:52.839134 unknown[942]: wrote ssh authorized keys file for user: core
Dec 13 01:16:52.847365 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:16:52.847365 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:16:52.902804 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:16:53.000551 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:16:53.002670 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:16:53.002670 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 01:16:53.312357 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:16:53.415581 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 01:16:53.417630 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Dec 13 01:16:53.432876 systemd-networkd[765]: eth0: Gained IPv6LL
Dec 13 01:16:53.729013 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:16:54.451175 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Dec 13 01:16:54.451175 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:16:54.454756 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:16:54.454756 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:16:54.454756 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:16:54.454756 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 01:16:54.454756 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:16:54.454756 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:16:54.454756 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 01:16:54.454756 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:16:54.487432 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:16:54.493072 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:16:54.494685 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:16:54.494685 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:16:54.494685 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:16:54.494685 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:16:54.494685 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:16:54.494685 ignition[942]: INFO : files: files passed
Dec 13 01:16:54.494685 ignition[942]: INFO : Ignition finished successfully
Dec 13 01:16:54.498024 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:16:54.507888 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:16:54.511040 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:16:54.512311 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:16:54.512402 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:16:54.519772 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:16:54.522040 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:54.522040 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:54.525438 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:54.526819 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:16:54.528169 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:16:54.537851 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:16:54.557247 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:16:54.558299 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:16:54.559808 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:16:54.561841 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:16:54.563636 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:16:54.571912 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:16:54.584419 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:16:54.596990 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:16:54.606102 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:54.607319 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:54.609365 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:16:54.611085 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:16:54.611212 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:16:54.613607 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:16:54.614737 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:16:54.616603 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:16:54.618421 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:16:54.620121 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:16:54.621978 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:16:54.623806 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:16:54.625836 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:16:54.627632 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:16:54.629588 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:16:54.631112 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:16:54.631244 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:16:54.633524 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:16:54.635403 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:54.637238 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:16:54.637366 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:54.639250 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:16:54.639391 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:16:54.641956 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:16:54.642080 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:16:54.644336 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:16:54.645869 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:16:54.646796 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:54.648925 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:16:54.650689 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:16:54.652420 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:16:54.652513 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:16:54.654285 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:16:54.654380 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:16:54.656488 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:16:54.656604 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:16:54.658357 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:16:54.658462 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:16:54.666905 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:16:54.668364 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:16:54.668503 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:54.671601 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:16:54.673137 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:16:54.673271 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:54.676516 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:16:54.679193 ignition[998]: INFO : Ignition 2.19.0
Dec 13 01:16:54.679193 ignition[998]: INFO : Stage: umount
Dec 13 01:16:54.679193 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:54.679193 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:54.676695 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:16:54.684163 ignition[998]: INFO : umount: umount passed
Dec 13 01:16:54.684163 ignition[998]: INFO : Ignition finished successfully
Dec 13 01:16:54.682687 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:16:54.682794 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:16:54.687139 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:16:54.687623 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:16:54.687747 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:16:54.690631 systemd[1]: Stopped target network.target - Network.
Dec 13 01:16:54.691666 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:16:54.691746 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:16:54.693504 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:16:54.693553 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:16:54.695210 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:16:54.695259 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:16:54.696801 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:16:54.696846 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:16:54.698659 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:16:54.700405 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:54.706740 systemd-networkd[765]: eth0: DHCPv6 lease lost
Dec 13 01:16:54.708149 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:16:54.708280 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:16:54.710272 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:16:54.710403 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:54.713247 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:16:54.713293 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:54.719797 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:16:54.720655 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:16:54.720747 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:16:54.722756 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:16:54.722805 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:54.724676 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:16:54.724743 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:54.727009 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:16:54.727056 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:54.728975 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:54.741919 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:16:54.742102 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:16:54.744260 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:16:54.744423 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:16:54.749820 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:16:54.749898 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:54.750991 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:16:54.751025 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:54.752917 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:16:54.752971 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:16:54.755660 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:16:54.755722 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:16:54.757525 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:16:54.757572 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:54.772876 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:16:54.773912 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:16:54.773977 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:54.776042 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:16:54.776088 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:54.778184 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:16:54.778269 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:16:54.779898 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:16:54.779968 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:16:54.782250 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:16:54.783472 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:16:54.783539 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:16:54.786071 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:16:54.796134 systemd[1]: Switching root.
Dec 13 01:16:54.830026 systemd-journald[237]: Journal stopped
Dec 13 01:16:55.571469 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:16:55.571531 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:16:55.571544 kernel: SELinux: policy capability open_perms=1
Dec 13 01:16:55.571554 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:16:55.571564 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:16:55.571574 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:16:55.571588 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:16:55.571598 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:16:55.571607 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:16:55.571619 kernel: audit: type=1403 audit(1734052614.988:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:16:55.571634 systemd[1]: Successfully loaded SELinux policy in 32.351ms.
Dec 13 01:16:55.571654 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.075ms.
Dec 13 01:16:55.571666 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:16:55.571677 systemd[1]: Detected virtualization kvm.
Dec 13 01:16:55.571687 systemd[1]: Detected architecture arm64.
Dec 13 01:16:55.571697 systemd[1]: Detected first boot.
Dec 13 01:16:55.571720 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:16:55.571730 zram_generator::config[1042]: No configuration found.
Dec 13 01:16:55.571744 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:16:55.571754 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:16:55.571765 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:16:55.571775 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:16:55.571788 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:16:55.571798 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:16:55.571809 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:16:55.571820 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:16:55.571832 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:16:55.571843 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:16:55.571854 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:16:55.571879 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:16:55.571891 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:55.571902 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:55.571912 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:16:55.571923 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:16:55.571934 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:16:55.571948 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:16:55.571960 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 01:16:55.571971 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:55.571981 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:16:55.571992 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:16:55.572002 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:16:55.572014 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:16:55.572026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:55.572039 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:16:55.572050 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:16:55.572061 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:16:55.572072 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:16:55.572083 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:16:55.572094 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:55.572105 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:55.572115 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:55.572126 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:16:55.572139 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:16:55.572150 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:16:55.572160 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:16:55.572170 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:16:55.572181 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:16:55.572191 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:16:55.572203 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:16:55.572214 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:16:55.572225 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:16:55.572236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:16:55.572247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:16:55.572257 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:16:55.572268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:16:55.572284 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:16:55.572295 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:16:55.572305 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:16:55.572316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:16:55.572334 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:16:55.572347 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:16:55.572357 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:16:55.572368 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:16:55.572378 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:16:55.572388 kernel: fuse: init (API version 7.39)
Dec 13 01:16:55.572398 kernel: loop: module loaded
Dec 13 01:16:55.572408 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:16:55.572418 kernel: ACPI: bus type drm_connector registered
Dec 13 01:16:55.572430 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:16:55.572440 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:16:55.572451 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:16:55.572479 systemd-journald[1110]: Collecting audit messages is disabled.
Dec 13 01:16:55.572501 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:16:55.572512 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:16:55.572522 systemd-journald[1110]: Journal started
Dec 13 01:16:55.572546 systemd-journald[1110]: Runtime Journal (/run/log/journal/b8e71cd4055c4d1fb4888b8641a9a61a) is 5.9M, max 47.3M, 41.4M free.
Dec 13 01:16:55.572584 systemd[1]: Stopped verity-setup.service.
Dec 13 01:16:55.375025 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:16:55.391564 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 01:16:55.391931 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:16:55.577207 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:16:55.577881 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:16:55.578972 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:16:55.580140 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:16:55.581230 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:16:55.582388 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:16:55.583579 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:16:55.584795 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:16:55.586145 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:55.587587 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:16:55.587786 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:16:55.589178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:16:55.589317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:16:55.590621 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:16:55.590823 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:16:55.592186 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:16:55.592321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:16:55.593747 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:16:55.593878 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:16:55.595218 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:16:55.596732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:16:55.597973 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:55.599431 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:16:55.600887 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:16:55.612571 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:16:55.620854 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:16:55.624874 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:16:55.627892 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:16:55.627935 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:16:55.629933 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:16:55.632270 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:16:55.634314 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:16:55.635456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:16:55.637013 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:16:55.638884 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:16:55.639995 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:16:55.643865 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:16:55.645793 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:16:55.646889 systemd-journald[1110]: Time spent on flushing to /var/log/journal/b8e71cd4055c4d1fb4888b8641a9a61a is 17.561ms for 858 entries.
Dec 13 01:16:55.646889 systemd-journald[1110]: System Journal (/var/log/journal/b8e71cd4055c4d1fb4888b8641a9a61a) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:16:55.678019 systemd-journald[1110]: Received client request to flush runtime journal.
Dec 13 01:16:55.678085 kernel: loop0: detected capacity change from 0 to 114432
Dec 13 01:16:55.646928 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:16:55.652895 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:16:55.656503 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:16:55.661759 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:55.663368 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:16:55.667060 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:16:55.668585 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:16:55.670321 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:16:55.674947 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:16:55.688737 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:16:55.692045 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:16:55.695787 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:16:55.698728 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:16:55.701645 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:55.709535 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:16:55.715005 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:16:55.721268 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:16:55.728108 kernel: loop1: detected capacity change from 0 to 194096
Dec 13 01:16:55.730048 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:16:55.732919 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:16:55.750339 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Dec 13 01:16:55.750356 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Dec 13 01:16:55.754413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:55.773733 kernel: loop2: detected capacity change from 0 to 114328
Dec 13 01:16:55.816738 kernel: loop3: detected capacity change from 0 to 114432
Dec 13 01:16:55.822866 kernel: loop4: detected capacity change from 0 to 194096
Dec 13 01:16:55.829133 kernel: loop5: detected capacity change from 0 to 114328
Dec 13 01:16:55.832993 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 01:16:55.833433 (sd-merge)[1180]: Merged extensions into '/usr'.
Dec 13 01:16:55.837062 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:16:55.837081 systemd[1]: Reloading...
Dec 13 01:16:55.885771 zram_generator::config[1206]: No configuration found.
Dec 13 01:16:55.934529 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:16:55.987867 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:16:56.023860 systemd[1]: Reloading finished in 186 ms.
Dec 13 01:16:56.053103 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:16:56.054774 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:16:56.064973 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:16:56.067115 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:16:56.076859 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:16:56.076875 systemd[1]: Reloading...
Dec 13 01:16:56.086906 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:16:56.087254 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:16:56.087939 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:16:56.088145 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Dec 13 01:16:56.088208 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Dec 13 01:16:56.090751 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:16:56.090762 systemd-tmpfiles[1242]: Skipping /boot
Dec 13 01:16:56.098310 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:16:56.098323 systemd-tmpfiles[1242]: Skipping /boot
Dec 13 01:16:56.122805 zram_generator::config[1269]: No configuration found.
Dec 13 01:16:56.207150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:16:56.243620 systemd[1]: Reloading finished in 166 ms.
Dec 13 01:16:56.261781 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:16:56.269079 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:56.276909 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:16:56.279767 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:16:56.283047 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:16:56.288987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:56.292061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:56.295012 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:16:56.298166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:16:56.303562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:16:56.308165 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:16:56.311985 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:16:56.313550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:16:56.314304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:16:56.315079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:16:56.317457 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:16:56.319315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:16:56.319468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:16:56.321896 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:16:56.322027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:16:56.330344 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:16:56.330534 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:16:56.341035 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:16:56.343827 systemd-udevd[1315]: Using default interface naming scheme 'v255'.
Dec 13 01:16:56.345018 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:16:56.347770 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:16:56.352166 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:16:56.353942 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:16:56.357042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:16:56.365106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:16:56.367697 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:16:56.372110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:16:56.373138 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:16:56.373260 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:16:56.374103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:16:56.374261 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:16:56.377690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:16:56.377844 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:16:56.377978 augenrules[1340]: No rules
Dec 13 01:16:56.380321 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:16:56.381881 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:16:56.383253 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:16:56.386077 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:16:56.386210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:16:56.394938 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:16:56.399577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:16:56.411026 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:16:56.417402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:16:56.418665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:16:56.420875 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:16:56.421947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:16:56.425919 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:16:56.428353 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:16:56.428822 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:16:56.428986 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:16:56.431058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:16:56.431190 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:16:56.432764 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1362)
Dec 13 01:16:56.439226 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:16:56.448278 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 13 01:16:56.468738 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1354)
Dec 13 01:16:56.471827 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1354)
Dec 13 01:16:56.484916 systemd-resolved[1310]: Positive Trust Anchors:
Dec 13 01:16:56.484935 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:16:56.484968 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:16:56.495347 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:16:56.496335 systemd-resolved[1310]: Defaulting to hostname 'linux'.
Dec 13 01:16:56.508031 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:16:56.509257 systemd-networkd[1379]: lo: Link UP
Dec 13 01:16:56.509266 systemd-networkd[1379]: lo: Gained carrier
Dec 13 01:16:56.510035 systemd-networkd[1379]: Enumeration completed
Dec 13 01:16:56.510125 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:16:56.510508 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:56.510511 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:16:56.511723 systemd-networkd[1379]: eth0: Link UP
Dec 13 01:16:56.511730 systemd-networkd[1379]: eth0: Gained carrier
Dec 13 01:16:56.511743 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:56.511998 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:56.513475 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:16:56.514928 systemd[1]: Reached target network.target - Network.
Dec 13 01:16:56.515910 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:56.517179 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:16:56.524855 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:16:56.525900 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:16:56.526445 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection.
Dec 13 01:16:56.527635 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:16:56.105869 systemd-resolved[1310]: Clock change detected. Flushing caches.
Dec 13 01:16:56.113097 systemd-journald[1110]: Time jumped backwards, rotating.
Dec 13 01:16:56.106030 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 01:16:56.106078 systemd-timesyncd[1381]: Initial clock synchronization to Fri 2024-12-13 01:16:56.105813 UTC.
Dec 13 01:16:56.142196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:56.152141 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:16:56.154901 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:16:56.182923 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:16:56.202762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:56.222500 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:16:56.224031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:16:56.225098 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:16:56.226228 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:16:56.227434 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:16:56.228955 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:16:56.230026 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:16:56.231179 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:16:56.232388 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:16:56.232423 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:16:56.233270 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:16:56.235611 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:16:56.238044 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:16:56.250989 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:16:56.253208 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:16:56.254698 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:16:56.255829 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:16:56.256811 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:16:56.257726 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:16:56.257758 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:16:56.258710 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:16:56.260749 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:16:56.262106 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:16:56.265061 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:16:56.267885 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:16:56.272953 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:16:56.275112 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:16:56.278411 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:16:56.280519 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:16:56.281083 jq[1411]: false
Dec 13 01:16:56.289120 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:16:56.291848 extend-filesystems[1412]: Found loop3
Dec 13 01:16:56.292803 extend-filesystems[1412]: Found loop4
Dec 13 01:16:56.292803 extend-filesystems[1412]: Found loop5
Dec 13 01:16:56.292803 extend-filesystems[1412]: Found vda
Dec 13 01:16:56.292803 extend-filesystems[1412]: Found vda1
Dec 13 01:16:56.292803 extend-filesystems[1412]: Found vda2
Dec 13 01:16:56.292803 extend-filesystems[1412]: Found vda3
Dec 13 01:16:56.292803 extend-filesystems[1412]: Found usr
Dec 13 01:16:56.292803 extend-filesystems[1412]: Found vda4
Dec 13 01:16:56.292803 extend-filesystems[1412]: Found vda6
Dec 13 01:16:56.292803 extend-filesystems[1412]: Found vda7
Dec 13 01:16:56.292803 extend-filesystems[1412]: Found vda9
Dec 13 01:16:56.292803 extend-filesystems[1412]: Checking size of /dev/vda9
Dec 13 01:16:56.296959 dbus-daemon[1410]: [system] SELinux support is enabled
Dec 13 01:16:56.293351 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:16:56.300596 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:16:56.301194 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:16:56.302594 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:16:56.307245 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:16:56.309058 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:16:56.313945 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:16:56.319190 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:16:56.319376 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:16:56.319691 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:16:56.319840 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:16:56.321022 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1365)
Dec 13 01:16:56.325594 extend-filesystems[1412]: Resized partition /dev/vda9
Dec 13 01:16:56.326501 jq[1429]: true
Dec 13 01:16:56.328315 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:16:56.328485 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:16:56.346286 (ntainerd)[1442]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:16:56.348079 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:16:56.348111 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:16:56.349387 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:16:56.349403 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:16:56.352657 jq[1436]: true
Dec 13 01:16:56.354850 extend-filesystems[1434]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:16:56.366407 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 01:16:56.386167 tar[1435]: linux-arm64/helm
Dec 13 01:16:56.392073 update_engine[1426]: I20241213 01:16:56.391834 1426 main.cc:92] Flatcar Update Engine starting
Dec 13 01:16:56.397385 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 01:16:56.397595 systemd-logind[1420]: New seat seat0.
Dec 13 01:16:56.398253 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:16:56.401244 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:16:56.403735 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 01:16:56.404342 update_engine[1426]: I20241213 01:16:56.404287 1426 update_check_scheduler.cc:74] Next update check in 11m31s
Dec 13 01:16:56.413105 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:16:56.425079 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 01:16:56.425079 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:16:56.425079 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 01:16:56.432582 extend-filesystems[1412]: Resized filesystem in /dev/vda9
Dec 13 01:16:56.426695 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:16:56.426865 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:16:56.440598 bash[1463]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:16:56.442608 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:16:56.449066 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 01:16:56.462268 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:16:56.569512 containerd[1442]: time="2024-12-13T01:16:56.569420747Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:16:56.595374 containerd[1442]: time="2024-12-13T01:16:56.595283587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:56.597534 containerd[1442]: time="2024-12-13T01:16:56.597495547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:16:56.597534 containerd[1442]: time="2024-12-13T01:16:56.597530027Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:16:56.597628 containerd[1442]: time="2024-12-13T01:16:56.597546787Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:16:56.597733 containerd[1442]: time="2024-12-13T01:16:56.597711907Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:16:56.597761 containerd[1442]: time="2024-12-13T01:16:56.597735787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:56.597807 containerd[1442]: time="2024-12-13T01:16:56.597791987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:16:56.597827 containerd[1442]: time="2024-12-13T01:16:56.597807267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:56.597997 containerd[1442]: time="2024-12-13T01:16:56.597976987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:16:56.597997 containerd[1442]: time="2024-12-13T01:16:56.597995387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:56.598041 containerd[1442]: time="2024-12-13T01:16:56.598008707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:16:56.598041 containerd[1442]: time="2024-12-13T01:16:56.598018187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:56.598136 containerd[1442]: time="2024-12-13T01:16:56.598091187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:56.598307 containerd[1442]: time="2024-12-13T01:16:56.598281387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:56.598401 containerd[1442]: time="2024-12-13T01:16:56.598381947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:16:56.598426 containerd[1442]: time="2024-12-13T01:16:56.598399747Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:16:56.598494 containerd[1442]: time="2024-12-13T01:16:56.598471507Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:16:56.598528 containerd[1442]: time="2024-12-13T01:16:56.598515027Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:16:56.720454 containerd[1442]: time="2024-12-13T01:16:56.720310027Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:16:56.720454 containerd[1442]: time="2024-12-13T01:16:56.720368627Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:16:56.720454 containerd[1442]: time="2024-12-13T01:16:56.720384347Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:16:56.720454 containerd[1442]: time="2024-12-13T01:16:56.720400027Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:16:56.720454 containerd[1442]: time="2024-12-13T01:16:56.720424787Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:16:56.720624 containerd[1442]: time="2024-12-13T01:16:56.720602707Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:16:56.720922 containerd[1442]: time="2024-12-13T01:16:56.720902507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:16:56.721116 containerd[1442]: time="2024-12-13T01:16:56.721097307Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:16:56.721153 containerd[1442]: time="2024-12-13T01:16:56.721121507Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:16:56.721153 containerd[1442]: time="2024-12-13T01:16:56.721135627Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:16:56.721153 containerd[1442]: time="2024-12-13T01:16:56.721151227Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:16:56.721218 containerd[1442]: time="2024-12-13T01:16:56.721164227Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:16:56.721218 containerd[1442]: time="2024-12-13T01:16:56.721177067Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:16:56.721218 containerd[1442]: time="2024-12-13T01:16:56.721191067Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:16:56.721218 containerd[1442]: time="2024-12-13T01:16:56.721205387Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:16:56.721218 containerd[1442]: time="2024-12-13T01:16:56.721218387Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:16:56.721304 containerd[1442]: time="2024-12-13T01:16:56.721231387Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:16:56.721304 containerd[1442]: time="2024-12-13T01:16:56.721246467Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:16:56.721304 containerd[1442]: time="2024-12-13T01:16:56.721269707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721304 containerd[1442]: time="2024-12-13T01:16:56.721285187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721304 containerd[1442]: time="2024-12-13T01:16:56.721297867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721384 containerd[1442]: time="2024-12-13T01:16:56.721311067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721384 containerd[1442]: time="2024-12-13T01:16:56.721324067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721384 containerd[1442]: time="2024-12-13T01:16:56.721338107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721384 containerd[1442]: time="2024-12-13T01:16:56.721350427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721384 containerd[1442]: time="2024-12-13T01:16:56.721362867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721384 containerd[1442]: time="2024-12-13T01:16:56.721375507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721514 containerd[1442]: time="2024-12-13T01:16:56.721389067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721514 containerd[1442]: time="2024-12-13T01:16:56.721401147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721514 containerd[1442]: time="2024-12-13T01:16:56.721416667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721514 containerd[1442]: time="2024-12-13T01:16:56.721430267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721514 containerd[1442]: time="2024-12-13T01:16:56.721448947Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:16:56.721514 containerd[1442]: time="2024-12-13T01:16:56.721471587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721514 containerd[1442]: time="2024-12-13T01:16:56.721483827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721514 containerd[1442]: time="2024-12-13T01:16:56.721494547Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:16:56.721650 containerd[1442]: time="2024-12-13T01:16:56.721628747Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:16:56.721669 containerd[1442]: time="2024-12-13T01:16:56.721648627Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:16:56.721669 containerd[1442]: time="2024-12-13T01:16:56.721660307Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:16:56.721847 containerd[1442]: time="2024-12-13T01:16:56.721798747Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:16:56.721847 containerd[1442]: time="2024-12-13T01:16:56.721810747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.721847 containerd[1442]: time="2024-12-13T01:16:56.721826827Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:16:56.721847 containerd[1442]: time="2024-12-13T01:16:56.721836907Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:16:56.721956 containerd[1442]: time="2024-12-13T01:16:56.721850587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:16:56.722253 containerd[1442]: time="2024-12-13T01:16:56.722185707Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:16:56.722253 containerd[1442]: time="2024-12-13T01:16:56.722254947Z" level=info msg="Connect containerd service"
Dec 13 01:16:56.722403 containerd[1442]: time="2024-12-13T01:16:56.722280747Z" level=info msg="using legacy CRI server"
Dec 13 01:16:56.722403 containerd[1442]: time="2024-12-13T01:16:56.722287387Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:16:56.722403 containerd[1442]: time="2024-12-13T01:16:56.722363987Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:16:56.723157 containerd[1442]: time="2024-12-13T01:16:56.723128227Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:16:56.723353 containerd[1442]: time="2024-12-13T01:16:56.723323227Z" level=info msg="Start subscribing containerd event"
Dec 13 01:16:56.723400 containerd[1442]: time="2024-12-13T01:16:56.723374427Z" level=info msg="Start recovering state"
Dec 13 01:16:56.723583 containerd[1442]: time="2024-12-13T01:16:56.723432867Z" level=info msg="Start event monitor"
Dec 13 01:16:56.723583 containerd[1442]: time="2024-12-13T01:16:56.723446907Z" level=info msg="Start snapshots syncer"
Dec 13 01:16:56.723583 containerd[1442]: time="2024-12-13T01:16:56.723457307Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:16:56.723583 containerd[1442]: time="2024-12-13T01:16:56.723465587Z" level=info msg="Start streaming server"
Dec 13 01:16:56.724063 containerd[1442]: time="2024-12-13T01:16:56.724037987Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:16:56.724154 containerd[1442]: time="2024-12-13T01:16:56.724082787Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:16:56.726974 containerd[1442]: time="2024-12-13T01:16:56.725575507Z" level=info msg="containerd successfully booted in 0.159516s"
Dec 13 01:16:56.724214 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:16:56.738472 tar[1435]: linux-arm64/LICENSE
Dec 13 01:16:56.738656 tar[1435]: linux-arm64/README.md
Dec 13 01:16:56.752617 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:16:57.392149 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:16:57.412052 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:16:57.426173 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:16:57.432731 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:16:57.432909 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:16:57.435418 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:16:57.446387 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:16:57.449047 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:16:57.451075 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 01:16:57.452377 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:16:57.872111 systemd-networkd[1379]: eth0: Gained IPv6LL
Dec 13 01:16:57.874168 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:16:57.876397 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:16:57.891219 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 01:16:57.893794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:16:57.895919 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:16:57.910622 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 01:16:57.910846 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 01:16:57.913037 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:16:57.918437 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:16:58.397645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:16:58.399165 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:16:58.401558 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:16:58.402345 systemd[1]: Startup finished in 557ms (kernel) + 5.283s (initrd) + 3.872s (userspace) = 9.713s.
Dec 13 01:16:58.870217 kubelet[1522]: E1213 01:16:58.870079 1522 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:16:58.872485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:16:58.872655 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:17:02.171564 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:17:02.172674 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:59946.service - OpenSSH per-connection server daemon (10.0.0.1:59946).
Dec 13 01:17:02.301563 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 59946 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:02.303627 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:02.317380 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:17:02.327164 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:17:02.328996 systemd-logind[1420]: New session 1 of user core.
Dec 13 01:17:02.337989 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:17:02.340480 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:17:02.347174 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:17:02.443634 systemd[1541]: Queued start job for default target default.target.
Dec 13 01:17:02.454964 systemd[1541]: Created slice app.slice - User Application Slice.
Dec 13 01:17:02.455009 systemd[1541]: Reached target paths.target - Paths.
Dec 13 01:17:02.455021 systemd[1541]: Reached target timers.target - Timers.
Dec 13 01:17:02.456246 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:17:02.465592 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:17:02.465649 systemd[1541]: Reached target sockets.target - Sockets.
Dec 13 01:17:02.465661 systemd[1541]: Reached target basic.target - Basic System.
Dec 13 01:17:02.465694 systemd[1541]: Reached target default.target - Main User Target.
Dec 13 01:17:02.465720 systemd[1541]: Startup finished in 113ms.
Dec 13 01:17:02.465953 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:17:02.467361 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:17:02.525338 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:38484.service - OpenSSH per-connection server daemon (10.0.0.1:38484).
Dec 13 01:17:02.560513 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 38484 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:02.561703 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:02.565856 systemd-logind[1420]: New session 2 of user core.
Dec 13 01:17:02.573074 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:17:02.625013 sshd[1552]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:02.644730 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:38484.service: Deactivated successfully.
Dec 13 01:17:02.646098 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:17:02.649081 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:17:02.649407 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:38492.service - OpenSSH per-connection server daemon (10.0.0.1:38492).
Dec 13 01:17:02.650159 systemd-logind[1420]: Removed session 2.
Dec 13 01:17:02.685204 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 38492 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:02.686395 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:02.689938 systemd-logind[1420]: New session 3 of user core.
Dec 13 01:17:02.699080 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:17:02.746354 sshd[1559]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:02.759061 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:38492.service: Deactivated successfully.
Dec 13 01:17:02.760314 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:17:02.761543 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:17:02.762638 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:38494.service - OpenSSH per-connection server daemon (10.0.0.1:38494).
Dec 13 01:17:02.763447 systemd-logind[1420]: Removed session 3.
Dec 13 01:17:02.797923 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 38494 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:02.799421 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:02.807022 systemd-logind[1420]: New session 4 of user core.
Dec 13 01:17:02.820107 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:17:02.871441 sshd[1566]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:02.884240 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:38494.service: Deactivated successfully.
Dec 13 01:17:02.885593 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:17:02.888041 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:17:02.889110 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:38510.service - OpenSSH per-connection server daemon (10.0.0.1:38510).
Dec 13 01:17:02.889820 systemd-logind[1420]: Removed session 4.
Dec 13 01:17:02.924835 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 38510 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:02.926056 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:02.930191 systemd-logind[1420]: New session 5 of user core.
Dec 13 01:17:02.950109 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:17:03.024696 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:17:03.025004 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:17:03.038775 sudo[1576]: pam_unix(sudo:session): session closed for user root
Dec 13 01:17:03.041026 sshd[1573]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:03.047555 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:38510.service: Deactivated successfully.
Dec 13 01:17:03.049230 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:17:03.052213 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:17:03.053725 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:38516.service - OpenSSH per-connection server daemon (10.0.0.1:38516).
Dec 13 01:17:03.054472 systemd-logind[1420]: Removed session 5.
Dec 13 01:17:03.090302 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 38516 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:03.091630 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:03.095974 systemd-logind[1420]: New session 6 of user core.
Dec 13 01:17:03.113122 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:17:03.166681 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:17:03.167000 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:17:03.170467 sudo[1585]: pam_unix(sudo:session): session closed for user root
Dec 13 01:17:03.175142 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:17:03.175396 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:17:03.191354 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:17:03.192515 auditctl[1588]: No rules
Dec 13 01:17:03.192820 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:17:03.193014 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:17:03.197440 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:17:03.233082 augenrules[1606]: No rules
Dec 13 01:17:03.235015 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:17:03.236350 sudo[1584]: pam_unix(sudo:session): session closed for user root
Dec 13 01:17:03.238149 sshd[1581]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:03.250373 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:38516.service: Deactivated successfully.
Dec 13 01:17:03.251718 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:17:03.252298 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:17:03.254001 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:38520.service - OpenSSH per-connection server daemon (10.0.0.1:38520).
Dec 13 01:17:03.255626 systemd-logind[1420]: Removed session 6.
Dec 13 01:17:03.289785 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 38520 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:17:03.291056 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:03.295201 systemd-logind[1420]: New session 7 of user core.
Dec 13 01:17:03.305105 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:17:03.356902 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:17:03.357194 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:17:03.671197 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:17:03.671273 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:17:03.960990 dockerd[1635]: time="2024-12-13T01:17:03.960858387Z" level=info msg="Starting up"
Dec 13 01:17:04.106082 dockerd[1635]: time="2024-12-13T01:17:04.106030267Z" level=info msg="Loading containers: start."
Dec 13 01:17:04.211956 kernel: Initializing XFRM netlink socket
Dec 13 01:17:04.276613 systemd-networkd[1379]: docker0: Link UP
Dec 13 01:17:04.298129 dockerd[1635]: time="2024-12-13T01:17:04.298066507Z" level=info msg="Loading containers: done."
Dec 13 01:17:04.313346 dockerd[1635]: time="2024-12-13T01:17:04.313225827Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:17:04.313346 dockerd[1635]: time="2024-12-13T01:17:04.313332307Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:17:04.313487 dockerd[1635]: time="2024-12-13T01:17:04.313432147Z" level=info msg="Daemon has completed initialization"
Dec 13 01:17:04.339816 dockerd[1635]: time="2024-12-13T01:17:04.339645787Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:17:04.339855 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:17:05.047651 containerd[1442]: time="2024-12-13T01:17:05.047605867Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 13 01:17:05.088863 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3222985260-merged.mount: Deactivated successfully.
Dec 13 01:17:05.618391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540848693.mount: Deactivated successfully.
Dec 13 01:17:07.618269 containerd[1442]: time="2024-12-13T01:17:07.618216827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:07.619753 containerd[1442]: time="2024-12-13T01:17:07.619678907Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864012"
Dec 13 01:17:07.620985 containerd[1442]: time="2024-12-13T01:17:07.620774347Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:07.623243 containerd[1442]: time="2024-12-13T01:17:07.623207507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:07.624414 containerd[1442]: time="2024-12-13T01:17:07.624380227Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 2.5767328s"
Dec 13 01:17:07.624450 containerd[1442]: time="2024-12-13T01:17:07.624418507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\""
Dec 13 01:17:07.644146 containerd[1442]: time="2024-12-13T01:17:07.644110267Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 01:17:08.905162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:17:08.914183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:17:09.000591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:17:09.006921 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:17:09.048097 kubelet[1860]: E1213 01:17:09.048042 1860 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:17:09.055100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:17:09.055353 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:17:10.063388 containerd[1442]: time="2024-12-13T01:17:10.063333187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:10.064330 containerd[1442]: time="2024-12-13T01:17:10.064298307Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900696"
Dec 13 01:17:10.065006 containerd[1442]: time="2024-12-13T01:17:10.064953587Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:10.067910 containerd[1442]: time="2024-12-13T01:17:10.067874787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:10.069980 containerd[1442]: time="2024-12-13T01:17:10.069949387Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.42566804s"
Dec 13 01:17:10.070036 containerd[1442]: time="2024-12-13T01:17:10.069986347Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\""
Dec 13 01:17:10.088149 containerd[1442]: time="2024-12-13T01:17:10.088117427Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 01:17:11.414618 containerd[1442]: time="2024-12-13T01:17:11.414576667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:11.415592 containerd[1442]: time="2024-12-13T01:17:11.415064387Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164334"
Dec 13 01:17:11.416179 containerd[1442]: time="2024-12-13T01:17:11.416145147Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:11.419911 containerd[1442]: time="2024-12-13T01:17:11.419858947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:11.420920 containerd[1442]: time="2024-12-13T01:17:11.420885467Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.33273044s"
Dec 13 01:17:11.420920 containerd[1442]: time="2024-12-13T01:17:11.420919907Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\""
Dec 13 01:17:11.440455 containerd[1442]: time="2024-12-13T01:17:11.440364827Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 01:17:12.421908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246516274.mount: Deactivated successfully.
Dec 13 01:17:12.612791 containerd[1442]: time="2024-12-13T01:17:12.612731667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:12.613205 containerd[1442]: time="2024-12-13T01:17:12.613165507Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662013"
Dec 13 01:17:12.614036 containerd[1442]: time="2024-12-13T01:17:12.613998987Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:12.616006 containerd[1442]: time="2024-12-13T01:17:12.615974507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:12.616645 containerd[1442]: time="2024-12-13T01:17:12.616604867Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.17620144s"
Dec 13 01:17:12.616696 containerd[1442]: time="2024-12-13T01:17:12.616644907Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\""
Dec 13 01:17:12.635072 containerd[1442]: time="2024-12-13T01:17:12.635037987Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:17:13.130876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81414189.mount: Deactivated successfully.
Dec 13 01:17:13.728682 containerd[1442]: time="2024-12-13T01:17:13.728624067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:13.729132 containerd[1442]: time="2024-12-13T01:17:13.729074907Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Dec 13 01:17:13.730013 containerd[1442]: time="2024-12-13T01:17:13.729978267Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:13.733016 containerd[1442]: time="2024-12-13T01:17:13.732981747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:13.734192 containerd[1442]: time="2024-12-13T01:17:13.734141227Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.09906568s"
Dec 13 01:17:13.734192 containerd[1442]: time="2024-12-13T01:17:13.734184267Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Dec 13 01:17:13.751471 containerd[1442]: time="2024-12-13T01:17:13.751445627Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:17:14.177485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656952139.mount: Deactivated successfully.
Dec 13 01:17:14.182283 containerd[1442]: time="2024-12-13T01:17:14.182239267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:14.182813 containerd[1442]: time="2024-12-13T01:17:14.182783827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Dec 13 01:17:14.183640 containerd[1442]: time="2024-12-13T01:17:14.183607427Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:14.185611 containerd[1442]: time="2024-12-13T01:17:14.185582067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:14.186362 containerd[1442]: time="2024-12-13T01:17:14.186332507Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 434.85496ms"
Dec 13 01:17:14.186411 containerd[1442]: time="2024-12-13T01:17:14.186364187Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Dec 13 01:17:14.203869 containerd[1442]: time="2024-12-13T01:17:14.203793307Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 01:17:14.672757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1233900581.mount: Deactivated successfully.
Dec 13 01:17:18.180788 containerd[1442]: time="2024-12-13T01:17:18.180613627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:18.181698 containerd[1442]: time="2024-12-13T01:17:18.181398507Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Dec 13 01:17:18.182419 containerd[1442]: time="2024-12-13T01:17:18.182384147Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:18.185421 containerd[1442]: time="2024-12-13T01:17:18.185388907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:18.186753 containerd[1442]: time="2024-12-13T01:17:18.186609227Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.98278444s"
Dec 13 01:17:18.186753 containerd[1442]: time="2024-12-13T01:17:18.186645507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Dec 13 01:17:19.155108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:17:19.169135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:17:19.263158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:17:19.265256 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:17:19.305294 kubelet[2092]: E1213 01:17:19.305198 2092 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:17:19.307897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:17:19.308063 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:17:24.761591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:17:24.772174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:17:24.790150 systemd[1]: Reloading requested from client PID 2107 ('systemctl') (unit session-7.scope)...
Dec 13 01:17:24.790167 systemd[1]: Reloading...
Dec 13 01:17:24.847980 zram_generator::config[2146]: No configuration found.
Dec 13 01:17:24.965608 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:17:25.019677 systemd[1]: Reloading finished in 229 ms.
Dec 13 01:17:25.068862 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:17:25.072163 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:17:25.072368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:17:25.073905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:17:25.168119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:17:25.172443 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:17:25.210886 kubelet[2193]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:17:25.210886 kubelet[2193]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:17:25.210886 kubelet[2193]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:17:25.211248 kubelet[2193]: I1213 01:17:25.211172 2193 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:17:25.833499 kubelet[2193]: I1213 01:17:25.833450 2193 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 01:17:25.833499 kubelet[2193]: I1213 01:17:25.833485 2193 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:17:25.833686 kubelet[2193]: I1213 01:17:25.833671 2193 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 01:17:25.880805 kubelet[2193]: I1213 01:17:25.880773 2193 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:17:25.880988 kubelet[2193]: E1213 01:17:25.880791 2193 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.10:6443: connect: connection refused
Dec 13 01:17:25.890449 kubelet[2193]: I1213 01:17:25.890417 2193 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:17:25.890713 kubelet[2193]: I1213 01:17:25.890675 2193 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:17:25.890876 kubelet[2193]: I1213 01:17:25.890703 2193 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:17:25.890982 kubelet[2193]: I1213 01:17:25.890944 2193 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:17:25.890982 kubelet[2193]: I1213 01:17:25.890954 2193 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:17:25.891164 kubelet[2193]: I1213 01:17:25.891138 2193 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:17:25.892078 kubelet[2193]: I1213 01:17:25.892049 2193 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 01:17:25.892078 kubelet[2193]: I1213 01:17:25.892073 2193 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:17:25.892288 kubelet[2193]: I1213 01:17:25.892271 2193 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:17:25.892850 kubelet[2193]: I1213 01:17:25.892467 2193 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:17:25.893103 kubelet[2193]: W1213 01:17:25.892916 2193 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Dec 13 01:17:25.893103 kubelet[2193]: E1213 01:17:25.892993 2193 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Dec 13 01:17:25.893103 kubelet[2193]: W1213 01:17:25.893036 2193 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Dec 13 01:17:25.893103 kubelet[2193]: E1213 01:17:25.893078 2193 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Dec 13 01:17:25.895577 kubelet[2193]: I1213 01:17:25.895553 2193 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:17:25.896955 kubelet[2193]: I1213 01:17:25.896000 2193 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:17:25.896955 kubelet[2193]: W1213 01:17:25.896110 2193 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:17:25.896955 kubelet[2193]: I1213 01:17:25.896916 2193 server.go:1264] "Started kubelet"
Dec 13 01:17:25.897479 kubelet[2193]: I1213 01:17:25.897423 2193 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:17:25.898271 kubelet[2193]: I1213 01:17:25.898214 2193 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:17:25.899154 kubelet[2193]: I1213 01:17:25.898576 2193 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:17:25.901736 kubelet[2193]: E1213 01:17:25.901514 2193 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181097b1ac8292d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:17:25.896884947 +0000 UTC m=+0.721395001,LastTimestamp:2024-12-13 01:17:25.896884947 +0000 UTC m=+0.721395001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:17:25.904312 kubelet[2193]: I1213 01:17:25.902571 2193 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:17:25.904312 kubelet[2193]: I1213 01:17:25.902945 2193 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 01:17:25.904312 kubelet[2193]: E1213 01:17:25.903869 2193 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:17:25.904312 kubelet[2193]: I1213 01:17:25.904036 2193 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:17:25.904312 kubelet[2193]: I1213 01:17:25.904154 2193 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 01:17:25.906522 kubelet[2193]: I1213 01:17:25.905419 2193 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:17:25.906522 kubelet[2193]: W1213 01:17:25.905813 2193 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Dec 13 01:17:25.906522 kubelet[2193]: E1213 01:17:25.905865 2193 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Dec 13 01:17:25.906786 kubelet[2193]: E1213 01:17:25.906748 2193 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms"
Dec 13 01:17:25.907604 kubelet[2193]: E1213 01:17:25.907580 2193 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:17:25.907729 kubelet[2193]: I1213 01:17:25.907704 2193 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:17:25.907810 kubelet[2193]: I1213 01:17:25.907789 2193 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:17:25.908614 kubelet[2193]: I1213 01:17:25.908594 2193 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:17:25.916530 kubelet[2193]: I1213 01:17:25.916488 2193 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:17:25.917680 kubelet[2193]: I1213 01:17:25.917652 2193 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:17:25.917819 kubelet[2193]: I1213 01:17:25.917803 2193 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:17:25.917890 kubelet[2193]: I1213 01:17:25.917878 2193 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 01:17:25.917960 kubelet[2193]: E1213 01:17:25.917921 2193 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:17:25.921378 kubelet[2193]: W1213 01:17:25.921333 2193 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Dec 13 01:17:25.921766 kubelet[2193]: E1213 01:17:25.921489 2193 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Dec 13 01:17:25.922786 kubelet[2193]: I1213 01:17:25.922768 2193 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:17:25.923114 kubelet[2193]: I1213 01:17:25.922959 2193 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:17:25.923114 kubelet[2193]: I1213 01:17:25.922984 2193 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:17:25.986453 kubelet[2193]: I1213 01:17:25.986389 2193 policy_none.go:49] "None policy: Start"
Dec 13 01:17:25.987134 kubelet[2193]: I1213 01:17:25.987115 2193 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:17:25.987178 kubelet[2193]: I1213 01:17:25.987144 2193 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:17:25.994427 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:17:26.005328 kubelet[2193]: I1213 01:17:26.005283 2193 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:17:26.005665 kubelet[2193]: E1213 01:17:26.005568 2193 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost"
Dec 13 01:17:26.008924 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:17:26.011788 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:17:26.018658 kubelet[2193]: E1213 01:17:26.018623 2193 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:17:26.022200 kubelet[2193]: I1213 01:17:26.021686 2193 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:17:26.022200 kubelet[2193]: I1213 01:17:26.021881 2193 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:17:26.022200 kubelet[2193]: I1213 01:17:26.022025 2193 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:17:26.023313 kubelet[2193]: E1213 01:17:26.023289 2193 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:17:26.107718 kubelet[2193]: E1213 01:17:26.107602 2193 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Dec 13 01:17:26.206836 kubelet[2193]: I1213 01:17:26.206796 2193 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:26.207129 kubelet[2193]: E1213 01:17:26.207106 2193 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Dec 13 01:17:26.219302 kubelet[2193]: I1213 01:17:26.219267 2193 topology_manager.go:215] "Topology Admit Handler" podUID="6eea47c58f579c41628a5e61eb4baee8" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:17:26.220253 kubelet[2193]: I1213 01:17:26.220231 2193 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" 
podName="kube-controller-manager-localhost" Dec 13 01:17:26.221055 kubelet[2193]: I1213 01:17:26.221014 2193 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:17:26.226753 systemd[1]: Created slice kubepods-burstable-pod6eea47c58f579c41628a5e61eb4baee8.slice - libcontainer container kubepods-burstable-pod6eea47c58f579c41628a5e61eb4baee8.slice. Dec 13 01:17:26.249362 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Dec 13 01:17:26.253327 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Dec 13 01:17:26.306977 kubelet[2193]: I1213 01:17:26.306942 2193 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:17:26.306977 kubelet[2193]: I1213 01:17:26.306978 2193 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6eea47c58f579c41628a5e61eb4baee8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6eea47c58f579c41628a5e61eb4baee8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:26.307080 kubelet[2193]: I1213 01:17:26.306999 2193 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:26.307080 kubelet[2193]: I1213 01:17:26.307014 2193 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:26.307134 kubelet[2193]: I1213 01:17:26.307065 2193 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:26.307134 kubelet[2193]: I1213 01:17:26.307107 2193 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6eea47c58f579c41628a5e61eb4baee8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6eea47c58f579c41628a5e61eb4baee8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:26.307134 kubelet[2193]: I1213 01:17:26.307129 2193 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6eea47c58f579c41628a5e61eb4baee8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6eea47c58f579c41628a5e61eb4baee8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:26.307200 kubelet[2193]: I1213 01:17:26.307159 2193 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:26.307200 kubelet[2193]: I1213 01:17:26.307185 2193 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:26.508311 kubelet[2193]: E1213 01:17:26.508260 2193 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Dec 13 01:17:26.546855 kubelet[2193]: E1213 01:17:26.546764 2193 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:26.549080 containerd[1442]: time="2024-12-13T01:17:26.549042547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6eea47c58f579c41628a5e61eb4baee8,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:26.552622 kubelet[2193]: E1213 01:17:26.552596 2193 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:26.553246 containerd[1442]: time="2024-12-13T01:17:26.552994387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:26.555609 kubelet[2193]: E1213 01:17:26.555303 2193 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Dec 13 01:17:26.555901 containerd[1442]: time="2024-12-13T01:17:26.555742227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:26.608372 kubelet[2193]: I1213 01:17:26.608332 2193 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:26.608701 kubelet[2193]: E1213 01:17:26.608657 2193 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Dec 13 01:17:26.877682 kubelet[2193]: W1213 01:17:26.877532 2193 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 01:17:26.877682 kubelet[2193]: E1213 01:17:26.877597 2193 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 01:17:27.254513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884061499.mount: Deactivated successfully. 
Dec 13 01:17:27.263332 containerd[1442]: time="2024-12-13T01:17:27.262990947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:27.264584 containerd[1442]: time="2024-12-13T01:17:27.264475107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:17:27.268693 containerd[1442]: time="2024-12-13T01:17:27.268591267Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:27.269691 containerd[1442]: time="2024-12-13T01:17:27.269656827Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:27.270037 containerd[1442]: time="2024-12-13T01:17:27.269946027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Dec 13 01:17:27.270895 containerd[1442]: time="2024-12-13T01:17:27.270850307Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:27.271496 containerd[1442]: time="2024-12-13T01:17:27.271452547Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:17:27.277634 containerd[1442]: time="2024-12-13T01:17:27.277586707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:27.278308 
containerd[1442]: time="2024-12-13T01:17:27.278172507Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 725.07456ms" Dec 13 01:17:27.278982 containerd[1442]: time="2024-12-13T01:17:27.278947947Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 723.10784ms" Dec 13 01:17:27.279778 containerd[1442]: time="2024-12-13T01:17:27.279731747Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 730.60936ms" Dec 13 01:17:27.311039 kubelet[2193]: E1213 01:17:27.310993 2193 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="1.6s" Dec 13 01:17:27.350554 kubelet[2193]: W1213 01:17:27.349417 2193 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 01:17:27.350554 kubelet[2193]: E1213 01:17:27.349494 2193 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: 
failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 01:17:27.351219 kubelet[2193]: W1213 01:17:27.351191 2193 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 01:17:27.351219 kubelet[2193]: E1213 01:17:27.351225 2193 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 01:17:27.411451 kubelet[2193]: I1213 01:17:27.411356 2193 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:27.411754 kubelet[2193]: E1213 01:17:27.411685 2193 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Dec 13 01:17:27.440366 containerd[1442]: time="2024-12-13T01:17:27.440288187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:27.440366 containerd[1442]: time="2024-12-13T01:17:27.440339467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:27.440366 containerd[1442]: time="2024-12-13T01:17:27.440217107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:27.440366 containerd[1442]: time="2024-12-13T01:17:27.440273947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:27.440366 containerd[1442]: time="2024-12-13T01:17:27.440297867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:27.440682 containerd[1442]: time="2024-12-13T01:17:27.440381507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:27.440682 containerd[1442]: time="2024-12-13T01:17:27.440492667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:27.441714 containerd[1442]: time="2024-12-13T01:17:27.441639987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:27.445789 containerd[1442]: time="2024-12-13T01:17:27.445417267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:27.445789 containerd[1442]: time="2024-12-13T01:17:27.445478027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:27.445789 containerd[1442]: time="2024-12-13T01:17:27.445488987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:27.445789 containerd[1442]: time="2024-12-13T01:17:27.445563987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:27.462556 kubelet[2193]: W1213 01:17:27.462477 2193 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 01:17:27.462556 kubelet[2193]: E1213 01:17:27.462544 2193 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 01:17:27.467163 systemd[1]: Started cri-containerd-195ec5ed541349053c4d89bbca2eb0b0d561d68833962357a2ce47e01840199e.scope - libcontainer container 195ec5ed541349053c4d89bbca2eb0b0d561d68833962357a2ce47e01840199e. Dec 13 01:17:27.468670 systemd[1]: Started cri-containerd-62c72294a69ea3ef9f1e2bc1e49f3067227c380009036eadae135e36fbefc1d0.scope - libcontainer container 62c72294a69ea3ef9f1e2bc1e49f3067227c380009036eadae135e36fbefc1d0. Dec 13 01:17:27.470108 systemd[1]: Started cri-containerd-f41dcff4ab385b93706bb27aa0ed41be2f195ae24e4e3376f07ded7203bcb6fa.scope - libcontainer container f41dcff4ab385b93706bb27aa0ed41be2f195ae24e4e3376f07ded7203bcb6fa. 
Dec 13 01:17:27.497869 containerd[1442]: time="2024-12-13T01:17:27.496803427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"195ec5ed541349053c4d89bbca2eb0b0d561d68833962357a2ce47e01840199e\"" Dec 13 01:17:27.498052 kubelet[2193]: E1213 01:17:27.497960 2193 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:27.501116 containerd[1442]: time="2024-12-13T01:17:27.501079987Z" level=info msg="CreateContainer within sandbox \"195ec5ed541349053c4d89bbca2eb0b0d561d68833962357a2ce47e01840199e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:17:27.508528 containerd[1442]: time="2024-12-13T01:17:27.508313667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6eea47c58f579c41628a5e61eb4baee8,Namespace:kube-system,Attempt:0,} returns sandbox id \"62c72294a69ea3ef9f1e2bc1e49f3067227c380009036eadae135e36fbefc1d0\"" Dec 13 01:17:27.509297 kubelet[2193]: E1213 01:17:27.509269 2193 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:27.510051 containerd[1442]: time="2024-12-13T01:17:27.510014747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f41dcff4ab385b93706bb27aa0ed41be2f195ae24e4e3376f07ded7203bcb6fa\"" Dec 13 01:17:27.511204 kubelet[2193]: E1213 01:17:27.511174 2193 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:27.512192 containerd[1442]: 
time="2024-12-13T01:17:27.512157547Z" level=info msg="CreateContainer within sandbox \"62c72294a69ea3ef9f1e2bc1e49f3067227c380009036eadae135e36fbefc1d0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:17:27.513625 containerd[1442]: time="2024-12-13T01:17:27.513487467Z" level=info msg="CreateContainer within sandbox \"f41dcff4ab385b93706bb27aa0ed41be2f195ae24e4e3376f07ded7203bcb6fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:17:27.532107 containerd[1442]: time="2024-12-13T01:17:27.532053907Z" level=info msg="CreateContainer within sandbox \"195ec5ed541349053c4d89bbca2eb0b0d561d68833962357a2ce47e01840199e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"16112069d39504233a10515bca6ea23adc91d84e340751671e0f28c360e2404a\"" Dec 13 01:17:27.532944 containerd[1442]: time="2024-12-13T01:17:27.532876827Z" level=info msg="StartContainer for \"16112069d39504233a10515bca6ea23adc91d84e340751671e0f28c360e2404a\"" Dec 13 01:17:27.536419 containerd[1442]: time="2024-12-13T01:17:27.536338307Z" level=info msg="CreateContainer within sandbox \"62c72294a69ea3ef9f1e2bc1e49f3067227c380009036eadae135e36fbefc1d0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"47070cc56a46bbd81f7e7ccf4b0e44e2dd36a61e3770d5fe063a054fae6410e5\"" Dec 13 01:17:27.536861 containerd[1442]: time="2024-12-13T01:17:27.536835147Z" level=info msg="StartContainer for \"47070cc56a46bbd81f7e7ccf4b0e44e2dd36a61e3770d5fe063a054fae6410e5\"" Dec 13 01:17:27.537339 containerd[1442]: time="2024-12-13T01:17:27.537235667Z" level=info msg="CreateContainer within sandbox \"f41dcff4ab385b93706bb27aa0ed41be2f195ae24e4e3376f07ded7203bcb6fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"053694dd929b4719808c9bda5b4842cf4926450bb610d60e865bd2e380ac537d\"" Dec 13 01:17:27.537714 containerd[1442]: time="2024-12-13T01:17:27.537687867Z" level=info msg="StartContainer 
for \"053694dd929b4719808c9bda5b4842cf4926450bb610d60e865bd2e380ac537d\"" Dec 13 01:17:27.561108 systemd[1]: Started cri-containerd-16112069d39504233a10515bca6ea23adc91d84e340751671e0f28c360e2404a.scope - libcontainer container 16112069d39504233a10515bca6ea23adc91d84e340751671e0f28c360e2404a. Dec 13 01:17:27.564786 systemd[1]: Started cri-containerd-053694dd929b4719808c9bda5b4842cf4926450bb610d60e865bd2e380ac537d.scope - libcontainer container 053694dd929b4719808c9bda5b4842cf4926450bb610d60e865bd2e380ac537d. Dec 13 01:17:27.566378 systemd[1]: Started cri-containerd-47070cc56a46bbd81f7e7ccf4b0e44e2dd36a61e3770d5fe063a054fae6410e5.scope - libcontainer container 47070cc56a46bbd81f7e7ccf4b0e44e2dd36a61e3770d5fe063a054fae6410e5. Dec 13 01:17:27.610473 containerd[1442]: time="2024-12-13T01:17:27.610386387Z" level=info msg="StartContainer for \"16112069d39504233a10515bca6ea23adc91d84e340751671e0f28c360e2404a\" returns successfully" Dec 13 01:17:27.616795 containerd[1442]: time="2024-12-13T01:17:27.616561507Z" level=info msg="StartContainer for \"47070cc56a46bbd81f7e7ccf4b0e44e2dd36a61e3770d5fe063a054fae6410e5\" returns successfully" Dec 13 01:17:27.616795 containerd[1442]: time="2024-12-13T01:17:27.616565787Z" level=info msg="StartContainer for \"053694dd929b4719808c9bda5b4842cf4926450bb610d60e865bd2e380ac537d\" returns successfully" Dec 13 01:17:27.928147 kubelet[2193]: E1213 01:17:27.928118 2193 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:27.930236 kubelet[2193]: E1213 01:17:27.930214 2193 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:27.932522 kubelet[2193]: E1213 01:17:27.932499 2193 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:28.937228 kubelet[2193]: E1213 01:17:28.937192 2193 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:29.013692 kubelet[2193]: I1213 01:17:29.013658 2193 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:29.035430 kubelet[2193]: E1213 01:17:29.035400 2193 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:17:29.158204 kubelet[2193]: I1213 01:17:29.158167 2193 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:17:29.173291 kubelet[2193]: E1213 01:17:29.173257 2193 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:29.273896 kubelet[2193]: E1213 01:17:29.273585 2193 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:29.376012 kubelet[2193]: E1213 01:17:29.375974 2193 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:29.476193 kubelet[2193]: E1213 01:17:29.476094 2193 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:29.577326 kubelet[2193]: E1213 01:17:29.577216 2193 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:29.678154 kubelet[2193]: E1213 01:17:29.678099 2193 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:29.894711 kubelet[2193]: I1213 01:17:29.894613 2193 apiserver.go:52] "Watching apiserver" Dec 13 01:17:29.905070 kubelet[2193]: I1213 01:17:29.905014 2193 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:17:31.127904 systemd[1]: Reloading requested from client PID 2467 ('systemctl') (unit session-7.scope)... Dec 13 01:17:31.127920 systemd[1]: Reloading... Dec 13 01:17:31.189971 zram_generator::config[2506]: No configuration found. Dec 13 01:17:31.272560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:17:31.336688 systemd[1]: Reloading finished in 208 ms. Dec 13 01:17:31.370492 kubelet[2193]: I1213 01:17:31.370458 2193 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:31.370632 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:31.381406 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:17:31.382362 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:31.382414 systemd[1]: kubelet.service: Consumed 1.080s CPU time, 112.0M memory peak, 0B memory swap peak. Dec 13 01:17:31.390320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:31.479679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:31.483696 (kubelet)[2548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:17:31.520715 kubelet[2548]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:31.520715 kubelet[2548]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Dec 13 01:17:31.520715 kubelet[2548]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:31.521064 kubelet[2548]: I1213 01:17:31.520721 2548 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:17:31.524854 kubelet[2548]: I1213 01:17:31.524827 2548 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:17:31.526170 kubelet[2548]: I1213 01:17:31.524986 2548 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:17:31.526170 kubelet[2548]: I1213 01:17:31.525149 2548 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:17:31.526563 kubelet[2548]: I1213 01:17:31.526542 2548 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:17:31.527963 kubelet[2548]: I1213 01:17:31.527829 2548 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:31.536642 kubelet[2548]: I1213 01:17:31.536595 2548 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:17:31.536795 kubelet[2548]: I1213 01:17:31.536767 2548 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:17:31.536970 kubelet[2548]: I1213 01:17:31.536792 2548 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:17:31.536970 kubelet[2548]: I1213 01:17:31.536973 2548 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 
01:17:31.537080 kubelet[2548]: I1213 01:17:31.536982 2548 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:17:31.537080 kubelet[2548]: I1213 01:17:31.537014 2548 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:31.537512 kubelet[2548]: I1213 01:17:31.537118 2548 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:17:31.537512 kubelet[2548]: I1213 01:17:31.537131 2548 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:17:31.537512 kubelet[2548]: I1213 01:17:31.537158 2548 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:17:31.537512 kubelet[2548]: I1213 01:17:31.537173 2548 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:17:31.540086 kubelet[2548]: I1213 01:17:31.537878 2548 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:17:31.540086 kubelet[2548]: I1213 01:17:31.538043 2548 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:17:31.540086 kubelet[2548]: I1213 01:17:31.538392 2548 server.go:1264] "Started kubelet" Dec 13 01:17:31.540086 kubelet[2548]: I1213 01:17:31.539059 2548 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:17:31.540086 kubelet[2548]: I1213 01:17:31.539273 2548 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:17:31.540086 kubelet[2548]: I1213 01:17:31.539303 2548 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:17:31.540086 kubelet[2548]: I1213 01:17:31.539814 2548 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:17:31.540217 kubelet[2548]: I1213 01:17:31.540116 2548 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:17:31.541396 kubelet[2548]: I1213 01:17:31.541367 2548 
volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:17:31.541461 kubelet[2548]: I1213 01:17:31.541443 2548 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:17:31.541667 kubelet[2548]: I1213 01:17:31.541567 2548 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:17:31.543074 kubelet[2548]: I1213 01:17:31.543047 2548 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:17:31.543212 kubelet[2548]: E1213 01:17:31.543186 2548 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:17:31.543212 kubelet[2548]: I1213 01:17:31.543192 2548 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:17:31.545244 kubelet[2548]: I1213 01:17:31.544749 2548 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:17:31.558941 kubelet[2548]: I1213 01:17:31.557677 2548 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:17:31.559048 kubelet[2548]: I1213 01:17:31.558967 2548 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:17:31.559048 kubelet[2548]: I1213 01:17:31.558996 2548 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:17:31.559148 kubelet[2548]: I1213 01:17:31.559053 2548 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:17:31.559148 kubelet[2548]: E1213 01:17:31.559096 2548 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:17:31.586938 kubelet[2548]: I1213 01:17:31.586905 2548 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:17:31.586938 kubelet[2548]: I1213 01:17:31.586924 2548 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:17:31.587074 kubelet[2548]: I1213 01:17:31.586967 2548 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:31.587130 kubelet[2548]: I1213 01:17:31.587111 2548 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:17:31.587163 kubelet[2548]: I1213 01:17:31.587129 2548 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:17:31.587163 kubelet[2548]: I1213 01:17:31.587147 2548 policy_none.go:49] "None policy: Start" Dec 13 01:17:31.587730 kubelet[2548]: I1213 01:17:31.587690 2548 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:17:31.587730 kubelet[2548]: I1213 01:17:31.587712 2548 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:17:31.587886 kubelet[2548]: I1213 01:17:31.587865 2548 state_mem.go:75] "Updated machine memory state" Dec 13 01:17:31.591249 kubelet[2548]: I1213 01:17:31.591226 2548 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:17:31.591411 kubelet[2548]: I1213 01:17:31.591377 2548 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:17:31.591509 kubelet[2548]: I1213 01:17:31.591498 2548 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:17:31.645816 kubelet[2548]: I1213 01:17:31.644834 2548 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:31.651980 kubelet[2548]: I1213 01:17:31.651950 2548 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:17:31.652042 kubelet[2548]: I1213 01:17:31.652023 2548 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:17:31.659349 kubelet[2548]: I1213 01:17:31.659233 2548 topology_manager.go:215] "Topology Admit Handler" podUID="6eea47c58f579c41628a5e61eb4baee8" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:17:31.659349 kubelet[2548]: I1213 01:17:31.659335 2548 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:17:31.659911 kubelet[2548]: I1213 01:17:31.659369 2548 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:17:31.842847 kubelet[2548]: I1213 01:17:31.842806 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6eea47c58f579c41628a5e61eb4baee8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6eea47c58f579c41628a5e61eb4baee8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:31.843086 kubelet[2548]: I1213 01:17:31.843062 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:31.843186 kubelet[2548]: I1213 01:17:31.843173 2548 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:31.843270 kubelet[2548]: I1213 01:17:31.843258 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:31.843355 kubelet[2548]: I1213 01:17:31.843343 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:17:31.843423 kubelet[2548]: I1213 01:17:31.843412 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6eea47c58f579c41628a5e61eb4baee8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6eea47c58f579c41628a5e61eb4baee8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:31.843561 kubelet[2548]: I1213 01:17:31.843501 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6eea47c58f579c41628a5e61eb4baee8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6eea47c58f579c41628a5e61eb4baee8\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:31.843561 kubelet[2548]: I1213 01:17:31.843523 
2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:31.843561 kubelet[2548]: I1213 01:17:31.843540 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:31.968342 kubelet[2548]: E1213 01:17:31.967856 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:31.968342 kubelet[2548]: E1213 01:17:31.968199 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:31.968514 kubelet[2548]: E1213 01:17:31.968485 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:32.127538 sudo[2584]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:17:32.127825 sudo[2584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:17:32.537878 kubelet[2548]: I1213 01:17:32.537839 2548 apiserver.go:52] "Watching apiserver" Dec 13 01:17:32.542305 kubelet[2548]: I1213 01:17:32.542261 2548 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:17:32.552176 sudo[2584]: 
pam_unix(sudo:session): session closed for user root Dec 13 01:17:32.577412 kubelet[2548]: E1213 01:17:32.577306 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:32.579090 kubelet[2548]: E1213 01:17:32.578619 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:32.595944 kubelet[2548]: E1213 01:17:32.595897 2548 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:32.596392 kubelet[2548]: E1213 01:17:32.596369 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:32.617708 kubelet[2548]: I1213 01:17:32.617641 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.617624994 podStartE2EDuration="1.617624994s" podCreationTimestamp="2024-12-13 01:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:32.606596781 +0000 UTC m=+1.119057718" watchObservedRunningTime="2024-12-13 01:17:32.617624994 +0000 UTC m=+1.130085931" Dec 13 01:17:32.626543 kubelet[2548]: I1213 01:17:32.626491 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.626476364 podStartE2EDuration="1.626476364s" podCreationTimestamp="2024-12-13 01:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:32.626473884 +0000 UTC 
m=+1.138934821" watchObservedRunningTime="2024-12-13 01:17:32.626476364 +0000 UTC m=+1.138937301" Dec 13 01:17:32.626698 kubelet[2548]: I1213 01:17:32.626584 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.626579284 podStartE2EDuration="1.626579284s" podCreationTimestamp="2024-12-13 01:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:32.617819674 +0000 UTC m=+1.130280571" watchObservedRunningTime="2024-12-13 01:17:32.626579284 +0000 UTC m=+1.139040221" Dec 13 01:17:33.580457 kubelet[2548]: E1213 01:17:33.580419 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:34.830421 sudo[1617]: pam_unix(sudo:session): session closed for user root Dec 13 01:17:34.831988 sshd[1614]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:34.835097 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:38520.service: Deactivated successfully. Dec 13 01:17:34.837010 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:17:34.837198 systemd[1]: session-7.scope: Consumed 9.427s CPU time, 191.2M memory peak, 0B memory swap peak. Dec 13 01:17:34.838317 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:17:34.839330 systemd-logind[1420]: Removed session 7. 
Dec 13 01:17:34.945683 kubelet[2548]: E1213 01:17:34.945640 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:38.223393 kubelet[2548]: E1213 01:17:38.223306 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:38.587095 kubelet[2548]: E1213 01:17:38.586833 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:39.179490 kubelet[2548]: E1213 01:17:39.179429 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:39.589207 kubelet[2548]: E1213 01:17:39.589079 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:41.689493 update_engine[1426]: I20241213 01:17:41.689399 1426 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:17:41.799994 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2631) Dec 13 01:17:41.839962 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2635) Dec 13 01:17:41.870944 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2635) Dec 13 01:17:44.953774 kubelet[2548]: E1213 01:17:44.953743 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:47.412918 kubelet[2548]: I1213 01:17:47.412877 2548 topology_manager.go:215] "Topology Admit Handler" podUID="982dd64f-0918-4e73-be73-f4a854ebf649" podNamespace="kube-system" podName="kube-proxy-8pdr4" Dec 13 01:17:47.424604 kubelet[2548]: I1213 01:17:47.424333 2548 topology_manager.go:215] "Topology Admit Handler" podUID="70d51665-1707-4644-9c11-f52421fd6553" podNamespace="kube-system" podName="cilium-sdc5s" Dec 13 01:17:47.425452 systemd[1]: Created slice kubepods-besteffort-pod982dd64f_0918_4e73_be73_f4a854ebf649.slice - libcontainer container kubepods-besteffort-pod982dd64f_0918_4e73_be73_f4a854ebf649.slice. Dec 13 01:17:47.433965 kubelet[2548]: I1213 01:17:47.433915 2548 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:17:47.441562 containerd[1442]: time="2024-12-13T01:17:47.441516771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:17:47.442969 kubelet[2548]: I1213 01:17:47.442650 2548 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:17:47.451577 systemd[1]: Created slice kubepods-burstable-pod70d51665_1707_4644_9c11_f52421fd6553.slice - libcontainer container kubepods-burstable-pod70d51665_1707_4644_9c11_f52421fd6553.slice. 
Dec 13 01:17:47.540004 kubelet[2548]: I1213 01:17:47.539955 2548 topology_manager.go:215] "Topology Admit Handler" podUID="9f1ac749-793e-4cb4-8adc-abab15ea4dfd" podNamespace="kube-system" podName="cilium-operator-599987898-mh5gc" Dec 13 01:17:47.548887 systemd[1]: Created slice kubepods-besteffort-pod9f1ac749_793e_4cb4_8adc_abab15ea4dfd.slice - libcontainer container kubepods-besteffort-pod9f1ac749_793e_4cb4_8adc_abab15ea4dfd.slice. Dec 13 01:17:47.559913 kubelet[2548]: I1213 01:17:47.559850 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-bpf-maps\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.559913 kubelet[2548]: I1213 01:17:47.559887 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-lib-modules\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.559913 kubelet[2548]: I1213 01:17:47.559905 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/982dd64f-0918-4e73-be73-f4a854ebf649-kube-proxy\") pod \"kube-proxy-8pdr4\" (UID: \"982dd64f-0918-4e73-be73-f4a854ebf649\") " pod="kube-system/kube-proxy-8pdr4" Dec 13 01:17:47.559913 kubelet[2548]: I1213 01:17:47.559920 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-xtables-lock\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560164 kubelet[2548]: I1213 01:17:47.559952 2548 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-host-proc-sys-kernel\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560164 kubelet[2548]: I1213 01:17:47.559969 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70d51665-1707-4644-9c11-f52421fd6553-hubble-tls\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560164 kubelet[2548]: I1213 01:17:47.559987 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/982dd64f-0918-4e73-be73-f4a854ebf649-xtables-lock\") pod \"kube-proxy-8pdr4\" (UID: \"982dd64f-0918-4e73-be73-f4a854ebf649\") " pod="kube-system/kube-proxy-8pdr4" Dec 13 01:17:47.560164 kubelet[2548]: I1213 01:17:47.560006 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/982dd64f-0918-4e73-be73-f4a854ebf649-lib-modules\") pod \"kube-proxy-8pdr4\" (UID: \"982dd64f-0918-4e73-be73-f4a854ebf649\") " pod="kube-system/kube-proxy-8pdr4" Dec 13 01:17:47.560164 kubelet[2548]: I1213 01:17:47.560052 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cilium-run\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560164 kubelet[2548]: I1213 01:17:47.560083 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cilium-cgroup\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560286 kubelet[2548]: I1213 01:17:47.560105 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cni-path\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560286 kubelet[2548]: I1213 01:17:47.560132 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-host-proc-sys-net\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560286 kubelet[2548]: I1213 01:17:47.560160 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70d51665-1707-4644-9c11-f52421fd6553-cilium-config-path\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560286 kubelet[2548]: I1213 01:17:47.560195 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-etc-cni-netd\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560286 kubelet[2548]: I1213 01:17:47.560223 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70d51665-1707-4644-9c11-f52421fd6553-clustermesh-secrets\") pod \"cilium-sdc5s\" (UID: 
\"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560385 kubelet[2548]: I1213 01:17:47.560251 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbrr2\" (UniqueName: \"kubernetes.io/projected/70d51665-1707-4644-9c11-f52421fd6553-kube-api-access-hbrr2\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560385 kubelet[2548]: I1213 01:17:47.560281 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-hostproc\") pod \"cilium-sdc5s\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " pod="kube-system/cilium-sdc5s" Dec 13 01:17:47.560385 kubelet[2548]: I1213 01:17:47.560302 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv2cg\" (UniqueName: \"kubernetes.io/projected/982dd64f-0918-4e73-be73-f4a854ebf649-kube-api-access-gv2cg\") pod \"kube-proxy-8pdr4\" (UID: \"982dd64f-0918-4e73-be73-f4a854ebf649\") " pod="kube-system/kube-proxy-8pdr4" Dec 13 01:17:47.661319 kubelet[2548]: I1213 01:17:47.661221 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f1ac749-793e-4cb4-8adc-abab15ea4dfd-cilium-config-path\") pod \"cilium-operator-599987898-mh5gc\" (UID: \"9f1ac749-793e-4cb4-8adc-abab15ea4dfd\") " pod="kube-system/cilium-operator-599987898-mh5gc" Dec 13 01:17:47.661445 kubelet[2548]: I1213 01:17:47.661363 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrmb4\" (UniqueName: \"kubernetes.io/projected/9f1ac749-793e-4cb4-8adc-abab15ea4dfd-kube-api-access-vrmb4\") pod \"cilium-operator-599987898-mh5gc\" (UID: 
\"9f1ac749-793e-4cb4-8adc-abab15ea4dfd\") " pod="kube-system/cilium-operator-599987898-mh5gc" Dec 13 01:17:47.742294 kubelet[2548]: E1213 01:17:47.742174 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:47.748578 containerd[1442]: time="2024-12-13T01:17:47.748529904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pdr4,Uid:982dd64f-0918-4e73-be73-f4a854ebf649,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:47.757181 kubelet[2548]: E1213 01:17:47.755917 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:47.757719 containerd[1442]: time="2024-12-13T01:17:47.757680388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sdc5s,Uid:70d51665-1707-4644-9c11-f52421fd6553,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:47.774388 containerd[1442]: time="2024-12-13T01:17:47.773861675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:47.774388 containerd[1442]: time="2024-12-13T01:17:47.773941635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:47.774388 containerd[1442]: time="2024-12-13T01:17:47.773957475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:47.774388 containerd[1442]: time="2024-12-13T01:17:47.774043435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:47.782526 containerd[1442]: time="2024-12-13T01:17:47.782290679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:47.782805 containerd[1442]: time="2024-12-13T01:17:47.782747839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:47.782879 containerd[1442]: time="2024-12-13T01:17:47.782802159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:47.783239 containerd[1442]: time="2024-12-13T01:17:47.783162199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:47.801175 systemd[1]: Started cri-containerd-f237b858d746f1f2edfe622070a01d321c4d53af98e22827e5617f325dd917cb.scope - libcontainer container f237b858d746f1f2edfe622070a01d321c4d53af98e22827e5617f325dd917cb. Dec 13 01:17:47.803653 systemd[1]: Started cri-containerd-4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d.scope - libcontainer container 4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d. 
Dec 13 01:17:47.826323 containerd[1442]: time="2024-12-13T01:17:47.826257698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sdc5s,Uid:70d51665-1707-4644-9c11-f52421fd6553,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\""
Dec 13 01:17:47.827288 containerd[1442]: time="2024-12-13T01:17:47.827251218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pdr4,Uid:982dd64f-0918-4e73-be73-f4a854ebf649,Namespace:kube-system,Attempt:0,} returns sandbox id \"f237b858d746f1f2edfe622070a01d321c4d53af98e22827e5617f325dd917cb\""
Dec 13 01:17:47.828805 kubelet[2548]: E1213 01:17:47.828781 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:47.829505 kubelet[2548]: E1213 01:17:47.829485 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:47.831600 containerd[1442]: time="2024-12-13T01:17:47.831556140Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 01:17:47.833684 containerd[1442]: time="2024-12-13T01:17:47.833654661Z" level=info msg="CreateContainer within sandbox \"f237b858d746f1f2edfe622070a01d321c4d53af98e22827e5617f325dd917cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:17:47.853068 kubelet[2548]: E1213 01:17:47.853037 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:47.853598 containerd[1442]: time="2024-12-13T01:17:47.853537950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mh5gc,Uid:9f1ac749-793e-4cb4-8adc-abab15ea4dfd,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:47.858148 containerd[1442]: time="2024-12-13T01:17:47.858109192Z" level=info msg="CreateContainer within sandbox \"f237b858d746f1f2edfe622070a01d321c4d53af98e22827e5617f325dd917cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0291d0acf7131124c53dce70763345fd67c487665c21657abf043284e16ba256\""
Dec 13 01:17:47.860604 containerd[1442]: time="2024-12-13T01:17:47.860568153Z" level=info msg="StartContainer for \"0291d0acf7131124c53dce70763345fd67c487665c21657abf043284e16ba256\""
Dec 13 01:17:47.878037 containerd[1442]: time="2024-12-13T01:17:47.877706440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:47.878037 containerd[1442]: time="2024-12-13T01:17:47.877782080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:47.878037 containerd[1442]: time="2024-12-13T01:17:47.877800880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:47.878193 containerd[1442]: time="2024-12-13T01:17:47.877893200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:47.887085 systemd[1]: Started cri-containerd-0291d0acf7131124c53dce70763345fd67c487665c21657abf043284e16ba256.scope - libcontainer container 0291d0acf7131124c53dce70763345fd67c487665c21657abf043284e16ba256.
Dec 13 01:17:47.892777 systemd[1]: Started cri-containerd-d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2.scope - libcontainer container d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2.
Dec 13 01:17:47.916453 containerd[1442]: time="2024-12-13T01:17:47.916404737Z" level=info msg="StartContainer for \"0291d0acf7131124c53dce70763345fd67c487665c21657abf043284e16ba256\" returns successfully"
Dec 13 01:17:47.926250 containerd[1442]: time="2024-12-13T01:17:47.926128901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mh5gc,Uid:9f1ac749-793e-4cb4-8adc-abab15ea4dfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2\""
Dec 13 01:17:47.926871 kubelet[2548]: E1213 01:17:47.926707 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:48.613691 kubelet[2548]: E1213 01:17:48.613657 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:48.626503 kubelet[2548]: I1213 01:17:48.626452 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8pdr4" podStartSLOduration=1.626436908 podStartE2EDuration="1.626436908s" podCreationTimestamp="2024-12-13 01:17:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:48.626104388 +0000 UTC m=+17.138565325" watchObservedRunningTime="2024-12-13 01:17:48.626436908 +0000 UTC m=+17.138897805"
Dec 13 01:17:53.197429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674107012.mount: Deactivated successfully.
Dec 13 01:17:54.410832 containerd[1442]: time="2024-12-13T01:17:54.410769495Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:54.411311 containerd[1442]: time="2024-12-13T01:17:54.411258295Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651550"
Dec 13 01:17:54.412012 containerd[1442]: time="2024-12-13T01:17:54.411982696Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:54.414040 containerd[1442]: time="2024-12-13T01:17:54.414007736Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.582405516s"
Dec 13 01:17:54.414077 containerd[1442]: time="2024-12-13T01:17:54.414043736Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Dec 13 01:17:54.416610 containerd[1442]: time="2024-12-13T01:17:54.416573177Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 01:17:54.418584 containerd[1442]: time="2024-12-13T01:17:54.418531897Z" level=info msg="CreateContainer within sandbox \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:17:54.437189 containerd[1442]: time="2024-12-13T01:17:54.437137423Z" level=info msg="CreateContainer within sandbox \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee\""
Dec 13 01:17:54.437989 containerd[1442]: time="2024-12-13T01:17:54.437956983Z" level=info msg="StartContainer for \"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee\""
Dec 13 01:17:54.455255 systemd[1]: run-containerd-runc-k8s.io-a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee-runc.q7cNzH.mount: Deactivated successfully.
Dec 13 01:17:54.469119 systemd[1]: Started cri-containerd-a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee.scope - libcontainer container a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee.
Dec 13 01:17:54.500528 containerd[1442]: time="2024-12-13T01:17:54.497405839Z" level=info msg="StartContainer for \"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee\" returns successfully"
Dec 13 01:17:54.559870 systemd[1]: cri-containerd-a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee.scope: Deactivated successfully.
Dec 13 01:17:54.628545 kubelet[2548]: E1213 01:17:54.628512 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:54.685656 containerd[1442]: time="2024-12-13T01:17:54.680833810Z" level=info msg="shim disconnected" id=a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee namespace=k8s.io
Dec 13 01:17:54.685656 containerd[1442]: time="2024-12-13T01:17:54.685592811Z" level=warning msg="cleaning up after shim disconnected" id=a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee namespace=k8s.io
Dec 13 01:17:54.685656 containerd[1442]: time="2024-12-13T01:17:54.685610211Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:55.434422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee-rootfs.mount: Deactivated successfully.
Dec 13 01:17:55.634416 kubelet[2548]: E1213 01:17:55.634349 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:55.637837 containerd[1442]: time="2024-12-13T01:17:55.637533863Z" level=info msg="CreateContainer within sandbox \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:17:55.648750 containerd[1442]: time="2024-12-13T01:17:55.648651186Z" level=info msg="CreateContainer within sandbox \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5\""
Dec 13 01:17:55.649613 containerd[1442]: time="2024-12-13T01:17:55.649381066Z" level=info msg="StartContainer for \"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5\""
Dec 13 01:17:55.678097 systemd[1]: Started cri-containerd-f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5.scope - libcontainer container f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5.
Dec 13 01:17:55.698115 containerd[1442]: time="2024-12-13T01:17:55.697923599Z" level=info msg="StartContainer for \"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5\" returns successfully"
Dec 13 01:17:55.719706 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:17:55.719917 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:17:55.720002 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:17:55.725229 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:17:55.725416 systemd[1]: cri-containerd-f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5.scope: Deactivated successfully.
Dec 13 01:17:55.743123 containerd[1442]: time="2024-12-13T01:17:55.743068010Z" level=info msg="shim disconnected" id=f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5 namespace=k8s.io
Dec 13 01:17:55.743123 containerd[1442]: time="2024-12-13T01:17:55.743121250Z" level=warning msg="cleaning up after shim disconnected" id=f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5 namespace=k8s.io
Dec 13 01:17:55.743123 containerd[1442]: time="2024-12-13T01:17:55.743132210Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:55.754953 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:17:56.434219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5-rootfs.mount: Deactivated successfully.
Dec 13 01:17:56.637261 kubelet[2548]: E1213 01:17:56.637221 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:56.641954 containerd[1442]: time="2024-12-13T01:17:56.641868953Z" level=info msg="CreateContainer within sandbox \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:17:56.657659 containerd[1442]: time="2024-12-13T01:17:56.657606156Z" level=info msg="CreateContainer within sandbox \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475\""
Dec 13 01:17:56.658009 containerd[1442]: time="2024-12-13T01:17:56.657976556Z" level=info msg="StartContainer for \"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475\""
Dec 13 01:17:56.685113 systemd[1]: Started cri-containerd-13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475.scope - libcontainer container 13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475.
Dec 13 01:17:56.708271 containerd[1442]: time="2024-12-13T01:17:56.708218929Z" level=info msg="StartContainer for \"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475\" returns successfully"
Dec 13 01:17:56.723726 systemd[1]: cri-containerd-13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475.scope: Deactivated successfully.
Dec 13 01:17:56.745403 containerd[1442]: time="2024-12-13T01:17:56.745308298Z" level=info msg="shim disconnected" id=13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475 namespace=k8s.io
Dec 13 01:17:56.745403 containerd[1442]: time="2024-12-13T01:17:56.745384258Z" level=warning msg="cleaning up after shim disconnected" id=13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475 namespace=k8s.io
Dec 13 01:17:56.745403 containerd[1442]: time="2024-12-13T01:17:56.745401978Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:57.434262 systemd[1]: run-containerd-runc-k8s.io-13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475-runc.9kU90N.mount: Deactivated successfully.
Dec 13 01:17:57.434349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475-rootfs.mount: Deactivated successfully.
Dec 13 01:17:57.640457 kubelet[2548]: E1213 01:17:57.640410 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:57.643068 containerd[1442]: time="2024-12-13T01:17:57.643033506Z" level=info msg="CreateContainer within sandbox \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:17:57.688971 containerd[1442]: time="2024-12-13T01:17:57.688846996Z" level=info msg="CreateContainer within sandbox \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e\""
Dec 13 01:17:57.689481 containerd[1442]: time="2024-12-13T01:17:57.689427716Z" level=info msg="StartContainer for \"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e\""
Dec 13 01:17:57.723237 systemd[1]: Started cri-containerd-c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e.scope - libcontainer container c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e.
Dec 13 01:17:57.743672 systemd[1]: cri-containerd-c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e.scope: Deactivated successfully.
Dec 13 01:17:57.765352 containerd[1442]: time="2024-12-13T01:17:57.765291254Z" level=info msg="StartContainer for \"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e\" returns successfully"
Dec 13 01:17:57.769696 containerd[1442]: time="2024-12-13T01:17:57.769634774Z" level=info msg="shim disconnected" id=c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e namespace=k8s.io
Dec 13 01:17:57.769696 containerd[1442]: time="2024-12-13T01:17:57.769685055Z" level=warning msg="cleaning up after shim disconnected" id=c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e namespace=k8s.io
Dec 13 01:17:57.769696 containerd[1442]: time="2024-12-13T01:17:57.769693695Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:57.992153 containerd[1442]: time="2024-12-13T01:17:57.992032145Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:57.998086 containerd[1442]: time="2024-12-13T01:17:57.998042226Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138282"
Dec 13 01:17:57.999004 containerd[1442]: time="2024-12-13T01:17:57.998969507Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:58.000403 containerd[1442]: time="2024-12-13T01:17:58.000272987Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.58365865s"
Dec 13 01:17:58.000403 containerd[1442]: time="2024-12-13T01:17:58.000311067Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Dec 13 01:17:58.002524 containerd[1442]: time="2024-12-13T01:17:58.002479107Z" level=info msg="CreateContainer within sandbox \"d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 01:17:58.011151 containerd[1442]: time="2024-12-13T01:17:58.011111589Z" level=info msg="CreateContainer within sandbox \"d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\""
Dec 13 01:17:58.012488 containerd[1442]: time="2024-12-13T01:17:58.011571189Z" level=info msg="StartContainer for \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\""
Dec 13 01:17:58.039162 systemd[1]: Started cri-containerd-8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037.scope - libcontainer container 8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037.
Dec 13 01:17:58.058692 containerd[1442]: time="2024-12-13T01:17:58.058650879Z" level=info msg="StartContainer for \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\" returns successfully"
Dec 13 01:17:58.435261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e-rootfs.mount: Deactivated successfully.
Dec 13 01:17:58.648283 kubelet[2548]: E1213 01:17:58.648253 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:58.651898 kubelet[2548]: E1213 01:17:58.651468 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:58.652010 containerd[1442]: time="2024-12-13T01:17:58.651830846Z" level=info msg="CreateContainer within sandbox \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:17:58.709804 containerd[1442]: time="2024-12-13T01:17:58.708160338Z" level=info msg="CreateContainer within sandbox \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\""
Dec 13 01:17:58.712049 containerd[1442]: time="2024-12-13T01:17:58.711825579Z" level=info msg="StartContainer for \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\""
Dec 13 01:17:58.748129 systemd[1]: Started cri-containerd-a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab.scope - libcontainer container a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab.
Dec 13 01:17:58.776143 containerd[1442]: time="2024-12-13T01:17:58.776100072Z" level=info msg="StartContainer for \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\" returns successfully"
Dec 13 01:17:58.917252 kubelet[2548]: I1213 01:17:58.915751 2548 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:17:58.939078 kubelet[2548]: I1213 01:17:58.939011 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-mh5gc" podStartSLOduration=1.871329944 podStartE2EDuration="11.938992347s" podCreationTimestamp="2024-12-13 01:17:47 +0000 UTC" firstStartedPulling="2024-12-13 01:17:47.933322944 +0000 UTC m=+16.445783881" lastFinishedPulling="2024-12-13 01:17:58.000985347 +0000 UTC m=+26.513446284" observedRunningTime="2024-12-13 01:17:58.707924778 +0000 UTC m=+27.220385675" watchObservedRunningTime="2024-12-13 01:17:58.938992347 +0000 UTC m=+27.451453284"
Dec 13 01:17:58.939262 kubelet[2548]: I1213 01:17:58.939173 2548 topology_manager.go:215] "Topology Admit Handler" podUID="03976329-7fcf-47fe-8c2d-daacb501fe5a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5kttw"
Dec 13 01:17:58.941908 kubelet[2548]: I1213 01:17:58.941703 2548 topology_manager.go:215] "Topology Admit Handler" podUID="a016b089-8bf9-48fe-b5ad-1e88b1f955f6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kvxrl"
Dec 13 01:17:58.952622 systemd[1]: Created slice kubepods-burstable-pod03976329_7fcf_47fe_8c2d_daacb501fe5a.slice - libcontainer container kubepods-burstable-pod03976329_7fcf_47fe_8c2d_daacb501fe5a.slice.
Dec 13 01:17:58.962841 systemd[1]: Created slice kubepods-burstable-poda016b089_8bf9_48fe_b5ad_1e88b1f955f6.slice - libcontainer container kubepods-burstable-poda016b089_8bf9_48fe_b5ad_1e88b1f955f6.slice.
Dec 13 01:17:59.131540 kubelet[2548]: I1213 01:17:59.131493 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a016b089-8bf9-48fe-b5ad-1e88b1f955f6-config-volume\") pod \"coredns-7db6d8ff4d-kvxrl\" (UID: \"a016b089-8bf9-48fe-b5ad-1e88b1f955f6\") " pod="kube-system/coredns-7db6d8ff4d-kvxrl"
Dec 13 01:17:59.131540 kubelet[2548]: I1213 01:17:59.131537 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvrfw\" (UniqueName: \"kubernetes.io/projected/03976329-7fcf-47fe-8c2d-daacb501fe5a-kube-api-access-qvrfw\") pod \"coredns-7db6d8ff4d-5kttw\" (UID: \"03976329-7fcf-47fe-8c2d-daacb501fe5a\") " pod="kube-system/coredns-7db6d8ff4d-5kttw"
Dec 13 01:17:59.134185 kubelet[2548]: I1213 01:17:59.131559 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmpvx\" (UniqueName: \"kubernetes.io/projected/a016b089-8bf9-48fe-b5ad-1e88b1f955f6-kube-api-access-bmpvx\") pod \"coredns-7db6d8ff4d-kvxrl\" (UID: \"a016b089-8bf9-48fe-b5ad-1e88b1f955f6\") " pod="kube-system/coredns-7db6d8ff4d-kvxrl"
Dec 13 01:17:59.134246 kubelet[2548]: I1213 01:17:59.134216 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03976329-7fcf-47fe-8c2d-daacb501fe5a-config-volume\") pod \"coredns-7db6d8ff4d-5kttw\" (UID: \"03976329-7fcf-47fe-8c2d-daacb501fe5a\") " pod="kube-system/coredns-7db6d8ff4d-5kttw"
Dec 13 01:17:59.258304 kubelet[2548]: E1213 01:17:59.257635 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:59.259237 containerd[1442]: time="2024-12-13T01:17:59.258615452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5kttw,Uid:03976329-7fcf-47fe-8c2d-daacb501fe5a,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:59.266708 kubelet[2548]: E1213 01:17:59.266322 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:59.267575 containerd[1442]: time="2024-12-13T01:17:59.267487094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kvxrl,Uid:a016b089-8bf9-48fe-b5ad-1e88b1f955f6,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:59.656497 kubelet[2548]: E1213 01:17:59.656179 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:59.662556 kubelet[2548]: E1213 01:17:59.662528 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:59.670260 kubelet[2548]: I1213 01:17:59.670165 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sdc5s" podStartSLOduration=6.084457377 podStartE2EDuration="12.670153214s" podCreationTimestamp="2024-12-13 01:17:47 +0000 UTC" firstStartedPulling="2024-12-13 01:17:47.83069394 +0000 UTC m=+16.343154837" lastFinishedPulling="2024-12-13 01:17:54.416389737 +0000 UTC m=+22.928850674" observedRunningTime="2024-12-13 01:17:59.669043454 +0000 UTC m=+28.181504431" watchObservedRunningTime="2024-12-13 01:17:59.670153214 +0000 UTC m=+28.182614151"
Dec 13 01:18:00.658358 kubelet[2548]: E1213 01:18:00.658159 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:01.659608 kubelet[2548]: E1213 01:18:01.659572 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:01.865433 systemd-networkd[1379]: cilium_host: Link UP
Dec 13 01:18:01.865713 systemd-networkd[1379]: cilium_net: Link UP
Dec 13 01:18:01.865956 systemd-networkd[1379]: cilium_net: Gained carrier
Dec 13 01:18:01.866224 systemd-networkd[1379]: cilium_host: Gained carrier
Dec 13 01:18:01.942290 systemd-networkd[1379]: cilium_vxlan: Link UP
Dec 13 01:18:01.942296 systemd-networkd[1379]: cilium_vxlan: Gained carrier
Dec 13 01:18:02.128200 systemd-networkd[1379]: cilium_net: Gained IPv6LL
Dec 13 01:18:02.248010 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:18:02.512549 systemd-networkd[1379]: cilium_host: Gained IPv6LL
Dec 13 01:18:02.814224 systemd-networkd[1379]: lxc_health: Link UP
Dec 13 01:18:02.824197 systemd-networkd[1379]: lxc_health: Gained carrier
Dec 13 01:18:03.164231 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:58234.service - OpenSSH per-connection server daemon (10.0.0.1:58234).
Dec 13 01:18:03.213603 sshd[3748]: Accepted publickey for core from 10.0.0.1 port 58234 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:03.214845 sshd[3748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:03.220224 systemd-logind[1420]: New session 8 of user core.
Dec 13 01:18:03.230357 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:18:03.370169 sshd[3748]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:03.372573 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:58234.service: Deactivated successfully.
Dec 13 01:18:03.375039 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:18:03.377271 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:18:03.379189 systemd-logind[1420]: Removed session 8.
Dec 13 01:18:03.409589 systemd-networkd[1379]: lxc0af908353e28: Link UP
Dec 13 01:18:03.421226 kernel: eth0: renamed from tmpd37b4
Dec 13 01:18:03.438844 systemd-networkd[1379]: lxc0af908353e28: Gained carrier
Dec 13 01:18:03.440028 kernel: eth0: renamed from tmp1ac37
Dec 13 01:18:03.448278 systemd-networkd[1379]: lxc356a80c7a00a: Link UP
Dec 13 01:18:03.450975 systemd-networkd[1379]: lxc356a80c7a00a: Gained carrier
Dec 13 01:18:03.776864 kubelet[2548]: E1213 01:18:03.776620 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:03.921104 systemd-networkd[1379]: cilium_vxlan: Gained IPv6LL
Dec 13 01:18:04.048186 systemd-networkd[1379]: lxc_health: Gained IPv6LL
Dec 13 01:18:04.496171 systemd-networkd[1379]: lxc356a80c7a00a: Gained IPv6LL
Dec 13 01:18:04.665060 kubelet[2548]: E1213 01:18:04.665029 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:05.328167 systemd-networkd[1379]: lxc0af908353e28: Gained IPv6LL
Dec 13 01:18:06.970441 containerd[1442]: time="2024-12-13T01:18:06.970329686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:18:06.970441 containerd[1442]: time="2024-12-13T01:18:06.970400606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:18:06.970441 containerd[1442]: time="2024-12-13T01:18:06.970416606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:06.971228 containerd[1442]: time="2024-12-13T01:18:06.970627966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:06.971228 containerd[1442]: time="2024-12-13T01:18:06.969769886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:18:06.971228 containerd[1442]: time="2024-12-13T01:18:06.970988366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:18:06.971228 containerd[1442]: time="2024-12-13T01:18:06.971001766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:06.971228 containerd[1442]: time="2024-12-13T01:18:06.971075566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:06.995093 systemd[1]: Started cri-containerd-1ac37ff509403bfac76ab00cc9efe04965633e6be5de5d50b9e8aae5801e68c4.scope - libcontainer container 1ac37ff509403bfac76ab00cc9efe04965633e6be5de5d50b9e8aae5801e68c4.
Dec 13 01:18:06.996147 systemd[1]: Started cri-containerd-d37b478010fd630e851ae1f684b92662f7a9cd87dc2939859e5c1710f89c96eb.scope - libcontainer container d37b478010fd630e851ae1f684b92662f7a9cd87dc2939859e5c1710f89c96eb.
Dec 13 01:18:07.007208 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:18:07.010684 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:18:07.026337 containerd[1442]: time="2024-12-13T01:18:07.026277933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kvxrl,Uid:a016b089-8bf9-48fe-b5ad-1e88b1f955f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ac37ff509403bfac76ab00cc9efe04965633e6be5de5d50b9e8aae5801e68c4\""
Dec 13 01:18:07.030884 kubelet[2548]: E1213 01:18:07.030646 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:07.033605 containerd[1442]: time="2024-12-13T01:18:07.033105734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5kttw,Uid:03976329-7fcf-47fe-8c2d-daacb501fe5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d37b478010fd630e851ae1f684b92662f7a9cd87dc2939859e5c1710f89c96eb\""
Dec 13 01:18:07.035146 kubelet[2548]: E1213 01:18:07.035121 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:07.036519 containerd[1442]: time="2024-12-13T01:18:07.036465414Z" level=info msg="CreateContainer within sandbox \"1ac37ff509403bfac76ab00cc9efe04965633e6be5de5d50b9e8aae5801e68c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:18:07.039283 containerd[1442]: time="2024-12-13T01:18:07.039246695Z" level=info msg="CreateContainer within sandbox \"d37b478010fd630e851ae1f684b92662f7a9cd87dc2939859e5c1710f89c96eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:18:07.050171 containerd[1442]: time="2024-12-13T01:18:07.050124056Z" level=info msg="CreateContainer within sandbox \"1ac37ff509403bfac76ab00cc9efe04965633e6be5de5d50b9e8aae5801e68c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc525533ef0a542954e5011536e6c36c8c8573f045660fcd4a7e4157dec1819e\""
Dec 13 01:18:07.050672 containerd[1442]: time="2024-12-13T01:18:07.050627696Z" level=info msg="StartContainer for \"fc525533ef0a542954e5011536e6c36c8c8573f045660fcd4a7e4157dec1819e\""
Dec 13 01:18:07.058932 containerd[1442]: time="2024-12-13T01:18:07.058886857Z" level=info msg="CreateContainer within sandbox \"d37b478010fd630e851ae1f684b92662f7a9cd87dc2939859e5c1710f89c96eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56fd748392c0d42959a6e0ba7c3016a88b45e7e0092bb3dc698d19ce90565cfb\""
Dec 13 01:18:07.060338 containerd[1442]: time="2024-12-13T01:18:07.059500777Z" level=info msg="StartContainer for \"56fd748392c0d42959a6e0ba7c3016a88b45e7e0092bb3dc698d19ce90565cfb\""
Dec 13 01:18:07.082126 systemd[1]: Started cri-containerd-fc525533ef0a542954e5011536e6c36c8c8573f045660fcd4a7e4157dec1819e.scope - libcontainer container fc525533ef0a542954e5011536e6c36c8c8573f045660fcd4a7e4157dec1819e.
Dec 13 01:18:07.084981 systemd[1]: Started cri-containerd-56fd748392c0d42959a6e0ba7c3016a88b45e7e0092bb3dc698d19ce90565cfb.scope - libcontainer container 56fd748392c0d42959a6e0ba7c3016a88b45e7e0092bb3dc698d19ce90565cfb.
Dec 13 01:18:07.108303 containerd[1442]: time="2024-12-13T01:18:07.108263143Z" level=info msg="StartContainer for \"fc525533ef0a542954e5011536e6c36c8c8573f045660fcd4a7e4157dec1819e\" returns successfully"
Dec 13 01:18:07.122112 containerd[1442]: time="2024-12-13T01:18:07.119143704Z" level=info msg="StartContainer for \"56fd748392c0d42959a6e0ba7c3016a88b45e7e0092bb3dc698d19ce90565cfb\" returns successfully"
Dec 13 01:18:07.670940 kubelet[2548]: E1213 01:18:07.670900 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:07.672879 kubelet[2548]: E1213 01:18:07.672799 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:07.683364 kubelet[2548]: I1213 01:18:07.683304 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kvxrl" podStartSLOduration=20.683292772 podStartE2EDuration="20.683292772s" podCreationTimestamp="2024-12-13 01:17:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:07.680461291 +0000 UTC m=+36.192922228" watchObservedRunningTime="2024-12-13 01:18:07.683292772 +0000 UTC m=+36.195753709"
Dec 13 01:18:07.692568 kubelet[2548]: I1213 01:18:07.691807 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5kttw" podStartSLOduration=20.691792533 podStartE2EDuration="20.691792533s" podCreationTimestamp="2024-12-13 01:17:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:07.691432253 +0000 UTC m=+36.203893150" watchObservedRunningTime="2024-12-13 01:18:07.691792533 +0000 UTC
m=+36.204253470" Dec 13 01:18:08.380480 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:58240.service - OpenSSH per-connection server daemon (10.0.0.1:58240). Dec 13 01:18:08.420884 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 58240 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:08.422265 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:08.428175 systemd-logind[1420]: New session 9 of user core. Dec 13 01:18:08.440102 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:18:08.555523 sshd[3963]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:08.559020 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:58240.service: Deactivated successfully. Dec 13 01:18:08.560708 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:18:08.561376 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:18:08.562170 systemd-logind[1420]: Removed session 9. 
Dec 13 01:18:08.673700 kubelet[2548]: E1213 01:18:08.673586 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:08.673700 kubelet[2548]: E1213 01:18:08.673652 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:09.675199 kubelet[2548]: E1213 01:18:09.675139 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:09.675562 kubelet[2548]: E1213 01:18:09.675412 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:13.571551 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:49346.service - OpenSSH per-connection server daemon (10.0.0.1:49346). Dec 13 01:18:13.609491 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 49346 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:13.610874 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:13.614810 systemd-logind[1420]: New session 10 of user core. Dec 13 01:18:13.623089 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:18:13.732642 sshd[3980]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:13.736269 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:49346.service: Deactivated successfully. Dec 13 01:18:13.737904 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:18:13.738519 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:18:13.739443 systemd-logind[1420]: Removed session 10. 
Dec 13 01:18:18.743805 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:49356.service - OpenSSH per-connection server daemon (10.0.0.1:49356). Dec 13 01:18:18.782418 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 49356 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:18.783726 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:18.787787 systemd-logind[1420]: New session 11 of user core. Dec 13 01:18:18.794070 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:18:18.926296 sshd[3998]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:18.936424 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:49356.service: Deactivated successfully. Dec 13 01:18:18.937807 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:18:18.939393 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:18:18.949181 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:49364.service - OpenSSH per-connection server daemon (10.0.0.1:49364). Dec 13 01:18:18.949972 systemd-logind[1420]: Removed session 11. Dec 13 01:18:18.985992 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 49364 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:18.987244 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:18.991768 systemd-logind[1420]: New session 12 of user core. Dec 13 01:18:19.003074 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:18:19.159785 sshd[4013]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:19.170160 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:49368.service - OpenSSH per-connection server daemon (10.0.0.1:49368). Dec 13 01:18:19.174156 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:49364.service: Deactivated successfully. Dec 13 01:18:19.175668 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 13 01:18:19.181788 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:18:19.189215 systemd-logind[1420]: Removed session 12. Dec 13 01:18:19.217173 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 49368 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:19.218590 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:19.222775 systemd-logind[1420]: New session 13 of user core. Dec 13 01:18:19.234077 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:18:19.344540 sshd[4023]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:19.347774 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:49368.service: Deactivated successfully. Dec 13 01:18:19.349440 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:18:19.350185 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:18:19.351225 systemd-logind[1420]: Removed session 13. Dec 13 01:18:24.356123 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:36910.service - OpenSSH per-connection server daemon (10.0.0.1:36910). Dec 13 01:18:24.392161 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 36910 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:24.393366 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:24.397782 systemd-logind[1420]: New session 14 of user core. Dec 13 01:18:24.405100 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:18:24.520443 sshd[4040]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:24.524953 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:18:24.525157 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:36910.service: Deactivated successfully. Dec 13 01:18:24.527401 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 13 01:18:24.529363 systemd-logind[1420]: Removed session 14. Dec 13 01:18:29.530717 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:36924.service - OpenSSH per-connection server daemon (10.0.0.1:36924). Dec 13 01:18:29.568583 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 36924 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:29.569908 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:29.575121 systemd-logind[1420]: New session 15 of user core. Dec 13 01:18:29.589172 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:18:29.698428 sshd[4054]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:29.709415 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:36924.service: Deactivated successfully. Dec 13 01:18:29.710852 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:18:29.711531 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:18:29.713276 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:36932.service - OpenSSH per-connection server daemon (10.0.0.1:36932). Dec 13 01:18:29.714420 systemd-logind[1420]: Removed session 15. Dec 13 01:18:29.749818 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 36932 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:29.751076 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:29.754617 systemd-logind[1420]: New session 16 of user core. Dec 13 01:18:29.767080 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:18:29.964186 sshd[4068]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:29.971544 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:36932.service: Deactivated successfully. Dec 13 01:18:29.972991 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:18:29.974836 systemd-logind[1420]: Session 16 logged out. 
Waiting for processes to exit. Dec 13 01:18:29.976179 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:36946.service - OpenSSH per-connection server daemon (10.0.0.1:36946). Dec 13 01:18:29.977217 systemd-logind[1420]: Removed session 16. Dec 13 01:18:30.027013 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 36946 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:30.028628 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:30.032389 systemd-logind[1420]: New session 17 of user core. Dec 13 01:18:30.042105 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:18:31.292274 sshd[4080]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:31.301309 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:36946.service: Deactivated successfully. Dec 13 01:18:31.303566 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:18:31.309684 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:18:31.317977 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:36952.service - OpenSSH per-connection server daemon (10.0.0.1:36952). Dec 13 01:18:31.320622 systemd-logind[1420]: Removed session 17. Dec 13 01:18:31.351878 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 36952 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:31.353426 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:31.357290 systemd-logind[1420]: New session 18 of user core. Dec 13 01:18:31.372092 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:18:31.583247 sshd[4102]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:31.593438 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:36952.service: Deactivated successfully. Dec 13 01:18:31.594859 systemd[1]: session-18.scope: Deactivated successfully. 
Dec 13 01:18:31.595468 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:18:31.606455 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:36968.service - OpenSSH per-connection server daemon (10.0.0.1:36968). Dec 13 01:18:31.607475 systemd-logind[1420]: Removed session 18. Dec 13 01:18:31.640599 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 36968 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:31.642021 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:31.646119 systemd-logind[1420]: New session 19 of user core. Dec 13 01:18:31.656145 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:18:31.762639 sshd[4117]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:31.766086 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:36968.service: Deactivated successfully. Dec 13 01:18:31.768652 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:18:31.769329 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:18:31.770354 systemd-logind[1420]: Removed session 19. Dec 13 01:18:36.773515 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:52066.service - OpenSSH per-connection server daemon (10.0.0.1:52066). Dec 13 01:18:36.813260 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 52066 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:36.814683 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:36.819072 systemd-logind[1420]: New session 20 of user core. Dec 13 01:18:36.826141 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:18:36.928891 sshd[4134]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:36.932741 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:52066.service: Deactivated successfully. 
Dec 13 01:18:36.934611 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:18:36.935325 systemd-logind[1420]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:18:36.936239 systemd-logind[1420]: Removed session 20. Dec 13 01:18:41.939501 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:52076.service - OpenSSH per-connection server daemon (10.0.0.1:52076). Dec 13 01:18:41.976643 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 52076 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:41.977818 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:41.981751 systemd-logind[1420]: New session 21 of user core. Dec 13 01:18:41.992122 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:18:42.098099 sshd[4148]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:42.101346 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:52076.service: Deactivated successfully. Dec 13 01:18:42.103120 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:18:42.104563 systemd-logind[1420]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:18:42.106063 systemd-logind[1420]: Removed session 21. Dec 13 01:18:47.109644 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:34178.service - OpenSSH per-connection server daemon (10.0.0.1:34178). Dec 13 01:18:47.159470 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 34178 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:47.160826 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:47.165053 systemd-logind[1420]: New session 22 of user core. Dec 13 01:18:47.178134 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 13 01:18:47.315487 sshd[4163]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:47.333086 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:34178.service: Deactivated successfully. Dec 13 01:18:47.334515 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:18:47.335836 systemd-logind[1420]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:18:47.348300 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:34190.service - OpenSSH per-connection server daemon (10.0.0.1:34190). Dec 13 01:18:47.349385 systemd-logind[1420]: Removed session 22. Dec 13 01:18:47.398210 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 34190 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:47.399476 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:47.406350 systemd-logind[1420]: New session 23 of user core. Dec 13 01:18:47.415135 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:18:49.490604 containerd[1442]: time="2024-12-13T01:18:49.490503866Z" level=info msg="StopContainer for \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\" with timeout 30 (s)" Dec 13 01:18:49.492157 containerd[1442]: time="2024-12-13T01:18:49.491734591Z" level=info msg="Stop container \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\" with signal terminated" Dec 13 01:18:49.500322 systemd[1]: cri-containerd-8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037.scope: Deactivated successfully. Dec 13 01:18:49.519675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037-rootfs.mount: Deactivated successfully. 
Dec 13 01:18:49.521542 containerd[1442]: time="2024-12-13T01:18:49.521503794Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:18:49.528323 containerd[1442]: time="2024-12-13T01:18:49.528275182Z" level=info msg="StopContainer for \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\" with timeout 2 (s)" Dec 13 01:18:49.528629 containerd[1442]: time="2024-12-13T01:18:49.528593543Z" level=info msg="Stop container \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\" with signal terminated" Dec 13 01:18:49.534517 systemd-networkd[1379]: lxc_health: Link DOWN Dec 13 01:18:49.534524 systemd-networkd[1379]: lxc_health: Lost carrier Dec 13 01:18:49.536683 containerd[1442]: time="2024-12-13T01:18:49.536613496Z" level=info msg="shim disconnected" id=8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037 namespace=k8s.io Dec 13 01:18:49.536683 containerd[1442]: time="2024-12-13T01:18:49.536674897Z" level=warning msg="cleaning up after shim disconnected" id=8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037 namespace=k8s.io Dec 13 01:18:49.536683 containerd[1442]: time="2024-12-13T01:18:49.536685337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:49.562114 systemd[1]: cri-containerd-a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab.scope: Deactivated successfully. Dec 13 01:18:49.562379 systemd[1]: cri-containerd-a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab.scope: Consumed 6.442s CPU time. 
Dec 13 01:18:49.579347 containerd[1442]: time="2024-12-13T01:18:49.577434065Z" level=info msg="StopContainer for \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\" returns successfully" Dec 13 01:18:49.579237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab-rootfs.mount: Deactivated successfully. Dec 13 01:18:49.587117 containerd[1442]: time="2024-12-13T01:18:49.586910504Z" level=info msg="StopPodSandbox for \"d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2\"" Dec 13 01:18:49.587117 containerd[1442]: time="2024-12-13T01:18:49.587003745Z" level=info msg="Container to stop \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:49.588587 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2-shm.mount: Deactivated successfully. Dec 13 01:18:49.590006 containerd[1442]: time="2024-12-13T01:18:49.589954517Z" level=info msg="shim disconnected" id=a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab namespace=k8s.io Dec 13 01:18:49.590196 containerd[1442]: time="2024-12-13T01:18:49.590112678Z" level=warning msg="cleaning up after shim disconnected" id=a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab namespace=k8s.io Dec 13 01:18:49.590196 containerd[1442]: time="2024-12-13T01:18:49.590144198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:49.593088 systemd[1]: cri-containerd-d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2.scope: Deactivated successfully. 
Dec 13 01:18:49.605432 containerd[1442]: time="2024-12-13T01:18:49.605306621Z" level=info msg="StopContainer for \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\" returns successfully" Dec 13 01:18:49.605807 containerd[1442]: time="2024-12-13T01:18:49.605757462Z" level=info msg="StopPodSandbox for \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\"" Dec 13 01:18:49.605807 containerd[1442]: time="2024-12-13T01:18:49.605798543Z" level=info msg="Container to stop \"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:49.605870 containerd[1442]: time="2024-12-13T01:18:49.605810103Z" level=info msg="Container to stop \"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:49.605870 containerd[1442]: time="2024-12-13T01:18:49.605819703Z" level=info msg="Container to stop \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:49.605870 containerd[1442]: time="2024-12-13T01:18:49.605829663Z" level=info msg="Container to stop \"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:49.605870 containerd[1442]: time="2024-12-13T01:18:49.605839663Z" level=info msg="Container to stop \"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:49.610648 systemd[1]: cri-containerd-4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d.scope: Deactivated successfully. 
Dec 13 01:18:49.629421 containerd[1442]: time="2024-12-13T01:18:49.629363440Z" level=info msg="shim disconnected" id=4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d namespace=k8s.io Dec 13 01:18:49.629421 containerd[1442]: time="2024-12-13T01:18:49.629419400Z" level=warning msg="cleaning up after shim disconnected" id=4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d namespace=k8s.io Dec 13 01:18:49.629421 containerd[1442]: time="2024-12-13T01:18:49.629427960Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:49.633477 containerd[1442]: time="2024-12-13T01:18:49.633387137Z" level=info msg="shim disconnected" id=d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2 namespace=k8s.io Dec 13 01:18:49.633477 containerd[1442]: time="2024-12-13T01:18:49.633431337Z" level=warning msg="cleaning up after shim disconnected" id=d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2 namespace=k8s.io Dec 13 01:18:49.633477 containerd[1442]: time="2024-12-13T01:18:49.633438977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:49.646984 containerd[1442]: time="2024-12-13T01:18:49.645992149Z" level=info msg="TearDown network for sandbox \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" successfully" Dec 13 01:18:49.646984 containerd[1442]: time="2024-12-13T01:18:49.646031069Z" level=info msg="StopPodSandbox for \"4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d\" returns successfully" Dec 13 01:18:49.646984 containerd[1442]: time="2024-12-13T01:18:49.646667472Z" level=info msg="TearDown network for sandbox \"d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2\" successfully" Dec 13 01:18:49.646984 containerd[1442]: time="2024-12-13T01:18:49.646692192Z" level=info msg="StopPodSandbox for \"d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2\" returns successfully" Dec 13 01:18:49.734771 kubelet[2548]: I1213 01:18:49.734690 2548 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70d51665-1707-4644-9c11-f52421fd6553-cilium-config-path\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.734771 kubelet[2548]: I1213 01:18:49.734737 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-host-proc-sys-net\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.734771 kubelet[2548]: I1213 01:18:49.734757 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-hostproc\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.734771 kubelet[2548]: I1213 01:18:49.734773 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-bpf-maps\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.735404 kubelet[2548]: I1213 01:18:49.734792 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70d51665-1707-4644-9c11-f52421fd6553-hubble-tls\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.735404 kubelet[2548]: I1213 01:18:49.734810 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrmb4\" (UniqueName: \"kubernetes.io/projected/9f1ac749-793e-4cb4-8adc-abab15ea4dfd-kube-api-access-vrmb4\") pod \"9f1ac749-793e-4cb4-8adc-abab15ea4dfd\" (UID: 
\"9f1ac749-793e-4cb4-8adc-abab15ea4dfd\") " Dec 13 01:18:49.735404 kubelet[2548]: I1213 01:18:49.734825 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-xtables-lock\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.735404 kubelet[2548]: I1213 01:18:49.734839 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-host-proc-sys-kernel\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.735404 kubelet[2548]: I1213 01:18:49.734855 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbrr2\" (UniqueName: \"kubernetes.io/projected/70d51665-1707-4644-9c11-f52421fd6553-kube-api-access-hbrr2\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.735404 kubelet[2548]: I1213 01:18:49.734870 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-etc-cni-netd\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.735597 kubelet[2548]: I1213 01:18:49.734885 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-lib-modules\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.735597 kubelet[2548]: I1213 01:18:49.734900 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/9f1ac749-793e-4cb4-8adc-abab15ea4dfd-cilium-config-path\") pod \"9f1ac749-793e-4cb4-8adc-abab15ea4dfd\" (UID: \"9f1ac749-793e-4cb4-8adc-abab15ea4dfd\") " Dec 13 01:18:49.735597 kubelet[2548]: I1213 01:18:49.734915 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cilium-run\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.735597 kubelet[2548]: I1213 01:18:49.734954 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cni-path\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.735597 kubelet[2548]: I1213 01:18:49.734974 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cilium-cgroup\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.735597 kubelet[2548]: I1213 01:18:49.734993 2548 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70d51665-1707-4644-9c11-f52421fd6553-clustermesh-secrets\") pod \"70d51665-1707-4644-9c11-f52421fd6553\" (UID: \"70d51665-1707-4644-9c11-f52421fd6553\") " Dec 13 01:18:49.739588 kubelet[2548]: I1213 01:18:49.739294 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:49.739588 kubelet[2548]: I1213 01:18:49.739373 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-hostproc" (OuterVolumeSpecName: "hostproc") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:49.739588 kubelet[2548]: I1213 01:18:49.739393 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:49.741383 kubelet[2548]: I1213 01:18:49.741163 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:49.741383 kubelet[2548]: I1213 01:18:49.741221 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:49.741383 kubelet[2548]: I1213 01:18:49.741238 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:49.747082 kubelet[2548]: I1213 01:18:49.745808 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cni-path" (OuterVolumeSpecName: "cni-path") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:49.747082 kubelet[2548]: I1213 01:18:49.745856 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:49.747204 kubelet[2548]: I1213 01:18:49.747173 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f1ac749-793e-4cb4-8adc-abab15ea4dfd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9f1ac749-793e-4cb4-8adc-abab15ea4dfd" (UID: "9f1ac749-793e-4cb4-8adc-abab15ea4dfd"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:18:49.747204 kubelet[2548]: I1213 01:18:49.747188 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d51665-1707-4644-9c11-f52421fd6553-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:18:49.747257 kubelet[2548]: I1213 01:18:49.747236 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:49.747799 kubelet[2548]: I1213 01:18:49.747764 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:49.747941 kubelet[2548]: I1213 01:18:49.747905 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d51665-1707-4644-9c11-f52421fd6553-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:18:49.749310 kubelet[2548]: I1213 01:18:49.749261 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d51665-1707-4644-9c11-f52421fd6553-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:18:49.749802 kubelet[2548]: I1213 01:18:49.749773 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d51665-1707-4644-9c11-f52421fd6553-kube-api-access-hbrr2" (OuterVolumeSpecName: "kube-api-access-hbrr2") pod "70d51665-1707-4644-9c11-f52421fd6553" (UID: "70d51665-1707-4644-9c11-f52421fd6553"). InnerVolumeSpecName "kube-api-access-hbrr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:18:49.750230 kubelet[2548]: I1213 01:18:49.750190 2548 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f1ac749-793e-4cb4-8adc-abab15ea4dfd-kube-api-access-vrmb4" (OuterVolumeSpecName: "kube-api-access-vrmb4") pod "9f1ac749-793e-4cb4-8adc-abab15ea4dfd" (UID: "9f1ac749-793e-4cb4-8adc-abab15ea4dfd"). InnerVolumeSpecName "kube-api-access-vrmb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:18:49.750427 kubelet[2548]: I1213 01:18:49.750406 2548 scope.go:117] "RemoveContainer" containerID="8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037" Dec 13 01:18:49.752068 containerd[1442]: time="2024-12-13T01:18:49.752031947Z" level=info msg="RemoveContainer for \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\"" Dec 13 01:18:49.755203 systemd[1]: Removed slice kubepods-besteffort-pod9f1ac749_793e_4cb4_8adc_abab15ea4dfd.slice - libcontainer container kubepods-besteffort-pod9f1ac749_793e_4cb4_8adc_abab15ea4dfd.slice. 
Dec 13 01:18:49.761643 systemd[1]: Removed slice kubepods-burstable-pod70d51665_1707_4644_9c11_f52421fd6553.slice - libcontainer container kubepods-burstable-pod70d51665_1707_4644_9c11_f52421fd6553.slice. Dec 13 01:18:49.762628 containerd[1442]: time="2024-12-13T01:18:49.762053989Z" level=info msg="RemoveContainer for \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\" returns successfully" Dec 13 01:18:49.761727 systemd[1]: kubepods-burstable-pod70d51665_1707_4644_9c11_f52421fd6553.slice: Consumed 6.603s CPU time. Dec 13 01:18:49.763041 kubelet[2548]: I1213 01:18:49.762891 2548 scope.go:117] "RemoveContainer" containerID="8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037" Dec 13 01:18:49.763202 containerd[1442]: time="2024-12-13T01:18:49.763160553Z" level=error msg="ContainerStatus for \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\": not found" Dec 13 01:18:49.773443 kubelet[2548]: E1213 01:18:49.773410 2548 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\": not found" containerID="8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037" Dec 13 01:18:49.773535 kubelet[2548]: I1213 01:18:49.773447 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037"} err="failed to get container status \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\": rpc error: code = NotFound desc = an error occurred when try to find container \"8fb8667701c4bd5e4f2638a1d481f901a352ddec0bd9331a1e7b63818e939037\": not found" Dec 13 01:18:49.773535 kubelet[2548]: I1213 
01:18:49.773525 2548 scope.go:117] "RemoveContainer" containerID="a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab" Dec 13 01:18:49.775909 containerd[1442]: time="2024-12-13T01:18:49.775636765Z" level=info msg="RemoveContainer for \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\"" Dec 13 01:18:49.779815 containerd[1442]: time="2024-12-13T01:18:49.779776262Z" level=info msg="RemoveContainer for \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\" returns successfully" Dec 13 01:18:49.780214 kubelet[2548]: I1213 01:18:49.780105 2548 scope.go:117] "RemoveContainer" containerID="c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e" Dec 13 01:18:49.782498 containerd[1442]: time="2024-12-13T01:18:49.782469993Z" level=info msg="RemoveContainer for \"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e\"" Dec 13 01:18:49.785303 containerd[1442]: time="2024-12-13T01:18:49.785216525Z" level=info msg="RemoveContainer for \"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e\" returns successfully" Dec 13 01:18:49.785713 kubelet[2548]: I1213 01:18:49.785454 2548 scope.go:117] "RemoveContainer" containerID="13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475" Dec 13 01:18:49.787153 containerd[1442]: time="2024-12-13T01:18:49.787130453Z" level=info msg="RemoveContainer for \"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475\"" Dec 13 01:18:49.789592 containerd[1442]: time="2024-12-13T01:18:49.789472942Z" level=info msg="RemoveContainer for \"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475\" returns successfully" Dec 13 01:18:49.789750 kubelet[2548]: I1213 01:18:49.789667 2548 scope.go:117] "RemoveContainer" containerID="f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5" Dec 13 01:18:49.790673 containerd[1442]: time="2024-12-13T01:18:49.790646667Z" level=info msg="RemoveContainer for 
\"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5\"" Dec 13 01:18:49.793995 containerd[1442]: time="2024-12-13T01:18:49.793918121Z" level=info msg="RemoveContainer for \"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5\" returns successfully" Dec 13 01:18:49.794186 kubelet[2548]: I1213 01:18:49.794136 2548 scope.go:117] "RemoveContainer" containerID="a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee" Dec 13 01:18:49.795487 containerd[1442]: time="2024-12-13T01:18:49.795234966Z" level=info msg="RemoveContainer for \"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee\"" Dec 13 01:18:49.797620 containerd[1442]: time="2024-12-13T01:18:49.797585736Z" level=info msg="RemoveContainer for \"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee\" returns successfully" Dec 13 01:18:49.798032 kubelet[2548]: I1213 01:18:49.797868 2548 scope.go:117] "RemoveContainer" containerID="a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab" Dec 13 01:18:49.798351 containerd[1442]: time="2024-12-13T01:18:49.798108338Z" level=error msg="ContainerStatus for \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\": not found" Dec 13 01:18:49.798407 kubelet[2548]: E1213 01:18:49.798236 2548 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\": not found" containerID="a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab" Dec 13 01:18:49.798407 kubelet[2548]: I1213 01:18:49.798260 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab"} 
err="failed to get container status \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"a56680e235e5ee41f8c0c5f631bb69e37d19fc580c7a0e6f1ed26cc619a517ab\": not found" Dec 13 01:18:49.798407 kubelet[2548]: I1213 01:18:49.798280 2548 scope.go:117] "RemoveContainer" containerID="c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e" Dec 13 01:18:49.798520 containerd[1442]: time="2024-12-13T01:18:49.798436659Z" level=error msg="ContainerStatus for \"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e\": not found" Dec 13 01:18:49.798618 kubelet[2548]: E1213 01:18:49.798547 2548 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e\": not found" containerID="c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e" Dec 13 01:18:49.798618 kubelet[2548]: I1213 01:18:49.798577 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e"} err="failed to get container status \"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c55eb362692384c55f1cac3ab140465ec9130088257dd34298db0c222ce64f0e\": not found" Dec 13 01:18:49.798618 kubelet[2548]: I1213 01:18:49.798593 2548 scope.go:117] "RemoveContainer" containerID="13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475" Dec 13 01:18:49.798824 containerd[1442]: time="2024-12-13T01:18:49.798770661Z" level=error msg="ContainerStatus for 
\"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475\": not found" Dec 13 01:18:49.799053 kubelet[2548]: E1213 01:18:49.798942 2548 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475\": not found" containerID="13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475" Dec 13 01:18:49.799053 kubelet[2548]: I1213 01:18:49.798969 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475"} err="failed to get container status \"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475\": rpc error: code = NotFound desc = an error occurred when try to find container \"13ec8b486fb0280414ae1e44bd209528907fc3bb5332ef53b08056aab6d1c475\": not found" Dec 13 01:18:49.799053 kubelet[2548]: I1213 01:18:49.798984 2548 scope.go:117] "RemoveContainer" containerID="f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5" Dec 13 01:18:49.799454 containerd[1442]: time="2024-12-13T01:18:49.799272183Z" level=error msg="ContainerStatus for \"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5\": not found" Dec 13 01:18:49.799648 kubelet[2548]: E1213 01:18:49.799423 2548 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5\": not found" 
containerID="f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5" Dec 13 01:18:49.799648 kubelet[2548]: I1213 01:18:49.799591 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5"} err="failed to get container status \"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3ccd2187530a6ee37cec6cb598d61a49656368685aafdf2d5aa68553b38f4a5\": not found" Dec 13 01:18:49.799648 kubelet[2548]: I1213 01:18:49.799609 2548 scope.go:117] "RemoveContainer" containerID="a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee" Dec 13 01:18:49.800077 containerd[1442]: time="2024-12-13T01:18:49.799979866Z" level=error msg="ContainerStatus for \"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee\": not found" Dec 13 01:18:49.800124 kubelet[2548]: E1213 01:18:49.800083 2548 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee\": not found" containerID="a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee" Dec 13 01:18:49.800124 kubelet[2548]: I1213 01:18:49.800103 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee"} err="failed to get container status \"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"a46ab0bf573666cd1f80a21207c9105cec7ee30db79690d3c670743d97fa68ee\": not found" Dec 13 
01:18:49.835645 kubelet[2548]: I1213 01:18:49.835487 2548 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835645 kubelet[2548]: I1213 01:18:49.835519 2548 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835645 kubelet[2548]: I1213 01:18:49.835527 2548 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835645 kubelet[2548]: I1213 01:18:49.835536 2548 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70d51665-1707-4644-9c11-f52421fd6553-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835645 kubelet[2548]: I1213 01:18:49.835544 2548 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835645 kubelet[2548]: I1213 01:18:49.835551 2548 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835645 kubelet[2548]: I1213 01:18:49.835558 2548 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70d51665-1707-4644-9c11-f52421fd6553-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835645 kubelet[2548]: I1213 01:18:49.835566 2548 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/70d51665-1707-4644-9c11-f52421fd6553-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835894 kubelet[2548]: I1213 01:18:49.835573 2548 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835894 kubelet[2548]: I1213 01:18:49.835581 2548 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vrmb4\" (UniqueName: \"kubernetes.io/projected/9f1ac749-793e-4cb4-8adc-abab15ea4dfd-kube-api-access-vrmb4\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835894 kubelet[2548]: I1213 01:18:49.835589 2548 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835894 kubelet[2548]: I1213 01:18:49.835597 2548 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835894 kubelet[2548]: I1213 01:18:49.835604 2548 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hbrr2\" (UniqueName: \"kubernetes.io/projected/70d51665-1707-4644-9c11-f52421fd6553-kube-api-access-hbrr2\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835894 kubelet[2548]: I1213 01:18:49.835611 2548 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835894 kubelet[2548]: I1213 01:18:49.835618 2548 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/70d51665-1707-4644-9c11-f52421fd6553-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:49.835894 kubelet[2548]: I1213 01:18:49.835626 2548 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f1ac749-793e-4cb4-8adc-abab15ea4dfd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:18:50.509147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d076e024c86a1e123137e0e458d0c95e0e84c352f1e1a128d31432b1a19bb5a2-rootfs.mount: Deactivated successfully. Dec 13 01:18:50.509241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d-rootfs.mount: Deactivated successfully. Dec 13 01:18:50.509295 systemd[1]: var-lib-kubelet-pods-9f1ac749\x2d793e\x2d4cb4\x2d8adc\x2dabab15ea4dfd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvrmb4.mount: Deactivated successfully. Dec 13 01:18:50.509358 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e356538ad86728c13cbce6964a7bc74086a33c9d5b0e8c6cc35c3f3550ef62d-shm.mount: Deactivated successfully. Dec 13 01:18:50.509420 systemd[1]: var-lib-kubelet-pods-70d51665\x2d1707\x2d4644\x2d9c11\x2df52421fd6553-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhbrr2.mount: Deactivated successfully. Dec 13 01:18:50.509470 systemd[1]: var-lib-kubelet-pods-70d51665\x2d1707\x2d4644\x2d9c11\x2df52421fd6553-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:18:50.509521 systemd[1]: var-lib-kubelet-pods-70d51665\x2d1707\x2d4644\x2d9c11\x2df52421fd6553-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:18:51.449715 sshd[4177]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:51.462026 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:34190.service: Deactivated successfully. 
Dec 13 01:18:51.464833 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:18:51.466048 systemd[1]: session-23.scope: Consumed 1.370s CPU time. Dec 13 01:18:51.467389 systemd-logind[1420]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:18:51.474821 systemd[1]: Started sshd@23-10.0.0.10:22-10.0.0.1:34198.service - OpenSSH per-connection server daemon (10.0.0.1:34198). Dec 13 01:18:51.476310 systemd-logind[1420]: Removed session 23. Dec 13 01:18:51.508171 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 34198 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:18:51.509466 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:51.513461 systemd-logind[1420]: New session 24 of user core. Dec 13 01:18:51.522105 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:18:51.562793 kubelet[2548]: I1213 01:18:51.562758 2548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d51665-1707-4644-9c11-f52421fd6553" path="/var/lib/kubelet/pods/70d51665-1707-4644-9c11-f52421fd6553/volumes" Dec 13 01:18:51.563510 kubelet[2548]: I1213 01:18:51.563472 2548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f1ac749-793e-4cb4-8adc-abab15ea4dfd" path="/var/lib/kubelet/pods/9f1ac749-793e-4cb4-8adc-abab15ea4dfd/volumes" Dec 13 01:18:51.612360 kubelet[2548]: E1213 01:18:51.612324 2548 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:18:52.464744 sshd[4339]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:52.478450 systemd[1]: sshd@23-10.0.0.10:22-10.0.0.1:34198.service: Deactivated successfully. Dec 13 01:18:52.481497 systemd[1]: session-24.scope: Deactivated successfully. 
Dec 13 01:18:52.482732 kubelet[2548]: I1213 01:18:52.482694 2548 topology_manager.go:215] "Topology Admit Handler" podUID="33bb2d78-fcd9-4612-9966-fc124dd9de63" podNamespace="kube-system" podName="cilium-qlknp" Dec 13 01:18:52.482841 kubelet[2548]: E1213 01:18:52.482752 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70d51665-1707-4644-9c11-f52421fd6553" containerName="mount-cgroup" Dec 13 01:18:52.482841 kubelet[2548]: E1213 01:18:52.482762 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70d51665-1707-4644-9c11-f52421fd6553" containerName="apply-sysctl-overwrites" Dec 13 01:18:52.482841 kubelet[2548]: E1213 01:18:52.482768 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70d51665-1707-4644-9c11-f52421fd6553" containerName="clean-cilium-state" Dec 13 01:18:52.482841 kubelet[2548]: E1213 01:18:52.482773 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70d51665-1707-4644-9c11-f52421fd6553" containerName="cilium-agent" Dec 13 01:18:52.482841 kubelet[2548]: E1213 01:18:52.482780 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70d51665-1707-4644-9c11-f52421fd6553" containerName="mount-bpf-fs" Dec 13 01:18:52.482841 kubelet[2548]: E1213 01:18:52.482786 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f1ac749-793e-4cb4-8adc-abab15ea4dfd" containerName="cilium-operator" Dec 13 01:18:52.482841 kubelet[2548]: I1213 01:18:52.482805 2548 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d51665-1707-4644-9c11-f52421fd6553" containerName="cilium-agent" Dec 13 01:18:52.482841 kubelet[2548]: I1213 01:18:52.482812 2548 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f1ac749-793e-4cb4-8adc-abab15ea4dfd" containerName="cilium-operator" Dec 13 01:18:52.483338 systemd-logind[1420]: Session 24 logged out. Waiting for processes to exit. 
Dec 13 01:18:52.498054 systemd[1]: Started sshd@24-10.0.0.10:22-10.0.0.1:56492.service - OpenSSH per-connection server daemon (10.0.0.1:56492).
Dec 13 01:18:52.505516 systemd-logind[1420]: Removed session 24.
Dec 13 01:18:52.515347 systemd[1]: Created slice kubepods-burstable-pod33bb2d78_fcd9_4612_9966_fc124dd9de63.slice - libcontainer container kubepods-burstable-pod33bb2d78_fcd9_4612_9966_fc124dd9de63.slice.
Dec 13 01:18:52.540132 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 56492 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:52.541584 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:52.547031 systemd-logind[1420]: New session 25 of user core.
Dec 13 01:18:52.550461 kubelet[2548]: I1213 01:18:52.550110 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33bb2d78-fcd9-4612-9966-fc124dd9de63-cilium-run\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550461 kubelet[2548]: I1213 01:18:52.550147 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33bb2d78-fcd9-4612-9966-fc124dd9de63-cni-path\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550461 kubelet[2548]: I1213 01:18:52.550166 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33bb2d78-fcd9-4612-9966-fc124dd9de63-cilium-config-path\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550461 kubelet[2548]: I1213 01:18:52.550185 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/33bb2d78-fcd9-4612-9966-fc124dd9de63-host-proc-sys-kernel\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550461 kubelet[2548]: I1213 01:18:52.550202 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/33bb2d78-fcd9-4612-9966-fc124dd9de63-bpf-maps\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550461 kubelet[2548]: I1213 01:18:52.550216 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33bb2d78-fcd9-4612-9966-fc124dd9de63-etc-cni-netd\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550691 kubelet[2548]: I1213 01:18:52.550231 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33bb2d78-fcd9-4612-9966-fc124dd9de63-lib-modules\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550691 kubelet[2548]: I1213 01:18:52.550246 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/33bb2d78-fcd9-4612-9966-fc124dd9de63-cilium-ipsec-secrets\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550691 kubelet[2548]: I1213 01:18:52.550261 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/33bb2d78-fcd9-4612-9966-fc124dd9de63-clustermesh-secrets\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550691 kubelet[2548]: I1213 01:18:52.550278 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33bb2d78-fcd9-4612-9966-fc124dd9de63-host-proc-sys-net\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550691 kubelet[2548]: I1213 01:18:52.550294 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwfgf\" (UniqueName: \"kubernetes.io/projected/33bb2d78-fcd9-4612-9966-fc124dd9de63-kube-api-access-gwfgf\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550797 kubelet[2548]: I1213 01:18:52.550313 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33bb2d78-fcd9-4612-9966-fc124dd9de63-cilium-cgroup\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550797 kubelet[2548]: I1213 01:18:52.550337 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33bb2d78-fcd9-4612-9966-fc124dd9de63-hostproc\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550797 kubelet[2548]: I1213 01:18:52.550351 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33bb2d78-fcd9-4612-9966-fc124dd9de63-xtables-lock\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.550797 kubelet[2548]: I1213 01:18:52.550365 2548 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/33bb2d78-fcd9-4612-9966-fc124dd9de63-hubble-tls\") pod \"cilium-qlknp\" (UID: \"33bb2d78-fcd9-4612-9966-fc124dd9de63\") " pod="kube-system/cilium-qlknp"
Dec 13 01:18:52.558081 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:18:52.606722 sshd[4352]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:52.616367 systemd[1]: sshd@24-10.0.0.10:22-10.0.0.1:56492.service: Deactivated successfully.
Dec 13 01:18:52.617999 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:18:52.619248 systemd-logind[1420]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:18:52.625250 systemd[1]: Started sshd@25-10.0.0.10:22-10.0.0.1:56498.service - OpenSSH per-connection server daemon (10.0.0.1:56498).
Dec 13 01:18:52.626265 systemd-logind[1420]: Removed session 25.
Dec 13 01:18:52.658437 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 56498 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q
Dec 13 01:18:52.661029 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:52.667739 systemd-logind[1420]: New session 26 of user core.
Dec 13 01:18:52.675067 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:18:52.818801 kubelet[2548]: E1213 01:18:52.818702 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:52.820108 containerd[1442]: time="2024-12-13T01:18:52.820055186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qlknp,Uid:33bb2d78-fcd9-4612-9966-fc124dd9de63,Namespace:kube-system,Attempt:0,}"
Dec 13 01:18:52.836207 containerd[1442]: time="2024-12-13T01:18:52.836130688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:18:52.836207 containerd[1442]: time="2024-12-13T01:18:52.836176928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:18:52.836207 containerd[1442]: time="2024-12-13T01:18:52.836190608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:52.837049 containerd[1442]: time="2024-12-13T01:18:52.836259368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:52.857198 systemd[1]: Started cri-containerd-3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c.scope - libcontainer container 3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c.
Dec 13 01:18:52.883059 containerd[1442]: time="2024-12-13T01:18:52.883017467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qlknp,Uid:33bb2d78-fcd9-4612-9966-fc124dd9de63,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c\""
Dec 13 01:18:52.883689 kubelet[2548]: E1213 01:18:52.883666 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:52.886068 containerd[1442]: time="2024-12-13T01:18:52.886027638Z" level=info msg="CreateContainer within sandbox \"3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:18:52.897560 containerd[1442]: time="2024-12-13T01:18:52.897504482Z" level=info msg="CreateContainer within sandbox \"3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2d387320ac85e0c39fb78408ac11b467f04dda7532b8553c41c87b794831c1ad\""
Dec 13 01:18:52.898168 containerd[1442]: time="2024-12-13T01:18:52.898079285Z" level=info msg="StartContainer for \"2d387320ac85e0c39fb78408ac11b467f04dda7532b8553c41c87b794831c1ad\""
Dec 13 01:18:52.933105 systemd[1]: Started cri-containerd-2d387320ac85e0c39fb78408ac11b467f04dda7532b8553c41c87b794831c1ad.scope - libcontainer container 2d387320ac85e0c39fb78408ac11b467f04dda7532b8553c41c87b794831c1ad.
Dec 13 01:18:52.953899 containerd[1442]: time="2024-12-13T01:18:52.953855698Z" level=info msg="StartContainer for \"2d387320ac85e0c39fb78408ac11b467f04dda7532b8553c41c87b794831c1ad\" returns successfully"
Dec 13 01:18:52.961190 systemd[1]: cri-containerd-2d387320ac85e0c39fb78408ac11b467f04dda7532b8553c41c87b794831c1ad.scope: Deactivated successfully.
Dec 13 01:18:52.991206 containerd[1442]: time="2024-12-13T01:18:52.991083120Z" level=info msg="shim disconnected" id=2d387320ac85e0c39fb78408ac11b467f04dda7532b8553c41c87b794831c1ad namespace=k8s.io
Dec 13 01:18:52.991206 containerd[1442]: time="2024-12-13T01:18:52.991137041Z" level=warning msg="cleaning up after shim disconnected" id=2d387320ac85e0c39fb78408ac11b467f04dda7532b8553c41c87b794831c1ad namespace=k8s.io
Dec 13 01:18:52.991206 containerd[1442]: time="2024-12-13T01:18:52.991144961Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:53.462761 kubelet[2548]: I1213 01:18:53.462720 2548 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:18:53Z","lastTransitionTime":"2024-12-13T01:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:18:53.767355 kubelet[2548]: E1213 01:18:53.767231 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:53.769861 containerd[1442]: time="2024-12-13T01:18:53.769676464Z" level=info msg="CreateContainer within sandbox \"3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:18:53.780279 containerd[1442]: time="2024-12-13T01:18:53.780165823Z" level=info msg="CreateContainer within sandbox \"3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e43dd77c269d50c2f16ba562af3867d2fdf5bae4442cc52391bc5cc205c1ff1\""
Dec 13 01:18:53.780838 containerd[1442]: time="2024-12-13T01:18:53.780809026Z" level=info msg="StartContainer for \"3e43dd77c269d50c2f16ba562af3867d2fdf5bae4442cc52391bc5cc205c1ff1\""
Dec 13 01:18:53.802036 systemd[1]: run-containerd-runc-k8s.io-3e43dd77c269d50c2f16ba562af3867d2fdf5bae4442cc52391bc5cc205c1ff1-runc.XrslT7.mount: Deactivated successfully.
Dec 13 01:18:53.820105 systemd[1]: Started cri-containerd-3e43dd77c269d50c2f16ba562af3867d2fdf5bae4442cc52391bc5cc205c1ff1.scope - libcontainer container 3e43dd77c269d50c2f16ba562af3867d2fdf5bae4442cc52391bc5cc205c1ff1.
Dec 13 01:18:53.842357 containerd[1442]: time="2024-12-13T01:18:53.842305695Z" level=info msg="StartContainer for \"3e43dd77c269d50c2f16ba562af3867d2fdf5bae4442cc52391bc5cc205c1ff1\" returns successfully"
Dec 13 01:18:53.849284 systemd[1]: cri-containerd-3e43dd77c269d50c2f16ba562af3867d2fdf5bae4442cc52391bc5cc205c1ff1.scope: Deactivated successfully.
Dec 13 01:18:53.869493 containerd[1442]: time="2024-12-13T01:18:53.869441036Z" level=info msg="shim disconnected" id=3e43dd77c269d50c2f16ba562af3867d2fdf5bae4442cc52391bc5cc205c1ff1 namespace=k8s.io
Dec 13 01:18:53.869493 containerd[1442]: time="2024-12-13T01:18:53.869490836Z" level=warning msg="cleaning up after shim disconnected" id=3e43dd77c269d50c2f16ba562af3867d2fdf5bae4442cc52391bc5cc205c1ff1 namespace=k8s.io
Dec 13 01:18:53.869493 containerd[1442]: time="2024-12-13T01:18:53.869499836Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:54.655440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e43dd77c269d50c2f16ba562af3867d2fdf5bae4442cc52391bc5cc205c1ff1-rootfs.mount: Deactivated successfully.
Dec 13 01:18:54.770576 kubelet[2548]: E1213 01:18:54.770546 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:54.774720 containerd[1442]: time="2024-12-13T01:18:54.774684578Z" level=info msg="CreateContainer within sandbox \"3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:18:54.804284 containerd[1442]: time="2024-12-13T01:18:54.803264842Z" level=info msg="CreateContainer within sandbox \"3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ff7f881f3cd77a719d4ba8547da6ce0809cc7424f61e066a61c304dfea707c9e\""
Dec 13 01:18:54.806214 containerd[1442]: time="2024-12-13T01:18:54.806177012Z" level=info msg="StartContainer for \"ff7f881f3cd77a719d4ba8547da6ce0809cc7424f61e066a61c304dfea707c9e\""
Dec 13 01:18:54.836130 systemd[1]: Started cri-containerd-ff7f881f3cd77a719d4ba8547da6ce0809cc7424f61e066a61c304dfea707c9e.scope - libcontainer container ff7f881f3cd77a719d4ba8547da6ce0809cc7424f61e066a61c304dfea707c9e.
Dec 13 01:18:54.857041 systemd[1]: cri-containerd-ff7f881f3cd77a719d4ba8547da6ce0809cc7424f61e066a61c304dfea707c9e.scope: Deactivated successfully.
Dec 13 01:18:54.858780 containerd[1442]: time="2024-12-13T01:18:54.858697923Z" level=info msg="StartContainer for \"ff7f881f3cd77a719d4ba8547da6ce0809cc7424f61e066a61c304dfea707c9e\" returns successfully"
Dec 13 01:18:54.877724 containerd[1442]: time="2024-12-13T01:18:54.877670632Z" level=info msg="shim disconnected" id=ff7f881f3cd77a719d4ba8547da6ce0809cc7424f61e066a61c304dfea707c9e namespace=k8s.io
Dec 13 01:18:54.877724 containerd[1442]: time="2024-12-13T01:18:54.877723872Z" level=warning msg="cleaning up after shim disconnected" id=ff7f881f3cd77a719d4ba8547da6ce0809cc7424f61e066a61c304dfea707c9e namespace=k8s.io
Dec 13 01:18:54.877880 containerd[1442]: time="2024-12-13T01:18:54.877733952Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:55.655467 systemd[1]: run-containerd-runc-k8s.io-ff7f881f3cd77a719d4ba8547da6ce0809cc7424f61e066a61c304dfea707c9e-runc.CFIZVz.mount: Deactivated successfully.
Dec 13 01:18:55.655561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff7f881f3cd77a719d4ba8547da6ce0809cc7424f61e066a61c304dfea707c9e-rootfs.mount: Deactivated successfully.
Dec 13 01:18:55.774208 kubelet[2548]: E1213 01:18:55.774184 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:55.778446 containerd[1442]: time="2024-12-13T01:18:55.778387354Z" level=info msg="CreateContainer within sandbox \"3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:18:55.790897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013638938.mount: Deactivated successfully.
Dec 13 01:18:55.792437 containerd[1442]: time="2024-12-13T01:18:55.792399524Z" level=info msg="CreateContainer within sandbox \"3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5986112737d241ff34ca0df7dbe729161c56cfa5c2e06c860582e8067f12626b\""
Dec 13 01:18:55.794014 containerd[1442]: time="2024-12-13T01:18:55.792823565Z" level=info msg="StartContainer for \"5986112737d241ff34ca0df7dbe729161c56cfa5c2e06c860582e8067f12626b\""
Dec 13 01:18:55.825128 systemd[1]: Started cri-containerd-5986112737d241ff34ca0df7dbe729161c56cfa5c2e06c860582e8067f12626b.scope - libcontainer container 5986112737d241ff34ca0df7dbe729161c56cfa5c2e06c860582e8067f12626b.
Dec 13 01:18:55.842142 systemd[1]: cri-containerd-5986112737d241ff34ca0df7dbe729161c56cfa5c2e06c860582e8067f12626b.scope: Deactivated successfully.
Dec 13 01:18:55.843766 containerd[1442]: time="2024-12-13T01:18:55.843681586Z" level=info msg="StartContainer for \"5986112737d241ff34ca0df7dbe729161c56cfa5c2e06c860582e8067f12626b\" returns successfully"
Dec 13 01:18:55.862612 containerd[1442]: time="2024-12-13T01:18:55.862563893Z" level=info msg="shim disconnected" id=5986112737d241ff34ca0df7dbe729161c56cfa5c2e06c860582e8067f12626b namespace=k8s.io
Dec 13 01:18:55.862612 containerd[1442]: time="2024-12-13T01:18:55.862610133Z" level=warning msg="cleaning up after shim disconnected" id=5986112737d241ff34ca0df7dbe729161c56cfa5c2e06c860582e8067f12626b namespace=k8s.io
Dec 13 01:18:55.862612 containerd[1442]: time="2024-12-13T01:18:55.862618213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:56.613451 kubelet[2548]: E1213 01:18:56.613403 2548 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:18:56.655520 systemd[1]: run-containerd-runc-k8s.io-5986112737d241ff34ca0df7dbe729161c56cfa5c2e06c860582e8067f12626b-runc.IOcMqg.mount: Deactivated successfully.
Dec 13 01:18:56.655617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5986112737d241ff34ca0df7dbe729161c56cfa5c2e06c860582e8067f12626b-rootfs.mount: Deactivated successfully.
Dec 13 01:18:56.778166 kubelet[2548]: E1213 01:18:56.778136 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:56.780094 containerd[1442]: time="2024-12-13T01:18:56.780060154Z" level=info msg="CreateContainer within sandbox \"3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:18:56.797268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1125778224.mount: Deactivated successfully.
Dec 13 01:18:56.799401 containerd[1442]: time="2024-12-13T01:18:56.799364981Z" level=info msg="CreateContainer within sandbox \"3a42d5e65e583c183aaf79e3263af484815010c2185e6b0f10a86326cb798b1c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fbad34096d0b162269c930d1ba1fe0053f63dfd9c1258061272e8bed5eff0dc9\""
Dec 13 01:18:56.799831 containerd[1442]: time="2024-12-13T01:18:56.799809982Z" level=info msg="StartContainer for \"fbad34096d0b162269c930d1ba1fe0053f63dfd9c1258061272e8bed5eff0dc9\""
Dec 13 01:18:56.827509 systemd[1]: Started cri-containerd-fbad34096d0b162269c930d1ba1fe0053f63dfd9c1258061272e8bed5eff0dc9.scope - libcontainer container fbad34096d0b162269c930d1ba1fe0053f63dfd9c1258061272e8bed5eff0dc9.
Dec 13 01:18:56.856965 containerd[1442]: time="2024-12-13T01:18:56.856915579Z" level=info msg="StartContainer for \"fbad34096d0b162269c930d1ba1fe0053f63dfd9c1258061272e8bed5eff0dc9\" returns successfully"
Dec 13 01:18:57.119047 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 13 01:18:57.784038 kubelet[2548]: E1213 01:18:57.784004 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:57.799196 kubelet[2548]: I1213 01:18:57.799137 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qlknp" podStartSLOduration=5.799119885 podStartE2EDuration="5.799119885s" podCreationTimestamp="2024-12-13 01:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:57.798272123 +0000 UTC m=+86.310733100" watchObservedRunningTime="2024-12-13 01:18:57.799119885 +0000 UTC m=+86.311580822"
Dec 13 01:18:58.820118 kubelet[2548]: E1213 01:18:58.820076 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:59.951535 systemd-networkd[1379]: lxc_health: Link UP
Dec 13 01:18:59.957858 systemd-networkd[1379]: lxc_health: Gained carrier
Dec 13 01:19:00.821039 kubelet[2548]: E1213 01:19:00.820695 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:19:01.561049 kubelet[2548]: E1213 01:19:01.560648 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:19:01.584117 systemd-networkd[1379]: lxc_health: Gained IPv6LL
Dec 13 01:19:01.793193 kubelet[2548]: E1213 01:19:01.792918 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:19:02.795216 kubelet[2548]: E1213 01:19:02.795179 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:19:05.478954 sshd[4360]: pam_unix(sshd:session): session closed for user core
Dec 13 01:19:05.481903 systemd[1]: sshd@25-10.0.0.10:22-10.0.0.1:56498.service: Deactivated successfully.
Dec 13 01:19:05.483679 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:19:05.485246 systemd-logind[1420]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:19:05.486150 systemd-logind[1420]: Removed session 26.
Dec 13 01:19:05.560293 kubelet[2548]: E1213 01:19:05.560226 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"