Dec 13 13:28:23.859706 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:28:23.859728 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:28:23.859739 kernel: BIOS-provided physical RAM map:
Dec 13 13:28:23.859746 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 13:28:23.859752 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 13:28:23.859759 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 13:28:23.859766 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 13:28:23.859773 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 13:28:23.859780 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 13:28:23.859788 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 13:28:23.859795 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:28:23.859801 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 13:28:23.859808 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 13:28:23.859814 kernel: NX (Execute Disable) protection: active
Dec 13 13:28:23.859842 kernel: APIC: Static calls initialized
Dec 13 13:28:23.859852 kernel: SMBIOS 2.8 present.
Dec 13 13:28:23.859859 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 13:28:23.859867 kernel: Hypervisor detected: KVM
Dec 13 13:28:23.859874 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:28:23.859881 kernel: kvm-clock: using sched offset of 2275863179 cycles
Dec 13 13:28:23.859888 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:28:23.859896 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 13:28:23.859903 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:28:23.859911 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:28:23.859918 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 13:28:23.859928 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 13:28:23.859935 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:28:23.859943 kernel: Using GB pages for direct mapping
Dec 13 13:28:23.859950 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:28:23.859957 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 13:28:23.859964 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:28:23.859972 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:28:23.859979 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:28:23.859986 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 13:28:23.859995 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:28:23.860003 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:28:23.860010 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:28:23.860017 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:28:23.860024 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 13:28:23.860032 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 13:28:23.860042 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 13:28:23.860052 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 13:28:23.860059 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 13:28:23.860067 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 13:28:23.860074 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 13:28:23.860082 kernel: No NUMA configuration found
Dec 13 13:28:23.860089 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 13:28:23.860097 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 13:28:23.860106 kernel: Zone ranges:
Dec 13 13:28:23.860114 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:28:23.860121 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 13:28:23.860129 kernel: Normal empty
Dec 13 13:28:23.860136 kernel: Movable zone start for each node
Dec 13 13:28:23.860144 kernel: Early memory node ranges
Dec 13 13:28:23.860157 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 13:28:23.860165 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 13:28:23.860172 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 13:28:23.860182 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:28:23.860189 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 13:28:23.860197 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 13:28:23.860204 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 13:28:23.860212 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:28:23.860219 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:28:23.860227 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 13:28:23.860234 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:28:23.860241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:28:23.860249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:28:23.860259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:28:23.860266 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:28:23.860273 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 13:28:23.860281 kernel: TSC deadline timer available
Dec 13 13:28:23.860288 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 13:28:23.860295 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 13:28:23.860303 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 13:28:23.860310 kernel: kvm-guest: setup PV sched yield
Dec 13 13:28:23.860318 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 13:28:23.860327 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:28:23.860335 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:28:23.860343 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 13:28:23.860350 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 13:28:23.860357 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 13:28:23.860365 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 13:28:23.860372 kernel: kvm-guest: PV spinlocks enabled
Dec 13 13:28:23.860379 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:28:23.860388 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:28:23.860398 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:28:23.860406 kernel: random: crng init done
Dec 13 13:28:23.860413 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:28:23.860421 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:28:23.860428 kernel: Fallback order for Node 0: 0
Dec 13 13:28:23.860436 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 13:28:23.860443 kernel: Policy zone: DMA32
Dec 13 13:28:23.860450 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:28:23.860460 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 138948K reserved, 0K cma-reserved)
Dec 13 13:28:23.860468 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:28:23.860475 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:28:23.860483 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:28:23.860490 kernel: Dynamic Preempt: voluntary
Dec 13 13:28:23.860497 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:28:23.860506 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:28:23.860513 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:28:23.860521 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:28:23.860535 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:28:23.860544 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:28:23.860554 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:28:23.860561 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:28:23.860569 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 13:28:23.860576 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:28:23.860584 kernel: Console: colour VGA+ 80x25
Dec 13 13:28:23.860591 kernel: printk: console [ttyS0] enabled
Dec 13 13:28:23.860598 kernel: ACPI: Core revision 20230628
Dec 13 13:28:23.860606 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 13:28:23.860616 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:28:23.860623 kernel: x2apic enabled
Dec 13 13:28:23.860631 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:28:23.860638 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 13:28:23.860646 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 13:28:23.860653 kernel: kvm-guest: setup PV IPIs
Dec 13 13:28:23.860668 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 13:28:23.860678 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 13:28:23.860686 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 13:28:23.860694 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 13:28:23.860701 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 13:28:23.860709 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 13:28:23.860719 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:28:23.860727 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:28:23.860735 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:28:23.860743 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:28:23.860752 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 13:28:23.860760 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 13:28:23.860770 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:28:23.860779 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:28:23.860788 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 13:28:23.860797 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 13:28:23.860805 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 13:28:23.860813 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:28:23.860821 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:28:23.860843 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:28:23.860851 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:28:23.860859 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 13:28:23.860867 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:28:23.860874 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:28:23.860882 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:28:23.860890 kernel: landlock: Up and running.
Dec 13 13:28:23.860897 kernel: SELinux: Initializing.
Dec 13 13:28:23.860905 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:28:23.860915 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:28:23.860923 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 13:28:23.860931 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:28:23.860939 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:28:23.860947 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:28:23.860954 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 13:28:23.860962 kernel: ... version: 0
Dec 13 13:28:23.860970 kernel: ... bit width: 48
Dec 13 13:28:23.860980 kernel: ... generic registers: 6
Dec 13 13:28:23.860988 kernel: ... value mask: 0000ffffffffffff
Dec 13 13:28:23.860996 kernel: ... max period: 00007fffffffffff
Dec 13 13:28:23.861003 kernel: ... fixed-purpose events: 0
Dec 13 13:28:23.861011 kernel: ... event mask: 000000000000003f
Dec 13 13:28:23.861019 kernel: signal: max sigframe size: 1776
Dec 13 13:28:23.861026 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:28:23.861034 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:28:23.861042 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:28:23.861050 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:28:23.861060 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 13:28:23.861067 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:28:23.861075 kernel: smpboot: Max logical packages: 1
Dec 13 13:28:23.861083 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 13:28:23.861091 kernel: devtmpfs: initialized
Dec 13 13:28:23.861098 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:28:23.861106 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:28:23.861114 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:28:23.861122 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:28:23.861132 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:28:23.861140 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:28:23.861148 kernel: audit: type=2000 audit(1734096503.585:1): state=initialized audit_enabled=0 res=1
Dec 13 13:28:23.861160 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:28:23.861168 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:28:23.861176 kernel: cpuidle: using governor menu
Dec 13 13:28:23.861183 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:28:23.861191 kernel: dca service started, version 1.12.1
Dec 13 13:28:23.861199 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 13:28:23.861209 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 13:28:23.861217 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:28:23.861225 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:28:23.861233 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:28:23.861240 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:28:23.861248 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:28:23.861256 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:28:23.861264 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:28:23.861271 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:28:23.861281 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:28:23.861289 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:28:23.861296 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:28:23.861304 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:28:23.861312 kernel: ACPI: Interpreter enabled
Dec 13 13:28:23.861319 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 13:28:23.861327 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:28:23.861335 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:28:23.861343 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 13:28:23.861353 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 13:28:23.861360 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:28:23.861539 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:28:23.861674 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 13:28:23.861800 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 13:28:23.861811 kernel: PCI host bridge to bus 0000:00
Dec 13 13:28:23.861950 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:28:23.862067 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:28:23.862185 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:28:23.862296 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 13:28:23.862406 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 13:28:23.862516 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 13:28:23.862638 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:28:23.862789 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 13:28:23.862940 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 13:28:23.863062 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 13:28:23.863192 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 13:28:23.863313 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 13:28:23.863433 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 13:28:23.863571 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:28:23.863702 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 13:28:23.863838 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 13:28:23.863963 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 13:28:23.864093 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 13:28:23.864226 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 13:28:23.864347 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 13:28:23.864469 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 13:28:23.864613 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:28:23.864741 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 13:28:23.864880 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 13:28:23.865004 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 13:28:23.865124 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 13:28:23.865265 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 13:28:23.865388 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 13:28:23.865522 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 13:28:23.865655 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 13:28:23.865776 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 13:28:23.865948 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 13:28:23.866070 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 13:28:23.866081 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:28:23.866093 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:28:23.866101 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:28:23.866109 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:28:23.866117 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 13:28:23.866125 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 13:28:23.866133 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 13:28:23.866140 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 13:28:23.866148 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 13:28:23.866163 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 13:28:23.866174 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 13:28:23.866182 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 13:28:23.866189 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 13:28:23.866197 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 13:28:23.866205 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 13:28:23.866212 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 13:28:23.866220 kernel: iommu: Default domain type: Translated
Dec 13 13:28:23.866228 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:28:23.866236 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:28:23.866246 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:28:23.866253 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 13:28:23.866261 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 13:28:23.866383 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 13:28:23.866503 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 13:28:23.866632 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 13:28:23.866643 kernel: vgaarb: loaded
Dec 13 13:28:23.866651 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 13:28:23.866659 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 13:28:23.866671 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:28:23.866679 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:28:23.866687 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:28:23.866694 kernel: pnp: PnP ACPI init
Dec 13 13:28:23.866841 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 13:28:23.866853 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 13:28:23.866862 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:28:23.866870 kernel: NET: Registered PF_INET protocol family
Dec 13 13:28:23.866881 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:28:23.866889 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:28:23.866897 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:28:23.866905 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:28:23.866913 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:28:23.866921 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:28:23.866929 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:28:23.866937 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:28:23.866947 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:28:23.866955 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:28:23.867068 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:28:23.867188 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:28:23.867299 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:28:23.867408 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 13:28:23.867518 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 13:28:23.867639 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 13:28:23.867651 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:28:23.867663 kernel: Initialise system trusted keyrings
Dec 13 13:28:23.867671 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:28:23.867679 kernel: Key type asymmetric registered
Dec 13 13:28:23.867687 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:28:23.867695 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 13:28:23.867702 kernel: io scheduler mq-deadline registered
Dec 13 13:28:23.867711 kernel: io scheduler kyber registered
Dec 13 13:28:23.867719 kernel: io scheduler bfq registered
Dec 13 13:28:23.867726 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 13:28:23.867737 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 13:28:23.867745 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 13:28:23.867753 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 13:28:23.867761 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:28:23.867769 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 13:28:23.867777 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 13:28:23.867784 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 13:28:23.867792 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 13:28:23.867800 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 13:28:23.867942 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 13:28:23.868057 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 13:28:23.868181 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T13:28:23 UTC (1734096503)
Dec 13 13:28:23.868295 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 13:28:23.868311 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 13:28:23.868319 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:28:23.868327 kernel: Segment Routing with IPv6
Dec 13 13:28:23.868335 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:28:23.868347 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:28:23.868355 kernel: Key type dns_resolver registered
Dec 13 13:28:23.868362 kernel: IPI shorthand broadcast: enabled
Dec 13 13:28:23.868370 kernel: sched_clock: Marking stable (546003800, 105441933)->(695553008, -44107275)
Dec 13 13:28:23.868378 kernel: registered taskstats version 1
Dec 13 13:28:23.868386 kernel: Loading compiled-in X.509 certificates
Dec 13 13:28:23.868394 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162'
Dec 13 13:28:23.868402 kernel: Key type .fscrypt registered
Dec 13 13:28:23.868409 kernel: Key type fscrypt-provisioning registered
Dec 13 13:28:23.868420 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:28:23.868427 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:28:23.868435 kernel: ima: No architecture policies found
Dec 13 13:28:23.868443 kernel: clk: Disabling unused clocks
Dec 13 13:28:23.868451 kernel: Freeing unused kernel image (initmem) memory: 43328K
Dec 13 13:28:23.868459 kernel: Write protecting the kernel read-only data: 38912k
Dec 13 13:28:23.868467 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Dec 13 13:28:23.868475 kernel: Run /init as init process
Dec 13 13:28:23.868482 kernel: with arguments:
Dec 13 13:28:23.868492 kernel: /init
Dec 13 13:28:23.868500 kernel: with environment:
Dec 13 13:28:23.868508 kernel: HOME=/
Dec 13 13:28:23.868515 kernel: TERM=linux
Dec 13 13:28:23.868525 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:28:23.868538 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:28:23.868551 systemd[1]: Detected virtualization kvm.
Dec 13 13:28:23.868564 systemd[1]: Detected architecture x86-64.
Dec 13 13:28:23.868575 systemd[1]: Running in initrd.
Dec 13 13:28:23.868585 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:28:23.868596 systemd[1]: Hostname set to .
Dec 13 13:28:23.868605 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:28:23.868614 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:28:23.868622 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:28:23.868631 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:28:23.868643 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:28:23.868664 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:28:23.868675 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:28:23.868684 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:28:23.868694 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:28:23.868705 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:28:23.868714 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:28:23.868723 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:28:23.868731 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:28:23.868740 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:28:23.868749 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:28:23.868757 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:28:23.868766 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:28:23.868777 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:28:23.868789 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:28:23.868799 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:28:23.868808 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:28:23.868817 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:28:23.868880 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:28:23.868889 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:28:23.868898 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:28:23.868907 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:28:23.868918 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:28:23.868927 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:28:23.868935 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:28:23.868944 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:28:23.868953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:28:23.868961 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:28:23.868970 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:28:23.868979 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:28:23.869010 systemd-journald[193]: Collecting audit messages is disabled.
Dec 13 13:28:23.869034 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:28:23.869045 systemd-journald[193]: Journal started
Dec 13 13:28:23.869065 systemd-journald[193]: Runtime Journal (/run/log/journal/68768b4a280c447d9f3cd4ca9e5239fb) is 6.0M, max 48.3M, 42.3M free.
Dec 13 13:28:23.869847 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:28:23.871562 systemd-modules-load[195]: Inserted module 'overlay'
Dec 13 13:28:23.905473 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:28:23.905491 kernel: Bridge firewalling registered
Dec 13 13:28:23.897658 systemd-modules-load[195]: Inserted module 'br_netfilter'
Dec 13 13:28:23.904025 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:28:23.905665 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:28:23.907625 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:28:23.921966 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:28:23.923026 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:28:23.926364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:28:23.931945 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:28:23.934294 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:28:23.938365 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:28:23.941088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:28:23.944483 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:28:23.946595 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:28:23.950852 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:28:23.959390 dracut-cmdline[230]: dracut-dracut-053
Dec 13 13:28:23.962558 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:28:23.985222 systemd-resolved[232]: Positive Trust Anchors:
Dec 13 13:28:23.985238 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:28:23.985269 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:28:23.987647 systemd-resolved[232]: Defaulting to hostname 'linux'.
Dec 13 13:28:23.988711 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:28:23.994250 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:28:24.048858 kernel: SCSI subsystem initialized
Dec 13 13:28:24.057847 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:28:24.068857 kernel: iscsi: registered transport (tcp)
Dec 13 13:28:24.088849 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:28:24.088874 kernel: QLogic iSCSI HBA Driver
Dec 13 13:28:24.130508 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:28:24.139940 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:28:24.163847 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:28:24.163878 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:28:24.165379 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:28:24.204847 kernel: raid6: avx2x4 gen() 30336 MB/s
Dec 13 13:28:24.221848 kernel: raid6: avx2x2 gen() 30821 MB/s
Dec 13 13:28:24.238924 kernel: raid6: avx2x1 gen() 25999 MB/s
Dec 13 13:28:24.238937 kernel: raid6: using algorithm avx2x2 gen() 30821 MB/s
Dec 13 13:28:24.256943 kernel: raid6: .... xor() 19973 MB/s, rmw enabled
Dec 13 13:28:24.256963 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 13:28:24.276846 kernel: xor: automatically using best checksumming function avx
Dec 13 13:28:24.418855 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:28:24.430014 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:28:24.442994 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:28:24.455041 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Dec 13 13:28:24.459795 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:28:24.466975 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:28:24.481878 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Dec 13 13:28:24.510761 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:28:24.517989 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:28:24.580788 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:28:24.590032 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:28:24.602490 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:28:24.605103 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:28:24.607539 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:28:24.609789 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:28:24.619051 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:28:24.624907 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 13:28:24.628849 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 13:28:24.654166 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:28:24.654459 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:28:24.654495 kernel: GPT:9289727 != 19775487
Dec 13 13:28:24.654519 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:28:24.654537 kernel: GPT:9289727 != 19775487
Dec 13 13:28:24.654557 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:28:24.654579 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:28:24.634505 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:28:24.649263 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:28:24.658138 kernel: libata version 3.00 loaded.
Dec 13 13:28:24.649405 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:28:24.660890 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 13:28:24.660911 kernel: AES CTR mode by8 optimization enabled
Dec 13 13:28:24.651446 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:28:24.652596 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:28:24.652851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:28:24.654806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:28:24.668128 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 13:28:24.699599 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 13:28:24.699619 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 13:28:24.699770 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 13:28:24.699930 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (462)
Dec 13 13:28:24.699943 kernel: scsi host0: ahci
Dec 13 13:28:24.700093 kernel: scsi host1: ahci
Dec 13 13:28:24.700246 kernel: scsi host2: ahci
Dec 13 13:28:24.700389 kernel: scsi host3: ahci
Dec 13 13:28:24.702111 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (469)
Dec 13 13:28:24.702141 kernel: scsi host4: ahci
Dec 13 13:28:24.702333 kernel: scsi host5: ahci
Dec 13 13:28:24.702522 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Dec 13 13:28:24.702539 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Dec 13 13:28:24.702553 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Dec 13 13:28:24.702567 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Dec 13 13:28:24.702581 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Dec 13 13:28:24.702595 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Dec 13 13:28:24.672086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:28:24.697060 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:28:24.733987 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:28:24.736592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:28:24.751274 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:28:24.752516 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:28:24.760358 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:28:24.771957 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:28:24.774950 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:28:24.782102 disk-uuid[556]: Primary Header is updated.
Dec 13 13:28:24.782102 disk-uuid[556]: Secondary Entries is updated.
Dec 13 13:28:24.782102 disk-uuid[556]: Secondary Header is updated.
Dec 13 13:28:24.786856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:28:24.791000 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:28:24.798318 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:28:25.008858 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 13:28:25.008954 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 13:28:25.008966 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 13:28:25.008976 kernel: ata3.00: applying bridge limits
Dec 13 13:28:25.010871 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 13:28:25.010899 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 13:28:25.011839 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 13:28:25.011855 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 13:28:25.012849 kernel: ata3.00: configured for UDMA/100
Dec 13 13:28:25.013856 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 13:28:25.060399 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 13:28:25.071533 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 13:28:25.071552 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 13:28:25.790962 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:28:25.791019 disk-uuid[558]: The operation has completed successfully.
Dec 13 13:28:25.826013 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:28:25.826159 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:28:25.850971 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:28:25.853736 sh[593]: Success
Dec 13 13:28:25.865955 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 13:28:25.898405 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:28:25.911123 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:28:25.913254 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:28:25.925098 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52
Dec 13 13:28:25.925128 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:28:25.925139 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:28:25.926136 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:28:25.926870 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:28:25.931361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:28:25.933730 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:28:25.952999 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:28:25.954265 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:28:25.963998 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:28:25.964034 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:28:25.964049 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:28:25.966867 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:28:25.975488 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:28:25.977406 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:28:25.986180 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:28:25.991982 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:28:26.049041 ignition[688]: Ignition 2.20.0
Dec 13 13:28:26.049054 ignition[688]: Stage: fetch-offline
Dec 13 13:28:26.049110 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:28:26.049122 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:28:26.049254 ignition[688]: parsed url from cmdline: ""
Dec 13 13:28:26.049260 ignition[688]: no config URL provided
Dec 13 13:28:26.049267 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:28:26.049278 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:28:26.049312 ignition[688]: op(1): [started] loading QEMU firmware config module
Dec 13 13:28:26.049319 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 13:28:26.060145 ignition[688]: op(1): [finished] loading QEMU firmware config module
Dec 13 13:28:26.074083 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:28:26.087961 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:28:26.103788 ignition[688]: parsing config with SHA512: 48f83a0705f620ab8b876b2434b647b45af8791910cf680bfd72ea12e566a20738e0901398a3a6933f3130cb18ac869e0aa204cd3fea31b0ef03e982a55f1a77
Dec 13 13:28:26.107713 unknown[688]: fetched base config from "system"
Dec 13 13:28:26.107727 unknown[688]: fetched user config from "qemu"
Dec 13 13:28:26.108193 ignition[688]: fetch-offline: fetch-offline passed
Dec 13 13:28:26.108265 ignition[688]: Ignition finished successfully
Dec 13 13:28:26.112034 systemd-networkd[782]: lo: Link UP
Dec 13 13:28:26.112043 systemd-networkd[782]: lo: Gained carrier
Dec 13 13:28:26.113935 systemd-networkd[782]: Enumeration completed
Dec 13 13:28:26.114272 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:28:26.114392 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:28:26.114397 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:28:26.114733 systemd[1]: Reached target network.target - Network.
Dec 13 13:28:26.115792 systemd-networkd[782]: eth0: Link UP
Dec 13 13:28:26.115796 systemd-networkd[782]: eth0: Gained carrier
Dec 13 13:28:26.115804 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:28:26.123525 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:28:26.124555 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:28:26.133002 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:28:26.139897 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:28:26.145361 ignition[785]: Ignition 2.20.0
Dec 13 13:28:26.145372 ignition[785]: Stage: kargs
Dec 13 13:28:26.145548 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:28:26.145561 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:28:26.146475 ignition[785]: kargs: kargs passed
Dec 13 13:28:26.146520 ignition[785]: Ignition finished successfully
Dec 13 13:28:26.152803 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:28:26.165008 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:28:26.175483 ignition[794]: Ignition 2.20.0
Dec 13 13:28:26.175494 ignition[794]: Stage: disks
Dec 13 13:28:26.175679 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:28:26.175692 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:28:26.176650 ignition[794]: disks: disks passed
Dec 13 13:28:26.176702 ignition[794]: Ignition finished successfully
Dec 13 13:28:26.182014 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:28:26.182680 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:28:26.185054 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:28:26.187288 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:28:26.187671 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:28:26.188180 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:28:26.198945 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:28:26.211664 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 13:28:26.217285 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:28:26.232908 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:28:26.313849 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 13:28:26.314539 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:28:26.315634 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:28:26.329901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:28:26.331794 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:28:26.333309 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:28:26.333358 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:28:26.343525 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812)
Dec 13 13:28:26.343552 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:28:26.343567 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:28:26.333385 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:28:26.346615 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:28:26.339578 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:28:26.348440 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:28:26.352944 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:28:26.355075 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:28:26.382714 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:28:26.387315 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:28:26.390762 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:28:26.393893 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:28:26.473246 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:28:26.487911 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:28:26.489029 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:28:26.498855 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:28:26.512662 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:28:26.521459 ignition[926]: INFO : Ignition 2.20.0
Dec 13 13:28:26.521459 ignition[926]: INFO : Stage: mount
Dec 13 13:28:26.523096 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:28:26.523096 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:28:26.523096 ignition[926]: INFO : mount: mount passed
Dec 13 13:28:26.523096 ignition[926]: INFO : Ignition finished successfully
Dec 13 13:28:26.528378 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:28:26.534954 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:28:26.924480 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:28:26.938004 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:28:26.945317 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Dec 13 13:28:26.945341 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:28:26.945353 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:28:26.946842 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:28:26.948842 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:28:26.950355 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:28:26.967546 ignition[956]: INFO : Ignition 2.20.0
Dec 13 13:28:26.967546 ignition[956]: INFO : Stage: files
Dec 13 13:28:26.969280 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:28:26.969280 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:28:26.971782 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:28:26.973027 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:28:26.973027 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:28:26.976390 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:28:26.977773 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:28:26.979299 unknown[956]: wrote ssh authorized keys file for user: core
Dec 13 13:28:26.980378 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:28:26.981791 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:28:26.983792 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 13:28:27.020104 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 13:28:27.084342 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:28:27.086566 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:28:27.086566 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 13:28:27.448944 systemd-networkd[782]: eth0: Gained IPv6LL
Dec 13 13:28:27.593747 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 13:28:27.700170 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:28:27.702127 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:28:27.704024 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:28:27.705908 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:28:27.707930 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:28:27.709853 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:28:27.711819 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:28:27.713769 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:28:27.715739 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:28:27.717881 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:28:27.719974 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:28:27.721973 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:28:27.724818 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:28:27.727592 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:28:27.729963 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 13:28:28.038495 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 13:28:28.334861 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:28:28.334861 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 13:28:28.338735 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:28:28.341138 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:28:28.341138 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 13:28:28.341138 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 13:28:28.345890 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:28:28.348050 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:28:28.348050 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 13:28:28.348050 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:28:28.371280 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:28:28.378180 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:28:28.380021 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:28:28.380021 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:28:28.380021 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:28:28.380021 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:28:28.380021 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:28:28.380021 ignition[956]: INFO : files: files passed
Dec 13 13:28:28.380021 ignition[956]: INFO : Ignition finished successfully
Dec 13 13:28:28.391685 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:28:28.402932 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:28:28.403822 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:28:28.411429 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:28:28.411545 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:28:28.414679 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 13:28:28.416106 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:28:28.416106 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:28:28.420534 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:28:28.423463 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:28:28.424247 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:28:28.432964 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:28:28.456505 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:28:28.456633 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:28:28.457224 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:28:28.460384 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:28:28.462367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:28:28.470963 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:28:28.484548 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:28:28.486438 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:28:28.500798 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:28:28.501233 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:28:28.501606 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:28:28.502138 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:28:28.502275 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:28:28.509283 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:28:28.511461 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:28:28.512292 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:28:28.514584 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:28:28.516794 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:28:28.518942 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:28:28.521290 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:28:28.523301 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:28:28.525700 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:28:28.527677 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:28:28.529583 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:28:28.529714 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:28:28.532748 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:28:28.533340 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:28:28.536175 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:28:28.539019 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:28:28.541632 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:28:28.541764 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:28:28.544737 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:28:28.544882 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:28:28.545478 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:28:28.548386 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:28:28.552888 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:28:28.553413 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:28:28.556303 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:28:28.558102 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:28:28.558211 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:28:28.560247 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:28:28.560350 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:28:28.562220 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:28:28.562348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:28:28.563937 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:28:28.564060 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:28:28.576970 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:28:28.577249 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:28:28.577357 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:28:28.579934 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:28:28.583502 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:28:28.583620 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:28:28.590153 ignition[1010]: INFO : Ignition 2.20.0
Dec 13 13:28:28.590153 ignition[1010]: INFO : Stage: umount
Dec 13 13:28:28.590153 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:28:28.590153 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:28:28.590153 ignition[1010]: INFO : umount: umount passed
Dec 13 13:28:28.586053 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:28:28.595607 ignition[1010]: INFO : Ignition finished successfully
Dec 13 13:28:28.586207 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:28:28.595270 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:28:28.595383 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:28:28.596636 systemd[1]: Stopped target network.target - Network.
Dec 13 13:28:28.598469 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:28:28.598582 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:28:28.600444 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:28:28.600540 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:28:28.602410 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:28:28.602504 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:28:28.604328 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:28:28.604426 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:28:28.606315 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:28:28.608208 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:28:28.611536 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:28:28.613104 systemd-networkd[782]: eth0: DHCPv6 lease lost
Dec 13 13:28:28.614620 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:28:28.614769 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:28:28.616362 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:28:28.616427 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:28:28.637106 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:28:28.638163 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:28:28.638272 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:28:28.640842 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:28:28.646099 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:28:28.646226 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:28:28.649753 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:28:28.649906 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:28:28.663314 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:28:28.663542 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:28:28.666601 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:28:28.666713 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:28:28.671531 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:28:28.671609 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:28:28.674286 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:28:28.674331 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:28:28.676944 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:28:28.677009 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:28:28.679895 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:28:28.679946 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:28:28.682234 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:28:28.682292 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:28:28.700136 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:28:28.702993 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:28:28.703073 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:28:28.706873 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:28:28.708173 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:28:28.710948 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:28:28.711034 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:28:28.714907 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:28:28.714965 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:28:28.718484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:28:28.719524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:28:28.722305 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:28:28.723458 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:28:28.827434 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:28:28.828767 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:28:28.831621 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:28:28.834340 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:28:28.835638 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:28:28.852083 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:28:28.859380 systemd[1]: Switching root.
Dec 13 13:28:28.893303 systemd-journald[193]: Journal stopped
Dec 13 13:28:30.176269 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:28:30.176362 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:28:30.176380 kernel: SELinux: policy capability open_perms=1
Dec 13 13:28:30.176392 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:28:30.176403 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:28:30.176422 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:28:30.176434 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:28:30.176445 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:28:30.176456 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:28:30.176468 kernel: audit: type=1403 audit(1734096509.454:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:28:30.176487 systemd[1]: Successfully loaded SELinux policy in 40.833ms.
Dec 13 13:28:30.176506 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.588ms.
Dec 13 13:28:30.176520 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:28:30.176535 systemd[1]: Detected virtualization kvm.
Dec 13 13:28:30.176547 systemd[1]: Detected architecture x86-64.
Dec 13 13:28:30.176559 systemd[1]: Detected first boot.
Dec 13 13:28:30.176571 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:28:30.176584 zram_generator::config[1055]: No configuration found.
Dec 13 13:28:30.176598 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:28:30.176610 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:28:30.176633 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:28:30.176652 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:28:30.176668 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:28:30.176680 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:28:30.176692 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:28:30.176704 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:28:30.176716 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:28:30.176729 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:28:30.176741 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:28:30.176753 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:28:30.176768 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:28:30.176780 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:28:30.176793 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:28:30.176805 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:28:30.176818 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:28:30.176851 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:28:30.176865 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 13:28:30.176878 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:28:30.176890 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:28:30.176905 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:28:30.176924 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:28:30.176937 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:28:30.176949 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:28:30.176962 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:28:30.176973 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:28:30.176985 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:28:30.176997 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:28:30.177012 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:28:30.177030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:28:30.177045 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:28:30.177057 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:28:30.177075 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:28:30.177087 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:28:30.177099 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:28:30.177111 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:28:30.177123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:28:30.177138 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:28:30.177150 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:28:30.177162 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:28:30.177175 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:28:30.177187 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:28:30.177199 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:28:30.177211 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:28:30.177223 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:28:30.177237 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:28:30.177250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:28:30.177262 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:28:30.177274 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:28:30.177285 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:28:30.177299 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:28:30.177312 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:28:30.177324 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:28:30.177336 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:28:30.177351 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:28:30.177363 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:28:30.177375 kernel: loop: module loaded
Dec 13 13:28:30.177386 kernel: fuse: init (API version 7.39)
Dec 13 13:28:30.177398 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:28:30.177411 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:28:30.177423 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:28:30.177435 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:28:30.177447 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:28:30.177461 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:28:30.177473 systemd[1]: Stopped verity-setup.service.
Dec 13 13:28:30.177486 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:28:30.177533 systemd-journald[1125]: Collecting audit messages is disabled.
Dec 13 13:28:30.177556 kernel: ACPI: bus type drm_connector registered
Dec 13 13:28:30.177568 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:28:30.177581 systemd-journald[1125]: Journal started
Dec 13 13:28:30.177606 systemd-journald[1125]: Runtime Journal (/run/log/journal/68768b4a280c447d9f3cd4ca9e5239fb) is 6.0M, max 48.3M, 42.3M free.
Dec 13 13:28:29.958953 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:28:29.976670 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 13:28:29.977130 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:28:30.179539 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:28:30.180287 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:28:30.181580 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:28:30.182729 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:28:30.184094 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:28:30.185332 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:28:30.186589 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:28:30.188173 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:28:30.189660 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:28:30.189850 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:28:30.191361 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:28:30.191534 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:28:30.192992 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:28:30.193162 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:28:30.194526 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:28:30.194692 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:28:30.196464 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:28:30.196633 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:28:30.198041 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:28:30.198203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:28:30.199671 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:28:30.201084 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:28:30.202750 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:28:30.218670 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:28:30.226990 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:28:30.229304 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:28:30.230484 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:28:30.230516 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:28:30.232502 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:28:30.234768 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:28:30.237374 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:28:30.238576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:28:30.240958 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:28:30.243903 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:28:30.245687 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:28:30.247020 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:28:30.248955 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:28:30.250193 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:28:30.256925 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:28:30.261064 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:28:30.264241 systemd-journald[1125]: Time spent on flushing to /var/log/journal/68768b4a280c447d9f3cd4ca9e5239fb is 28.911ms for 954 entries.
Dec 13 13:28:30.264241 systemd-journald[1125]: System Journal (/var/log/journal/68768b4a280c447d9f3cd4ca9e5239fb) is 8.0M, max 195.6M, 187.6M free.
Dec 13 13:28:30.307859 systemd-journald[1125]: Received client request to flush runtime journal.
Dec 13 13:28:30.307983 kernel: loop0: detected capacity change from 0 to 138184
Dec 13 13:28:30.263758 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:28:30.265925 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:28:30.267784 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:28:30.270127 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:28:30.274547 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:28:30.278092 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:28:30.303614 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:28:30.311093 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:28:30.314227 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:28:30.316230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:28:30.322995 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:28:30.324533 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 13:28:30.330609 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:28:30.331323 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:28:30.337397 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:28:30.354304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:28:30.356844 kernel: loop1: detected capacity change from 0 to 210664
Dec 13 13:28:30.373896 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Dec 13 13:28:30.373940 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Dec 13 13:28:30.380916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:28:30.389924 kernel: loop2: detected capacity change from 0 to 141000
Dec 13 13:28:30.430853 kernel: loop3: detected capacity change from 0 to 138184
Dec 13 13:28:30.443874 kernel: loop4: detected capacity change from 0 to 210664
Dec 13 13:28:30.453845 kernel: loop5: detected capacity change from 0 to 141000
Dec 13 13:28:30.464199 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 13:28:30.464756 (sd-merge)[1196]: Merged extensions into '/usr'.
Dec 13 13:28:30.469999 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:28:30.470018 systemd[1]: Reloading...
Dec 13 13:28:30.532013 zram_generator::config[1222]: No configuration found.
Dec 13 13:28:30.569861 ldconfig[1164]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:28:30.660293 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:28:30.708081 systemd[1]: Reloading finished in 237 ms.
Dec 13 13:28:30.745596 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:28:30.747124 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:28:30.756967 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:28:30.758800 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:28:30.766508 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:28:30.766527 systemd[1]: Reloading...
Dec 13 13:28:30.781210 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:28:30.781497 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:28:30.782487 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:28:30.782781 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Dec 13 13:28:30.782873 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Dec 13 13:28:30.786965 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:28:30.786980 systemd-tmpfiles[1260]: Skipping /boot
Dec 13 13:28:30.804754 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:28:30.804774 systemd-tmpfiles[1260]: Skipping /boot
Dec 13 13:28:30.825877 zram_generator::config[1288]: No configuration found.
Dec 13 13:28:30.940648 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:28:30.988975 systemd[1]: Reloading finished in 222 ms.
Dec 13 13:28:31.009001 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:28:31.021257 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:28:31.030385 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:28:31.033176 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:28:31.037103 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:28:31.041517 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:28:31.045625 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:28:31.049513 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:28:31.054570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:28:31.054744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:28:31.057725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:28:31.064579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:28:31.068851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:28:31.070258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:28:31.072574 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:28:31.073987 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:28:31.075034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:28:31.075454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:28:31.078232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:28:31.078480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:28:31.087973 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:28:31.090655 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:28:31.090685 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Dec 13 13:28:31.090940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:28:31.097508 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:28:31.097703 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:28:31.105378 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:28:31.107180 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:28:31.111624 augenrules[1360]: No rules
Dec 13 13:28:31.113233 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:28:31.113467 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:28:31.116561 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:28:31.121250 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:28:31.133120 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:28:31.134406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:28:31.137413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:28:31.141028 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:28:31.146083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:28:31.149264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:28:31.150732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:28:31.151965 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:28:31.152748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:28:31.154650 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:28:31.157456 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:28:31.166920 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373)
Dec 13 13:28:31.169266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:28:31.169462 augenrules[1368]: /sbin/augenrules: No change
Dec 13 13:28:31.169461 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:28:31.171161 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:28:31.172448 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:28:31.172624 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:28:31.174420 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:28:31.174591 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:28:31.176552 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:28:31.176725 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:28:31.177867 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1373)
Dec 13 13:28:31.186864 augenrules[1418]: No rules
Dec 13 13:28:31.186040 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:28:31.186759 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:28:31.203079 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:28:31.204331 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:28:31.204382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:28:31.207693 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 13:28:31.208786 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:28:31.209943 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 13:28:31.210861 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1373)
Dec 13 13:28:31.229469 systemd-resolved[1329]: Positive Trust Anchors:
Dec 13 13:28:31.229758 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:28:31.229844 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:28:31.234495 systemd-resolved[1329]: Defaulting to hostname 'linux'.
Dec 13 13:28:31.236291 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:28:31.239254 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:28:31.251693 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:28:31.268075 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:28:31.274229 systemd-networkd[1427]: lo: Link UP
Dec 13 13:28:31.274242 systemd-networkd[1427]: lo: Gained carrier
Dec 13 13:28:31.277266 systemd-networkd[1427]: Enumeration completed
Dec 13 13:28:31.277371 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:28:31.278545 systemd[1]: Reached target network.target - Network.
Dec 13 13:28:31.279255 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:28:31.279267 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:28:31.280502 systemd-networkd[1427]: eth0: Link UP
Dec 13 13:28:31.280513 systemd-networkd[1427]: eth0: Gained carrier
Dec 13 13:28:31.280527 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:28:31.285040 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 13:28:31.289069 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 13:28:31.290537 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 13:28:31.292389 kernel: ACPI: button: Power Button [PWRF]
Dec 13 13:28:31.292737 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:28:31.293639 systemd-networkd[1427]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:28:31.958740 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 13:28:31.959866 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 13:28:31.960052 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 13:28:31.295445 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection.
Dec 13 13:28:31.958538 systemd-resolved[1329]: Clock change detected. Flushing caches.
Dec 13 13:28:31.959132 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 13:28:31.959181 systemd-timesyncd[1431]: Initial clock synchronization to Fri 2024-12-13 13:28:31.958505 UTC.
Dec 13 13:28:31.959503 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:28:31.966736 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 13:28:31.999726 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 13:28:31.995974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:28:32.072734 kernel: kvm_amd: TSC scaling supported
Dec 13 13:28:32.072876 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 13:28:32.072899 kernel: kvm_amd: Nested Paging enabled
Dec 13 13:28:32.072956 kernel: kvm_amd: LBR virtualization supported
Dec 13 13:28:32.072977 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 13:28:32.072999 kernel: kvm_amd: Virtual GIF supported
Dec 13 13:28:32.093728 kernel: EDAC MC: Ver: 3.0.0
Dec 13 13:28:32.096666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:28:32.128890 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:28:32.139936 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:28:32.149145 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:28:32.178562 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:28:32.180031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:28:32.181140 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:28:32.182289 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 13:28:32.183516 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 13:28:32.184933 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 13:28:32.186076 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 13:28:32.187296 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 13:28:32.188501 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 13:28:32.188520 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:28:32.189400 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:28:32.191155 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 13:28:32.193800 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 13:28:32.208271 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 13:28:32.210884 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:28:32.212505 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 13:28:32.213653 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:28:32.214623 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:28:32.215598 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:28:32.215627 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:28:32.216597 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 13:28:32.218653 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 13:28:32.222701 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 13:28:32.224819 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:28:32.225757 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 13:28:32.226791 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 13:28:32.230855 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 13:28:32.231982 jq[1458]: false
Dec 13 13:28:32.235880 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 13:28:32.238970 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:28:32.242568 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found loop3
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found loop4
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found loop5
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found sr0
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found vda
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found vda1
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found vda2
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found vda3
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found usr
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found vda4
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found vda6
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found vda7
Dec 13 13:28:32.247519 extend-filesystems[1459]: Found vda9
Dec 13 13:28:32.247519 extend-filesystems[1459]: Checking size of /dev/vda9
Dec 13 13:28:32.311219 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 13:28:32.248815 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 13:28:32.311348 extend-filesystems[1459]: Resized partition /dev/vda9
Dec 13 13:28:32.292961 dbus-daemon[1457]: [system] SELinux support is enabled
Dec 13 13:28:32.336297 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1373)
Dec 13 13:28:32.336337 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 13:28:32.250240 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 13:28:32.337162 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024)
Dec 13 13:28:32.337162 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 13:28:32.337162 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 13:28:32.337162 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 13:28:32.251382 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 13:28:32.357058 extend-filesystems[1459]: Resized filesystem in /dev/vda9
Dec 13 13:28:32.252041 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 13:28:32.362527 jq[1476]: true
Dec 13 13:28:32.256482 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 13:28:32.363684 update_engine[1473]: I20241213 13:28:32.323280 1473 main.cc:92] Flatcar Update Engine starting
Dec 13 13:28:32.363684 update_engine[1473]: I20241213 13:28:32.324539 1473 update_check_scheduler.cc:74] Next update check in 10m33s
Dec 13 13:28:32.266280 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:28:32.268375 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 13:28:32.366112 tar[1482]: linux-amd64/helm
Dec 13 13:28:32.268575 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 13:28:32.268913 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 13:28:32.370493 jq[1483]: true
Dec 13 13:28:32.269103 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 13:28:32.272308 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 13:28:32.272533 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 13:28:32.282279 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 13:28:32.291410 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 13:28:32.298028 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 13:28:32.324302 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 13:28:32.324325 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 13:28:32.327312 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 13:28:32.327333 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 13:28:32.328698 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 13:28:32.344953 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 13:28:32.346833 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 13:28:32.347070 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 13:28:32.358066 systemd-logind[1471]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 13:28:32.358092 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 13:28:32.359891 systemd-logind[1471]: New seat seat0.
Dec 13 13:28:32.366944 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 13:28:32.379839 bash[1510]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:28:32.381899 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:28:32.386648 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 13:28:32.404156 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 13:28:32.478468 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 13:28:32.492182 containerd[1484]: time="2024-12-13T13:28:32.492107329Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 13:28:32.502283 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 13:28:32.510970 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 13:28:32.513065 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:51252.service - OpenSSH per-connection server daemon (10.0.0.1:51252).
Dec 13 13:28:32.520822 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 13:28:32.521141 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 13:28:32.523083 containerd[1484]: time="2024-12-13T13:28:32.523046883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527044 containerd[1484]: time="2024-12-13T13:28:32.525742148Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527044 containerd[1484]: time="2024-12-13T13:28:32.525800318Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 13:28:32.527044 containerd[1484]: time="2024-12-13T13:28:32.525819554Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 13:28:32.527044 containerd[1484]: time="2024-12-13T13:28:32.526003158Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 13:28:32.527044 containerd[1484]: time="2024-12-13T13:28:32.526018427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527044 containerd[1484]: time="2024-12-13T13:28:32.526084310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527044 containerd[1484]: time="2024-12-13T13:28:32.526096293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527044 containerd[1484]: time="2024-12-13T13:28:32.526278585Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527044 containerd[1484]: time="2024-12-13T13:28:32.526292811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527044 containerd[1484]: time="2024-12-13T13:28:32.526304774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527044 containerd[1484]: time="2024-12-13T13:28:32.526313630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527283 containerd[1484]: time="2024-12-13T13:28:32.526401535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527283 containerd[1484]: time="2024-12-13T13:28:32.526624003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527283 containerd[1484]: time="2024-12-13T13:28:32.526762212Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:28:32.527283 containerd[1484]: time="2024-12-13T13:28:32.526775086Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 13:28:32.527283 containerd[1484]: time="2024-12-13T13:28:32.526868251Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 13:28:32.527283 containerd[1484]: time="2024-12-13T13:28:32.526920348Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 13:28:32.531035 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536193903Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536269615Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536286947Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536303859Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536318096Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536482063Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536689963Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536891962Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536907231Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536920656Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536932779Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536944912Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536955952Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 13:28:32.537724 containerd[1484]: time="2024-12-13T13:28:32.536969688Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.536986850Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537001007Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537018790Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537031424Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537050730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537063674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537075777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537087810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537098860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537111183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537122184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537133746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537145859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538002 containerd[1484]: time="2024-12-13T13:28:32.537160556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537171777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537183599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537196644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537210880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537228924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537241328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537253470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537295519Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537310958Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537320456Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537332629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537341475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537352666Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537372574Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 13:28:32.538253 containerd[1484]: time="2024-12-13T13:28:32.537399955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 13:28:32.538529 containerd[1484]: time="2024-12-13T13:28:32.537651988Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 13:28:32.538529 containerd[1484]: time="2024-12-13T13:28:32.537690650Z" level=info msg="Connect containerd service"
Dec 13 13:28:32.538529 containerd[1484]: time="2024-12-13T13:28:32.537742478Z" level=info msg="using legacy CRI server"
Dec 13 13:28:32.538529 containerd[1484]: time="2024-12-13T13:28:32.537750292Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 13:28:32.538529 containerd[1484]: time="2024-12-13T13:28:32.537861721Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 13:28:32.538747 containerd[1484]: time="2024-12-13T13:28:32.538550744Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:28:32.539129 containerd[1484]: time="2024-12-13T13:28:32.538820229Z" level=info msg="Start subscribing containerd event"
Dec 13 13:28:32.539129 containerd[1484]: time="2024-12-13T13:28:32.538892194Z" level=info msg="Start recovering state"
Dec 13 13:28:32.539129 containerd[1484]: time="2024-12-13T13:28:32.538966614Z" level=info msg="Start event monitor"
Dec 13 13:28:32.539129 containerd[1484]: time="2024-12-13T13:28:32.538992051Z" level=info msg="Start snapshots syncer"
Dec 13 13:28:32.539129 containerd[1484]: time="2024-12-13T13:28:32.539002962Z" level=info msg="Start cni network conf syncer for default"
Dec 13 13:28:32.539129 containerd[1484]: time="2024-12-13T13:28:32.539010446Z" level=info msg="Start streaming server"
Dec 13 13:28:32.539129 containerd[1484]: time="2024-12-13T13:28:32.539087731Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 13:28:32.539272 containerd[1484]: time="2024-12-13T13:28:32.539165797Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 13:28:32.539299 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 13:28:32.541809 containerd[1484]: time="2024-12-13T13:28:32.539572190Z" level=info msg="containerd successfully booted in 0.049040s"
Dec 13 13:28:32.543780 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 13:28:32.553142 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 13:28:32.555617 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 13:28:32.556918 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 13:28:32.577306 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 51252 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:28:32.579520 sshd-session[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:28:32.589536 systemd-logind[1471]: New session 1 of user core.
Dec 13 13:28:32.591175 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 13:28:32.602948 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 13:28:32.616287 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 13:28:32.627122 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 13:28:32.631236 (systemd)[1550]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 13:28:32.731685 tar[1482]: linux-amd64/LICENSE
Dec 13 13:28:32.731822 tar[1482]: linux-amd64/README.md
Dec 13 13:28:32.744962 systemd[1550]: Queued start job for default target default.target.
Dec 13 13:28:32.747109 systemd[1550]: Created slice app.slice - User Application Slice.
Dec 13 13:28:32.747139 systemd[1550]: Reached target paths.target - Paths.
Dec 13 13:28:32.747153 systemd[1550]: Reached target timers.target - Timers.
Dec 13 13:28:32.748724 systemd[1550]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 13:28:32.750976 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 13:28:32.784290 systemd[1550]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 13:28:32.784425 systemd[1550]: Reached target sockets.target - Sockets.
Dec 13 13:28:32.784440 systemd[1550]: Reached target basic.target - Basic System.
Dec 13 13:28:32.784478 systemd[1550]: Reached target default.target - Main User Target.
Dec 13 13:28:32.784517 systemd[1550]: Startup finished in 145ms.
Dec 13 13:28:32.785083 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 13:28:32.802081 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 13:28:32.865862 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:51254.service - OpenSSH per-connection server daemon (10.0.0.1:51254).
Dec 13 13:28:32.911967 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 51254 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:28:32.913393 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:28:32.917834 systemd-logind[1471]: New session 2 of user core.
Dec 13 13:28:32.931829 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 13:28:32.987499 sshd[1567]: Connection closed by 10.0.0.1 port 51254
Dec 13 13:28:32.987953 sshd-session[1565]: pam_unix(sshd:session): session closed for user core
Dec 13 13:28:33.002326 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:51254.service: Deactivated successfully.
Dec 13 13:28:33.003849 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 13:28:33.005341 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit.
Dec 13 13:28:33.024944 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:51270.service - OpenSSH per-connection server daemon (10.0.0.1:51270).
Dec 13 13:28:33.027307 systemd-logind[1471]: Removed session 2.
Dec 13 13:28:33.062528 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 51270 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:28:33.063779 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:28:33.067580 systemd-logind[1471]: New session 3 of user core.
Dec 13 13:28:33.080880 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 13:28:33.102846 systemd-networkd[1427]: eth0: Gained IPv6LL
Dec 13 13:28:33.106039 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 13:28:33.108069 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 13:28:33.127944 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 13:28:33.130475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:28:33.132851 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 13:28:33.145161 sshd[1574]: Connection closed by 10.0.0.1 port 51270
Dec 13 13:28:33.146752 sshd-session[1572]: pam_unix(sshd:session): session closed for user core
Dec 13 13:28:33.150242 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:51270.service: Deactivated successfully.
Dec 13 13:28:33.150515 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit.
Dec 13 13:28:33.152735 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 13:28:33.155148 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 13:28:33.156905 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 13:28:33.157103 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 13:28:33.160196 systemd-logind[1471]: Removed session 3.
Dec 13 13:28:33.160221 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 13:28:33.733757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:28:33.735262 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 13:28:33.736533 systemd[1]: Startup finished in 676ms (kernel) + 5.759s (initrd) + 3.660s (userspace) = 10.096s.
Dec 13 13:28:33.745508 agetty[1547]: failed to open credentials directory
Dec 13 13:28:33.769064 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:28:33.776899 agetty[1546]: failed to open credentials directory
Dec 13 13:28:34.517932 kubelet[1600]: E1213 13:28:34.517866 1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:28:34.521797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:28:34.522035 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:28:34.522381 systemd[1]: kubelet.service: Consumed 1.256s CPU time.
Dec 13 13:28:43.159685 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:57944.service - OpenSSH per-connection server daemon (10.0.0.1:57944).
Dec 13 13:28:43.203799 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 57944 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:28:43.205273 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:28:43.209114 systemd-logind[1471]: New session 4 of user core.
Dec 13 13:28:43.222828 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 13:28:43.275782 sshd[1616]: Connection closed by 10.0.0.1 port 57944
Dec 13 13:28:43.276117 sshd-session[1614]: pam_unix(sshd:session): session closed for user core
Dec 13 13:28:43.283403 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:57944.service: Deactivated successfully.
Dec 13 13:28:43.285102 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 13:28:43.286477 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit.
Dec 13 13:28:43.290946 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:57960.service - OpenSSH per-connection server daemon (10.0.0.1:57960).
Dec 13 13:28:43.291752 systemd-logind[1471]: Removed session 4.
Dec 13 13:28:43.334671 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 57960 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:28:43.336417 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:28:43.340634 systemd-logind[1471]: New session 5 of user core.
Dec 13 13:28:43.361833 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 13:28:43.410957 sshd[1623]: Connection closed by 10.0.0.1 port 57960
Dec 13 13:28:43.411274 sshd-session[1621]: pam_unix(sshd:session): session closed for user core
Dec 13 13:28:43.421496 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:57960.service: Deactivated successfully.
Dec 13 13:28:43.423081 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 13:28:43.424620 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit.
Dec 13 13:28:43.431116 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:57964.service - OpenSSH per-connection server daemon (10.0.0.1:57964).
Dec 13 13:28:43.432034 systemd-logind[1471]: Removed session 5.
Dec 13 13:28:43.468194 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 57964 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:28:43.469828 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:28:43.473798 systemd-logind[1471]: New session 6 of user core.
Dec 13 13:28:43.489844 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 13:28:43.543720 sshd[1630]: Connection closed by 10.0.0.1 port 57964
Dec 13 13:28:43.544200 sshd-session[1628]: pam_unix(sshd:session): session closed for user core
Dec 13 13:28:43.551433 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:57964.service: Deactivated successfully.
Dec 13 13:28:43.553172 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 13:28:43.554610 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit.
Dec 13 13:28:43.555818 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:57978.service - OpenSSH per-connection server daemon (10.0.0.1:57978).
Dec 13 13:28:43.556452 systemd-logind[1471]: Removed session 6.
Dec 13 13:28:43.611774 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 57978 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:28:43.613081 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:28:43.616813 systemd-logind[1471]: New session 7 of user core.
Dec 13 13:28:43.626822 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 13:28:43.685372 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 13:28:43.685735 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:28:43.706164 sudo[1638]: pam_unix(sudo:session): session closed for user root
Dec 13 13:28:43.707839 sshd[1637]: Connection closed by 10.0.0.1 port 57978
Dec 13 13:28:43.708321 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Dec 13 13:28:43.717535 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:57978.service: Deactivated successfully.
Dec 13 13:28:43.719249 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 13:28:43.720985 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit.
Dec 13 13:28:43.722249 systemd[1]: Started sshd@7-10.0.0.99:22-10.0.0.1:57984.service - OpenSSH per-connection server daemon (10.0.0.1:57984).
Dec 13 13:28:43.723113 systemd-logind[1471]: Removed session 7.
Dec 13 13:28:43.778121 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 57984 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:28:43.779637 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:28:43.783923 systemd-logind[1471]: New session 8 of user core.
Dec 13 13:28:43.797897 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 13:28:43.852513 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 13:28:43.852861 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:28:43.857054 sudo[1647]: pam_unix(sudo:session): session closed for user root
Dec 13 13:28:43.863194 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 13 13:28:43.863538 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:28:43.889960 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:28:43.921018 augenrules[1669]: No rules
Dec 13 13:28:43.922919 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:28:43.923153 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:28:43.924347 sudo[1646]: pam_unix(sudo:session): session closed for user root
Dec 13 13:28:43.925741 sshd[1645]: Connection closed by 10.0.0.1 port 57984
Dec 13 13:28:43.926080 sshd-session[1643]: pam_unix(sshd:session): session closed for user core
Dec 13 13:28:43.941638 systemd[1]: sshd@7-10.0.0.99:22-10.0.0.1:57984.service: Deactivated successfully.
Dec 13 13:28:43.943364 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 13:28:43.944934 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit.
Dec 13 13:28:43.954009 systemd[1]: Started sshd@8-10.0.0.99:22-10.0.0.1:57986.service - OpenSSH per-connection server daemon (10.0.0.1:57986).
Dec 13 13:28:43.954812 systemd-logind[1471]: Removed session 8.
Dec 13 13:28:43.992135 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 57986 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:28:43.993659 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:28:43.997530 systemd-logind[1471]: New session 9 of user core.
Dec 13 13:28:44.011813 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 13:28:44.064685 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 13:28:44.065064 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:28:44.336996 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 13:28:44.337078 (dockerd)[1700]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 13:28:44.576384 dockerd[1700]: time="2024-12-13T13:28:44.576314087Z" level=info msg="Starting up"
Dec 13 13:28:44.577339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:28:44.584881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:28:44.851417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:28:44.855778 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:28:44.912519 kubelet[1732]: E1213 13:28:44.912469 1732 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:28:44.918893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:28:44.919093 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:28:44.986901 dockerd[1700]: time="2024-12-13T13:28:44.986866596Z" level=info msg="Loading containers: start."
Dec 13 13:28:45.148735 kernel: Initializing XFRM netlink socket
Dec 13 13:28:45.226315 systemd-networkd[1427]: docker0: Link UP
Dec 13 13:28:45.258084 dockerd[1700]: time="2024-12-13T13:28:45.258037746Z" level=info msg="Loading containers: done."
Dec 13 13:28:45.273213 dockerd[1700]: time="2024-12-13T13:28:45.273168356Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 13:28:45.273370 dockerd[1700]: time="2024-12-13T13:28:45.273259688Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Dec 13 13:28:45.273398 dockerd[1700]: time="2024-12-13T13:28:45.273369564Z" level=info msg="Daemon has completed initialization"
Dec 13 13:28:45.308236 dockerd[1700]: time="2024-12-13T13:28:45.308174167Z" level=info msg="API listen on /run/docker.sock"
Dec 13 13:28:45.308465 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 13:28:45.949967 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck788772928-merged.mount: Deactivated successfully.
Dec 13 13:28:46.029037 containerd[1484]: time="2024-12-13T13:28:46.028488391Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 13 13:28:46.874490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3324871378.mount: Deactivated successfully.
Dec 13 13:28:48.689044 containerd[1484]: time="2024-12-13T13:28:48.688982327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:48.689811 containerd[1484]: time="2024-12-13T13:28:48.689770856Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642"
Dec 13 13:28:48.691029 containerd[1484]: time="2024-12-13T13:28:48.691001023Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:48.693455 containerd[1484]: time="2024-12-13T13:28:48.693425321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:48.694433 containerd[1484]: time="2024-12-13T13:28:48.694381033Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.66585375s"
Dec 13 13:28:48.694433 containerd[1484]: time="2024-12-13T13:28:48.694428933Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Dec 13 13:28:48.714674 containerd[1484]: time="2024-12-13T13:28:48.714640725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 13:28:51.223081 containerd[1484]: time="2024-12-13T13:28:51.222996894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:51.224256 containerd[1484]: time="2024-12-13T13:28:51.224217083Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409"
Dec 13 13:28:51.225612 containerd[1484]: time="2024-12-13T13:28:51.225577034Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:51.230110 containerd[1484]: time="2024-12-13T13:28:51.230043852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:51.231461 containerd[1484]: time="2024-12-13T13:28:51.231409674Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.516735136s"
Dec 13 13:28:51.231512 containerd[1484]: time="2024-12-13T13:28:51.231457494Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Dec 13 13:28:51.255790 containerd[1484]: time="2024-12-13T13:28:51.255746984Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 13:28:52.422604 containerd[1484]: time="2024-12-13T13:28:52.422547769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:52.423321 containerd[1484]: time="2024-12-13T13:28:52.423279351Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035"
Dec 13 13:28:52.424325 containerd[1484]: time="2024-12-13T13:28:52.424289746Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:52.426921 containerd[1484]: time="2024-12-13T13:28:52.426901174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:52.427807 containerd[1484]: time="2024-12-13T13:28:52.427769523Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.171816463s"
Dec 13 13:28:52.427807 containerd[1484]: time="2024-12-13T13:28:52.427799059Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Dec 13 13:28:52.473242 containerd[1484]: time="2024-12-13T13:28:52.473193916Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 13:28:53.897189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3565042876.mount: Deactivated successfully.
Dec 13 13:28:54.730449 containerd[1484]: time="2024-12-13T13:28:54.730373039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:54.731378 containerd[1484]: time="2024-12-13T13:28:54.731331737Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470"
Dec 13 13:28:54.732596 containerd[1484]: time="2024-12-13T13:28:54.732549602Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:54.734978 containerd[1484]: time="2024-12-13T13:28:54.734873922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:54.737722 containerd[1484]: time="2024-12-13T13:28:54.736327648Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.263094228s"
Dec 13 13:28:54.737722 containerd[1484]: time="2024-12-13T13:28:54.736368224Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 13:28:54.763966 containerd[1484]: time="2024-12-13T13:28:54.763911017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 13:28:55.124296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 13:28:55.134029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:28:55.291061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:28:55.295354 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:28:55.512128 kubelet[2028]: E1213 13:28:55.511891 2028 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:28:55.516752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:28:55.516986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:28:55.777872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293048800.mount: Deactivated successfully.
Dec 13 13:28:56.507853 containerd[1484]: time="2024-12-13T13:28:56.507803781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:56.508613 containerd[1484]: time="2024-12-13T13:28:56.508577272Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 13:28:56.509854 containerd[1484]: time="2024-12-13T13:28:56.509810375Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:56.512485 containerd[1484]: time="2024-12-13T13:28:56.512448844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:56.513591 containerd[1484]: time="2024-12-13T13:28:56.513560399Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.749599439s"
Dec 13 13:28:56.513591 containerd[1484]: time="2024-12-13T13:28:56.513588502Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 13:28:56.534899 containerd[1484]: time="2024-12-13T13:28:56.534854591Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 13:28:57.102814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480748799.mount: Deactivated successfully.
Dec 13 13:28:57.108817 containerd[1484]: time="2024-12-13T13:28:57.108776665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:57.109598 containerd[1484]: time="2024-12-13T13:28:57.109545026Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Dec 13 13:28:57.110636 containerd[1484]: time="2024-12-13T13:28:57.110599904Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:57.115019 containerd[1484]: time="2024-12-13T13:28:57.113255025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:57.115555 containerd[1484]: time="2024-12-13T13:28:57.115509774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 580.431634ms"
Dec 13 13:28:57.115555 containerd[1484]: time="2024-12-13T13:28:57.115547925Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 13:28:57.137323 containerd[1484]: time="2024-12-13T13:28:57.137273767Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 13:28:57.686495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255115610.mount: Deactivated successfully.
Dec 13 13:29:00.504236 containerd[1484]: time="2024-12-13T13:29:00.504161075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:29:00.505002 containerd[1484]: time="2024-12-13T13:29:00.504953942Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Dec 13 13:29:00.506298 containerd[1484]: time="2024-12-13T13:29:00.506264791Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:29:00.509255 containerd[1484]: time="2024-12-13T13:29:00.509223741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:29:00.510214 containerd[1484]: time="2024-12-13T13:29:00.510173873Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.372857616s"
Dec 13 13:29:00.510255 containerd[1484]: time="2024-12-13T13:29:00.510215301Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Dec 13 13:29:03.084190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:29:03.093938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:29:03.113804 systemd[1]: Reloading requested from client PID 2229 ('systemctl') (unit session-9.scope)...
Dec 13 13:29:03.113820 systemd[1]: Reloading...
Dec 13 13:29:03.217732 zram_generator::config[2271]: No configuration found.
Dec 13 13:29:03.901877 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:29:03.979927 systemd[1]: Reloading finished in 865 ms.
Dec 13 13:29:04.028746 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:29:04.034357 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 13:29:04.034625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:29:04.044101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:29:04.189737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:29:04.195186 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 13:29:04.247900 kubelet[2318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:29:04.247900 kubelet[2318]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 13:29:04.247900 kubelet[2318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:29:04.248305 kubelet[2318]: I1213 13:29:04.247943 2318 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 13:29:04.610742 kubelet[2318]: I1213 13:29:04.610633 2318 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 13:29:04.610742 kubelet[2318]: I1213 13:29:04.610660 2318 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 13:29:04.610872 kubelet[2318]: I1213 13:29:04.610859 2318 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 13:29:04.624411 kubelet[2318]: I1213 13:29:04.624372 2318 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:29:04.624733 kubelet[2318]: E1213 13:29:04.624699 2318 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.99:6443: connect: connection refused
Dec 13 13:29:04.634263 kubelet[2318]: I1213 13:29:04.634199 2318 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 13:29:04.636185 kubelet[2318]: I1213 13:29:04.636125 2318 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 13:29:04.636313 kubelet[2318]: I1213 13:29:04.636157 2318 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 13:29:04.636414 kubelet[2318]: I1213 13:29:04.636321 2318 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 13:29:04.636414 kubelet[2318]: I1213 13:29:04.636331 2318 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 13:29:04.636468 kubelet[2318]: I1213 13:29:04.636456 2318 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:29:04.637102 kubelet[2318]: I1213 13:29:04.637077 2318 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 13:29:04.637102 kubelet[2318]: I1213 13:29:04.637093 2318 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 13:29:04.637157 kubelet[2318]: I1213 13:29:04.637112 2318 kubelet.go:312] "Adding apiserver pod source"
Dec 13 13:29:04.637157 kubelet[2318]: I1213 13:29:04.637129 2318 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 13:29:04.637454 kubelet[2318]: W1213 13:29:04.637417 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Dec 13 13:29:04.637494 kubelet[2318]: E1213 13:29:04.637463 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Dec 13 13:29:04.638978 kubelet[2318]: W1213 13:29:04.638934 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Dec 13 13:29:04.638978 kubelet[2318]: E1213 13:29:04.638982 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Dec 13 13:29:04.640764 kubelet[2318]: I1213 13:29:04.640733 2318 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 13:29:04.643160 kubelet[2318]: I1213 13:29:04.642326 2318 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 13:29:04.643160 kubelet[2318]: W1213 13:29:04.642393 2318 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 13:29:04.643307 kubelet[2318]: I1213 13:29:04.643290 2318 server.go:1264] "Started kubelet"
Dec 13 13:29:04.646005 kubelet[2318]: I1213 13:29:04.644521 2318 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 13:29:04.646005 kubelet[2318]: I1213 13:29:04.645385 2318 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 13:29:04.646005 kubelet[2318]: I1213 13:29:04.645854 2318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 13:29:04.646187 kubelet[2318]: I1213 13:29:04.646141 2318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 13:29:04.646394 kubelet[2318]: I1213 13:29:04.646373 2318 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 13:29:04.647067 kubelet[2318]: E1213 13:29:04.646822 2318 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:29:04.647067 kubelet[2318]: I1213 13:29:04.646869 2318 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 13:29:04.647067 kubelet[2318]: I1213 13:29:04.646952 2318 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 13:29:04.647067 kubelet[2318]: I1213 13:29:04.647003 2318 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 13:29:04.647424 kubelet[2318]: W1213 13:29:04.647273 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Dec 13 13:29:04.647424 kubelet[2318]: E1213 13:29:04.647313 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Dec 13 13:29:04.647746 kubelet[2318]: E1213 13:29:04.647717 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms"
Dec 13 13:29:04.648548 kubelet[2318]: E1213 13:29:04.648456 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bf9ea5af837b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:29:04.643122043 +0000 UTC m=+0.443781178,LastTimestamp:2024-12-13 13:29:04.643122043 +0000 UTC m=+0.443781178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 13:29:04.649104 kubelet[2318]: I1213 13:29:04.649088 2318 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 13:29:04.649603 kubelet[2318]: E1213 13:29:04.649546 2318 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 13:29:04.649962 kubelet[2318]: I1213 13:29:04.649933 2318 factory.go:221] Registration of the containerd container factory successfully
Dec 13 13:29:04.649962 kubelet[2318]: I1213 13:29:04.649948 2318 factory.go:221] Registration of the systemd container factory successfully
Dec 13 13:29:04.662023 kubelet[2318]: I1213 13:29:04.661982 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 13:29:04.663222 kubelet[2318]: I1213 13:29:04.663185 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 13:29:04.663222 kubelet[2318]: I1213 13:29:04.663223 2318 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 13:29:04.663302 kubelet[2318]: I1213 13:29:04.663245 2318 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 13:29:04.663302 kubelet[2318]: E1213 13:29:04.663280 2318 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 13:29:04.663998 kubelet[2318]: W1213 13:29:04.663814 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Dec 13 13:29:04.663998 kubelet[2318]: E1213 13:29:04.663857 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused
Dec 13 13:29:04.664669 kubelet[2318]: I1213 13:29:04.664649 2318 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 13:29:04.664771 kubelet[2318]: I1213 13:29:04.664761 2318 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 13:29:04.664841 kubelet[2318]: I1213 13:29:04.664832 2318 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:29:04.748976 kubelet[2318]: I1213 13:29:04.748946 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:29:04.749305 kubelet[2318]: E1213 13:29:04.749264 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost"
Dec 13 13:29:04.763346 kubelet[2318]: E1213 13:29:04.763315 2318 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 13:29:04.849258 kubelet[2318]: E1213 13:29:04.849198 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms"
Dec 13 13:29:04.950909 kubelet[2318]: I1213 13:29:04.950862 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:29:04.951305 kubelet[2318]: E1213 13:29:04.951267 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost"
Dec 13 13:29:04.964466 kubelet[2318]: E1213 13:29:04.964424 2318 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 13:29:04.984683 kubelet[2318]: I1213 13:29:04.984642 2318 policy_none.go:49] "None policy: Start"
Dec 13 13:29:04.985409 kubelet[2318]: I1213 13:29:04.985387 2318 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 13:29:04.985448 kubelet[2318]: I1213 13:29:04.985415 2318 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 13:29:04.992624 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 13:29:05.006797 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 13:29:05.009852 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 13:29:05.019518 kubelet[2318]: I1213 13:29:05.019486 2318 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:29:05.019756 kubelet[2318]: I1213 13:29:05.019722 2318 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:29:05.019931 kubelet[2318]: I1213 13:29:05.019849 2318 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:29:05.020882 kubelet[2318]: E1213 13:29:05.020857 2318 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:29:05.249750 kubelet[2318]: E1213 13:29:05.249617 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms" Dec 13 13:29:05.353149 kubelet[2318]: I1213 13:29:05.353108 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:29:05.353478 kubelet[2318]: E1213 13:29:05.353441 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Dec 13 13:29:05.364547 kubelet[2318]: I1213 13:29:05.364511 2318 topology_manager.go:215] "Topology Admit Handler" podUID="9d2e9130dfba75db5131d7db10b63e50" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:29:05.365260 kubelet[2318]: I1213 13:29:05.365230 2318 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:29:05.366068 kubelet[2318]: I1213 13:29:05.366046 2318 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:29:05.371537 systemd[1]: Created slice kubepods-burstable-pod9d2e9130dfba75db5131d7db10b63e50.slice - libcontainer container kubepods-burstable-pod9d2e9130dfba75db5131d7db10b63e50.slice. Dec 13 13:29:05.383438 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Dec 13 13:29:05.397423 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. 
Dec 13 13:29:05.452194 kubelet[2318]: I1213 13:29:05.452167 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:29:05.452246 kubelet[2318]: I1213 13:29:05.452195 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:29:05.452246 kubelet[2318]: I1213 13:29:05.452216 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:29:05.452246 kubelet[2318]: I1213 13:29:05.452233 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d2e9130dfba75db5131d7db10b63e50-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d2e9130dfba75db5131d7db10b63e50\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:29:05.452351 kubelet[2318]: I1213 13:29:05.452247 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:29:05.452351 kubelet[2318]: I1213 13:29:05.452268 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:29:05.452351 kubelet[2318]: I1213 13:29:05.452289 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:29:05.452351 kubelet[2318]: I1213 13:29:05.452309 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d2e9130dfba75db5131d7db10b63e50-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d2e9130dfba75db5131d7db10b63e50\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:29:05.452351 kubelet[2318]: I1213 13:29:05.452327 2318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d2e9130dfba75db5131d7db10b63e50-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d2e9130dfba75db5131d7db10b63e50\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 13:29:05.682287 kubelet[2318]: E1213 13:29:05.682254 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:05.682902 containerd[1484]: time="2024-12-13T13:29:05.682854003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d2e9130dfba75db5131d7db10b63e50,Namespace:kube-system,Attempt:0,}" Dec 13 13:29:05.696009 kubelet[2318]: E1213 13:29:05.695979 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:05.696379 containerd[1484]: time="2024-12-13T13:29:05.696337422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 13:29:05.699517 kubelet[2318]: E1213 13:29:05.699495 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:05.699799 containerd[1484]: time="2024-12-13T13:29:05.699770018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 13:29:05.777809 kubelet[2318]: W1213 13:29:05.777765 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Dec 13 13:29:05.777809 kubelet[2318]: E1213 13:29:05.777800 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Dec 13 13:29:05.963379 kubelet[2318]: W1213 13:29:05.963232 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Dec 13 13:29:05.963379 kubelet[2318]: E1213 13:29:05.963299 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Dec 13 13:29:06.051091 kubelet[2318]: E1213 13:29:06.051043 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="1.6s" Dec 13 13:29:06.064695 kubelet[2318]: W1213 13:29:06.064626 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Dec 13 13:29:06.064695 kubelet[2318]: E1213 13:29:06.064694 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Dec 13 13:29:06.155647 kubelet[2318]: I1213 13:29:06.155578 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:29:06.156024 kubelet[2318]: E1213 13:29:06.155980 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Dec 13 13:29:06.208723 kubelet[2318]: W1213 13:29:06.208645 2318 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Dec 13 13:29:06.208723 kubelet[2318]: E1213 13:29:06.208695 2318 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Dec 13 13:29:06.514053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2680902744.mount: Deactivated successfully. Dec 13 13:29:06.522076 containerd[1484]: time="2024-12-13T13:29:06.522013268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:29:06.524958 containerd[1484]: time="2024-12-13T13:29:06.524924816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 13:29:06.526016 containerd[1484]: time="2024-12-13T13:29:06.525982611Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:29:06.527837 containerd[1484]: time="2024-12-13T13:29:06.527812166Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:29:06.528528 containerd[1484]: time="2024-12-13T13:29:06.528496756Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:29:06.529548 containerd[1484]: time="2024-12-13T13:29:06.529482852Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:29:06.530282 containerd[1484]: time="2024-12-13T13:29:06.530238229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:29:06.532379 containerd[1484]: time="2024-12-13T13:29:06.532350414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:29:06.534129 containerd[1484]: time="2024-12-13T13:29:06.534107348Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 834.280259ms" Dec 13 13:29:06.534738 containerd[1484]: time="2024-12-13T13:29:06.534702955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 838.276198ms" Dec 13 13:29:06.535257 containerd[1484]: time="2024-12-13T13:29:06.535232855Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 852.283947ms" Dec 13 13:29:06.720522 containerd[1484]: time="2024-12-13T13:29:06.720055541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:29:06.720522 containerd[1484]: time="2024-12-13T13:29:06.720144053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:29:06.720522 containerd[1484]: time="2024-12-13T13:29:06.720162890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:06.720522 containerd[1484]: time="2024-12-13T13:29:06.720287281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:06.725295 containerd[1484]: time="2024-12-13T13:29:06.723787303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:29:06.725295 containerd[1484]: time="2024-12-13T13:29:06.723826879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:29:06.725295 containerd[1484]: time="2024-12-13T13:29:06.723838542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:06.725295 containerd[1484]: time="2024-12-13T13:29:06.723905713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:06.730054 containerd[1484]: time="2024-12-13T13:29:06.729933916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:29:06.730054 containerd[1484]: time="2024-12-13T13:29:06.730006176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:29:06.730054 containerd[1484]: time="2024-12-13T13:29:06.730019261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:06.730368 containerd[1484]: time="2024-12-13T13:29:06.730329965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:06.799076 kubelet[2318]: E1213 13:29:06.798872 2318 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.99:6443: connect: connection refused Dec 13 13:29:06.860018 systemd[1]: Started cri-containerd-2b2945c073e6b9e850e3bb91c86cf1031d55ac5d7445e543b9677c30c95ee421.scope - libcontainer container 2b2945c073e6b9e850e3bb91c86cf1031d55ac5d7445e543b9677c30c95ee421. Dec 13 13:29:06.864958 systemd[1]: Started cri-containerd-4260392934194bfbd1f17c954611c61946938b1f0e3f1be096b509330b78abd1.scope - libcontainer container 4260392934194bfbd1f17c954611c61946938b1f0e3f1be096b509330b78abd1. Dec 13 13:29:06.898968 systemd[1]: Started cri-containerd-b89f1aa45639ce8315b2fc43db9d57ebafd86afa0617581546cef8ca03221647.scope - libcontainer container b89f1aa45639ce8315b2fc43db9d57ebafd86afa0617581546cef8ca03221647. Dec 13 13:29:06.917598 containerd[1484]: time="2024-12-13T13:29:06.917559388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d2e9130dfba75db5131d7db10b63e50,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b2945c073e6b9e850e3bb91c86cf1031d55ac5d7445e543b9677c30c95ee421\"" Dec 13 13:29:06.919787 kubelet[2318]: E1213 13:29:06.919318 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:06.923835 containerd[1484]: time="2024-12-13T13:29:06.923791798Z" level=info msg="CreateContainer within sandbox \"2b2945c073e6b9e850e3bb91c86cf1031d55ac5d7445e543b9677c30c95ee421\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:29:06.948256 containerd[1484]: time="2024-12-13T13:29:06.948208362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"b89f1aa45639ce8315b2fc43db9d57ebafd86afa0617581546cef8ca03221647\"" Dec 13 13:29:06.949442 kubelet[2318]: E1213 13:29:06.949328 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:06.952193 containerd[1484]: time="2024-12-13T13:29:06.952138318Z" level=info msg="CreateContainer within sandbox \"b89f1aa45639ce8315b2fc43db9d57ebafd86afa0617581546cef8ca03221647\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:29:06.991377 containerd[1484]: time="2024-12-13T13:29:06.991335300Z" level=info msg="CreateContainer within sandbox \"2b2945c073e6b9e850e3bb91c86cf1031d55ac5d7445e543b9677c30c95ee421\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8871ce5483a9ba53af69e8daa323b45c720107df01699a6ac8bb8ddf2b811401\"" Dec 13 13:29:06.992029 containerd[1484]: time="2024-12-13T13:29:06.991992949Z" level=info msg="StartContainer for \"8871ce5483a9ba53af69e8daa323b45c720107df01699a6ac8bb8ddf2b811401\"" Dec 13 13:29:06.992992 containerd[1484]: time="2024-12-13T13:29:06.992953996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"4260392934194bfbd1f17c954611c61946938b1f0e3f1be096b509330b78abd1\"" Dec 13 13:29:06.993524 kubelet[2318]: E1213 13:29:06.993501 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:06.995633 containerd[1484]: time="2024-12-13T13:29:06.995595238Z" level=info msg="CreateContainer within sandbox \"4260392934194bfbd1f17c954611c61946938b1f0e3f1be096b509330b78abd1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:29:06.996953 containerd[1484]: time="2024-12-13T13:29:06.996921274Z" level=info msg="CreateContainer within sandbox \"b89f1aa45639ce8315b2fc43db9d57ebafd86afa0617581546cef8ca03221647\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"47d2b87dc8857087053b1e21e9bb2efac4afa819db0de7cbea1b34bcb092016b\"" Dec 13 13:29:06.997297 containerd[1484]: time="2024-12-13T13:29:06.997269712Z" level=info msg="StartContainer for \"47d2b87dc8857087053b1e21e9bb2efac4afa819db0de7cbea1b34bcb092016b\"" Dec 13 13:29:07.016430 containerd[1484]: time="2024-12-13T13:29:07.016351103Z" level=info msg="CreateContainer within sandbox \"4260392934194bfbd1f17c954611c61946938b1f0e3f1be096b509330b78abd1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"026fd361ead4f8872d7098fb4d52b2a4db68a038c9db5cdb07701529f0bbf1db\"" Dec 13 13:29:07.017540 containerd[1484]: time="2024-12-13T13:29:07.017377683Z" level=info msg="StartContainer for \"026fd361ead4f8872d7098fb4d52b2a4db68a038c9db5cdb07701529f0bbf1db\"" Dec 13 13:29:07.026877 systemd[1]: Started cri-containerd-8871ce5483a9ba53af69e8daa323b45c720107df01699a6ac8bb8ddf2b811401.scope - libcontainer container 8871ce5483a9ba53af69e8daa323b45c720107df01699a6ac8bb8ddf2b811401. Dec 13 13:29:07.030387 systemd[1]: Started cri-containerd-47d2b87dc8857087053b1e21e9bb2efac4afa819db0de7cbea1b34bcb092016b.scope - libcontainer container 47d2b87dc8857087053b1e21e9bb2efac4afa819db0de7cbea1b34bcb092016b. Dec 13 13:29:07.057825 systemd[1]: Started cri-containerd-026fd361ead4f8872d7098fb4d52b2a4db68a038c9db5cdb07701529f0bbf1db.scope - libcontainer container 026fd361ead4f8872d7098fb4d52b2a4db68a038c9db5cdb07701529f0bbf1db. 
Dec 13 13:29:07.080042 containerd[1484]: time="2024-12-13T13:29:07.079991463Z" level=info msg="StartContainer for \"8871ce5483a9ba53af69e8daa323b45c720107df01699a6ac8bb8ddf2b811401\" returns successfully" Dec 13 13:29:07.080115 containerd[1484]: time="2024-12-13T13:29:07.080010019Z" level=info msg="StartContainer for \"47d2b87dc8857087053b1e21e9bb2efac4afa819db0de7cbea1b34bcb092016b\" returns successfully" Dec 13 13:29:07.119740 containerd[1484]: time="2024-12-13T13:29:07.119686036Z" level=info msg="StartContainer for \"026fd361ead4f8872d7098fb4d52b2a4db68a038c9db5cdb07701529f0bbf1db\" returns successfully" Dec 13 13:29:07.673400 kubelet[2318]: E1213 13:29:07.673174 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:07.676486 kubelet[2318]: E1213 13:29:07.676215 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:07.676955 kubelet[2318]: E1213 13:29:07.676942 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:07.757538 kubelet[2318]: I1213 13:29:07.757494 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:29:08.681616 kubelet[2318]: E1213 13:29:08.681574 2318 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 13:29:08.682651 kubelet[2318]: E1213 13:29:08.682620 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:08.818716 kubelet[2318]: I1213 13:29:08.818643 2318 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:29:08.827795 kubelet[2318]: E1213 13:29:08.827757 2318 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:29:08.928329 kubelet[2318]: E1213 13:29:08.928286 2318 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:29:09.029057 kubelet[2318]: E1213 13:29:09.028929 2318 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:29:09.129192 kubelet[2318]: E1213 13:29:09.129149 2318 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:29:09.640353 kubelet[2318]: I1213 13:29:09.640318 2318 apiserver.go:52] "Watching apiserver" Dec 13 13:29:09.647633 kubelet[2318]: I1213 13:29:09.647608 2318 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:29:09.693179 kubelet[2318]: E1213 13:29:09.693137 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:10.682003 kubelet[2318]: E1213 13:29:10.681955 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:11.089619 kubelet[2318]: E1213 13:29:11.089501 2318 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:11.624751 systemd[1]: Reloading requested from client PID 2596 ('systemctl') (unit session-9.scope)... Dec 13 13:29:11.624765 systemd[1]: Reloading... Dec 13 13:29:11.682906 kubelet[2318]: E1213 13:29:11.682871 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:11.707787 zram_generator::config[2638]: No configuration found. Dec 13 13:29:11.810927 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:29:11.899507 systemd[1]: Reloading finished in 274 ms. Dec 13 13:29:11.947045 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:29:11.963969 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:29:11.964240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:29:11.976954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:29:12.129673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:29:12.134671 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:29:12.467125 kubelet[2680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:29:12.467125 kubelet[2680]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:29:12.467125 kubelet[2680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:29:12.467520 kubelet[2680]: I1213 13:29:12.467192 2680 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:29:12.472215 kubelet[2680]: I1213 13:29:12.472172 2680 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:29:12.472215 kubelet[2680]: I1213 13:29:12.472195 2680 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:29:12.472385 kubelet[2680]: I1213 13:29:12.472377 2680 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:29:12.473962 kubelet[2680]: I1213 13:29:12.473933 2680 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:29:12.475155 kubelet[2680]: I1213 13:29:12.475101 2680 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:29:12.485110 kubelet[2680]: I1213 13:29:12.485062 2680 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:29:12.485360 kubelet[2680]: I1213 13:29:12.485277 2680 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:29:12.485496 kubelet[2680]: I1213 13:29:12.485314 2680 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:29:12.485496 kubelet[2680]: I1213 13:29:12.485496 2680 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:29:12.485650 kubelet[2680]: I1213 13:29:12.485506 2680 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:29:12.485650 kubelet[2680]: I1213 13:29:12.485556 2680 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:29:12.485744 kubelet[2680]: I1213 13:29:12.485652 2680 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:29:12.485744 kubelet[2680]: I1213 13:29:12.485665 2680 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:29:12.485744 kubelet[2680]: I1213 13:29:12.485686 2680 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:29:12.485744 kubelet[2680]: I1213 13:29:12.485717 2680 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:29:12.491753 kubelet[2680]: I1213 13:29:12.489065 2680 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:29:12.491753 kubelet[2680]: I1213 13:29:12.489332 2680 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:29:12.491753 kubelet[2680]: I1213 13:29:12.489896 2680 server.go:1264] "Started kubelet" Dec 13 13:29:12.491753 kubelet[2680]: I1213 13:29:12.490379 2680 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:29:12.491753 kubelet[2680]: I1213 13:29:12.490444 2680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:29:12.491753 kubelet[2680]: I1213 13:29:12.490791 2680 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:29:12.491753 kubelet[2680]: I1213 13:29:12.491208 2680 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:29:12.492785 kubelet[2680]: I1213 13:29:12.492771 2680 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:29:12.493871 kubelet[2680]: E1213 13:29:12.493842 2680 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:29:12.496178 kubelet[2680]: I1213 13:29:12.496124 2680 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:29:12.496275 kubelet[2680]: I1213 13:29:12.496207 2680 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:29:12.496354 kubelet[2680]: I1213 13:29:12.496343 2680 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:29:12.500608 kubelet[2680]: I1213 13:29:12.500556 2680 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:29:12.500608 kubelet[2680]: I1213 13:29:12.500595 2680 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:29:12.501037 kubelet[2680]: I1213 13:29:12.500998 2680 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:29:12.508048 kubelet[2680]: I1213 13:29:12.507993 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:29:12.509185 kubelet[2680]: I1213 13:29:12.509170 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:29:12.509233 kubelet[2680]: I1213 13:29:12.509198 2680 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:29:12.509233 kubelet[2680]: I1213 13:29:12.509214 2680 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:29:12.509286 kubelet[2680]: E1213 13:29:12.509252 2680 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:29:12.536988 kubelet[2680]: I1213 13:29:12.536955 2680 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:29:12.536988 kubelet[2680]: I1213 13:29:12.536979 2680 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:29:12.537124 kubelet[2680]: I1213 13:29:12.537001 2680 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:29:12.537191 kubelet[2680]: I1213 13:29:12.537175 2680 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:29:12.537217 kubelet[2680]: I1213 13:29:12.537192 2680 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:29:12.537217 kubelet[2680]: I1213 13:29:12.537212 2680 policy_none.go:49] "None policy: Start" Dec 13 13:29:12.537715 kubelet[2680]: I1213 13:29:12.537685 2680 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:29:12.537759 kubelet[2680]: I1213 13:29:12.537754 2680 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:29:12.537910 kubelet[2680]: I1213 13:29:12.537890 2680 state_mem.go:75] "Updated machine memory state" Dec 13 13:29:12.542280 kubelet[2680]: I1213 13:29:12.542250 2680 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:29:12.542806 
kubelet[2680]: I1213 13:29:12.542460 2680 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:29:12.542806 kubelet[2680]: I1213 13:29:12.542651 2680 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:29:12.610444 kubelet[2680]: I1213 13:29:12.610380 2680 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:29:12.610586 kubelet[2680]: I1213 13:29:12.610503 2680 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:29:12.610586 kubelet[2680]: I1213 13:29:12.610563 2680 topology_manager.go:215] "Topology Admit Handler" podUID="9d2e9130dfba75db5131d7db10b63e50" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:29:12.615996 kubelet[2680]: E1213 13:29:12.615913 2680 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 13:29:12.616394 kubelet[2680]: E1213 13:29:12.616362 2680 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 13:29:12.626464 sudo[2717]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 13:29:12.626845 sudo[2717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 13:29:12.650486 kubelet[2680]: I1213 13:29:12.650465 2680 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:29:12.658217 kubelet[2680]: I1213 13:29:12.657755 2680 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 13:29:12.658217 kubelet[2680]: I1213 13:29:12.657816 2680 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:29:12.797363 kubelet[2680]: I1213 13:29:12.797254 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:29:12.797363 kubelet[2680]: I1213 13:29:12.797287 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d2e9130dfba75db5131d7db10b63e50-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d2e9130dfba75db5131d7db10b63e50\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:29:12.797363 kubelet[2680]: I1213 13:29:12.797304 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:29:12.797363 kubelet[2680]: I1213 13:29:12.797320 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:29:12.797363 kubelet[2680]: I1213 13:29:12.797337 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:29:12.797594 kubelet[2680]: I1213 13:29:12.797353 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:29:12.797594 kubelet[2680]: I1213 13:29:12.797483 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:29:12.797594 kubelet[2680]: I1213 13:29:12.797543 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d2e9130dfba75db5131d7db10b63e50-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d2e9130dfba75db5131d7db10b63e50\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:29:12.797594 kubelet[2680]: I1213 13:29:12.797577 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d2e9130dfba75db5131d7db10b63e50-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d2e9130dfba75db5131d7db10b63e50\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:29:12.917166 kubelet[2680]: E1213 13:29:12.917134 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:12.918393 kubelet[2680]: E1213 13:29:12.917591 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:12.918393 kubelet[2680]: E1213 13:29:12.917645 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:13.081008 sudo[2717]: pam_unix(sudo:session): session closed for user root Dec 13 13:29:13.487118 kubelet[2680]: I1213 13:29:13.487088 2680 apiserver.go:52] "Watching apiserver" Dec 13 13:29:13.496524 kubelet[2680]: I1213 13:29:13.496509 2680 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:29:13.521784 kubelet[2680]: E1213 13:29:13.521380 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:13.522437 kubelet[2680]: E1213 13:29:13.521987 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:13.526981 kubelet[2680]: E1213 13:29:13.526952 2680 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 13:29:13.527358 kubelet[2680]: E1213 13:29:13.527327 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:13.542725 kubelet[2680]: I1213 13:29:13.542537 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.542519725 podStartE2EDuration="2.542519725s" podCreationTimestamp="2024-12-13 13:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:29:13.536831181 +0000 UTC m=+1.397988754" watchObservedRunningTime="2024-12-13 13:29:13.542519725 +0000 UTC m=+1.403677297" Dec 13 13:29:13.656065 kubelet[2680]: I1213 13:29:13.655646 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.655627865 podStartE2EDuration="1.655627865s" podCreationTimestamp="2024-12-13 13:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:29:13.542648332 +0000 UTC m=+1.403805904" watchObservedRunningTime="2024-12-13 13:29:13.655627865 +0000 UTC m=+1.516785437" Dec 13 13:29:13.704876 kubelet[2680]: I1213 13:29:13.704814 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.704794225 podStartE2EDuration="4.704794225s" podCreationTimestamp="2024-12-13 13:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:29:13.656547859 +0000 UTC m=+1.517705431" watchObservedRunningTime="2024-12-13 13:29:13.704794225 +0000 UTC m=+1.565951797" Dec 13 13:29:14.522210 kubelet[2680]: E1213 13:29:14.522172 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:14.541908 sudo[1680]: pam_unix(sudo:session): session closed for user root Dec 13 13:29:14.543471 sshd[1679]: Connection closed by 10.0.0.1 port 57986 Dec 13 13:29:14.543887 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:14.547640 systemd[1]: sshd@8-10.0.0.99:22-10.0.0.1:57986.service: Deactivated successfully. Dec 13 13:29:14.549566 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:29:14.549870 systemd[1]: session-9.scope: Consumed 4.949s CPU time, 190.9M memory peak, 0B memory swap peak. Dec 13 13:29:14.550366 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:29:14.551382 systemd-logind[1471]: Removed session 9. 
Dec 13 13:29:16.468159 kubelet[2680]: E1213 13:29:16.468124 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:16.999331 kubelet[2680]: E1213 13:29:16.999286 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:17.641976 update_engine[1473]: I20241213 13:29:17.641866 1473 update_attempter.cc:509] Updating boot flags... Dec 13 13:29:17.672793 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2767) Dec 13 13:29:17.713751 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2768) Dec 13 13:29:17.743765 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2768) Dec 13 13:29:22.225946 kubelet[2680]: E1213 13:29:22.225901 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:22.532011 kubelet[2680]: E1213 13:29:22.531912 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:29:24.408006 kubelet[2680]: I1213 13:29:24.407974 2680 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:29:24.408395 containerd[1484]: time="2024-12-13T13:29:24.408263400Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 13:29:24.408635 kubelet[2680]: I1213 13:29:24.408403 2680 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:29:24.775515 kubelet[2680]: I1213 13:29:24.774931 2680 topology_manager.go:215] "Topology Admit Handler" podUID="4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c" podNamespace="kube-system" podName="kube-proxy-v57lp" Dec 13 13:29:24.784765 kubelet[2680]: I1213 13:29:24.784316 2680 topology_manager.go:215] "Topology Admit Handler" podUID="7b4e200d-1707-49a7-a8de-c2dda3718c20" podNamespace="kube-system" podName="cilium-zqjpz" Dec 13 13:29:24.786059 systemd[1]: Created slice kubepods-besteffort-pod4d185994_8b0b_4fd1_ad21_ef4bc1b5f96c.slice - libcontainer container kubepods-besteffort-pod4d185994_8b0b_4fd1_ad21_ef4bc1b5f96c.slice. Dec 13 13:29:24.800449 systemd[1]: Created slice kubepods-burstable-pod7b4e200d_1707_49a7_a8de_c2dda3718c20.slice - libcontainer container kubepods-burstable-pod7b4e200d_1707_49a7_a8de_c2dda3718c20.slice. 
Dec 13 13:29:24.872428 kubelet[2680]: I1213 13:29:24.872379 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c-kube-proxy\") pod \"kube-proxy-v57lp\" (UID: \"4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c\") " pod="kube-system/kube-proxy-v57lp"
Dec 13 13:29:24.872428 kubelet[2680]: I1213 13:29:24.872422 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-cgroup\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872428 kubelet[2680]: I1213 13:29:24.872436 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-xtables-lock\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872428 kubelet[2680]: I1213 13:29:24.872450 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b4e200d-1707-49a7-a8de-c2dda3718c20-clustermesh-secrets\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872652 kubelet[2680]: I1213 13:29:24.872464 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c-xtables-lock\") pod \"kube-proxy-v57lp\" (UID: \"4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c\") " pod="kube-system/kube-proxy-v57lp"
Dec 13 13:29:24.872652 kubelet[2680]: I1213 13:29:24.872477 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-lib-modules\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872652 kubelet[2680]: I1213 13:29:24.872497 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hxjt\" (UniqueName: \"kubernetes.io/projected/4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c-kube-api-access-7hxjt\") pod \"kube-proxy-v57lp\" (UID: \"4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c\") " pod="kube-system/kube-proxy-v57lp"
Dec 13 13:29:24.872652 kubelet[2680]: I1213 13:29:24.872512 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cni-path\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872652 kubelet[2680]: I1213 13:29:24.872524 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-config-path\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872652 kubelet[2680]: I1213 13:29:24.872537 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-bpf-maps\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872832 kubelet[2680]: I1213 13:29:24.872550 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-host-proc-sys-net\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872832 kubelet[2680]: I1213 13:29:24.872565 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g895m\" (UniqueName: \"kubernetes.io/projected/7b4e200d-1707-49a7-a8de-c2dda3718c20-kube-api-access-g895m\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872832 kubelet[2680]: I1213 13:29:24.872577 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c-lib-modules\") pod \"kube-proxy-v57lp\" (UID: \"4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c\") " pod="kube-system/kube-proxy-v57lp"
Dec 13 13:29:24.872832 kubelet[2680]: I1213 13:29:24.872592 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-run\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872832 kubelet[2680]: I1213 13:29:24.872605 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-hostproc\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872832 kubelet[2680]: I1213 13:29:24.872619 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b4e200d-1707-49a7-a8de-c2dda3718c20-hubble-tls\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872965 kubelet[2680]: I1213 13:29:24.872633 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-etc-cni-netd\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.872965 kubelet[2680]: I1213 13:29:24.872648 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-host-proc-sys-kernel\") pod \"cilium-zqjpz\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " pod="kube-system/cilium-zqjpz"
Dec 13 13:29:24.980074 kubelet[2680]: E1213 13:29:24.980022 2680 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 13:29:24.980074 kubelet[2680]: E1213 13:29:24.980072 2680 projected.go:200] Error preparing data for projected volume kube-api-access-7hxjt for pod kube-system/kube-proxy-v57lp: configmap "kube-root-ca.crt" not found
Dec 13 13:29:24.980241 kubelet[2680]: E1213 13:29:24.980120 2680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c-kube-api-access-7hxjt podName:4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c nodeName:}" failed. No retries permitted until 2024-12-13 13:29:25.48010133 +0000 UTC m=+13.341258902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7hxjt" (UniqueName: "kubernetes.io/projected/4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c-kube-api-access-7hxjt") pod "kube-proxy-v57lp" (UID: "4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c") : configmap "kube-root-ca.crt" not found
Dec 13 13:29:24.980241 kubelet[2680]: E1213 13:29:24.980023 2680 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 13:29:24.980241 kubelet[2680]: E1213 13:29:24.980181 2680 projected.go:200] Error preparing data for projected volume kube-api-access-g895m for pod kube-system/cilium-zqjpz: configmap "kube-root-ca.crt" not found
Dec 13 13:29:24.980241 kubelet[2680]: E1213 13:29:24.980226 2680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b4e200d-1707-49a7-a8de-c2dda3718c20-kube-api-access-g895m podName:7b4e200d-1707-49a7-a8de-c2dda3718c20 nodeName:}" failed. No retries permitted until 2024-12-13 13:29:25.480215718 +0000 UTC m=+13.341373290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g895m" (UniqueName: "kubernetes.io/projected/7b4e200d-1707-49a7-a8de-c2dda3718c20-kube-api-access-g895m") pod "cilium-zqjpz" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20") : configmap "kube-root-ca.crt" not found
Dec 13 13:29:25.427138 kubelet[2680]: I1213 13:29:25.427062 2680 topology_manager.go:215] "Topology Admit Handler" podUID="65dcdb5e-1c91-43a5-9edd-304ce97096b8" podNamespace="kube-system" podName="cilium-operator-599987898-6s2nf"
Dec 13 13:29:25.436785 systemd[1]: Created slice kubepods-besteffort-pod65dcdb5e_1c91_43a5_9edd_304ce97096b8.slice - libcontainer container kubepods-besteffort-pod65dcdb5e_1c91_43a5_9edd_304ce97096b8.slice.
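Note on the MountVolume.SetUp failures above: they are a transient ordering race, the pods reference the kube-root-ca.crt ConfigMap before the controller that publishes it has created it, and the kubelet retries; the records show the first retry gated at 500ms (durationBeforeRetry 500ms). A rough sketch of that backoff shape, where the initial delay is taken from the records but the doubling factor and cap are assumptions for illustration:

    import itertools

    def backoff_delays(initial=0.5, factor=2.0, cap=122.0):
        """Yield retry delays in seconds: 0.5s, 1s, 2s, ... capped."""
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    print(list(itertools.islice(backoff_delays(), 6)))
    # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]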
Dec 13 13:29:25.475629 kubelet[2680]: I1213 13:29:25.475567 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfzt\" (UniqueName: \"kubernetes.io/projected/65dcdb5e-1c91-43a5-9edd-304ce97096b8-kube-api-access-xhfzt\") pod \"cilium-operator-599987898-6s2nf\" (UID: \"65dcdb5e-1c91-43a5-9edd-304ce97096b8\") " pod="kube-system/cilium-operator-599987898-6s2nf"
Dec 13 13:29:25.475819 kubelet[2680]: I1213 13:29:25.475649 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65dcdb5e-1c91-43a5-9edd-304ce97096b8-cilium-config-path\") pod \"cilium-operator-599987898-6s2nf\" (UID: \"65dcdb5e-1c91-43a5-9edd-304ce97096b8\") " pod="kube-system/cilium-operator-599987898-6s2nf"
Dec 13 13:29:25.696746 kubelet[2680]: E1213 13:29:25.696586 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:25.697366 containerd[1484]: time="2024-12-13T13:29:25.697308251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v57lp,Uid:4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c,Namespace:kube-system,Attempt:0,}"
Dec 13 13:29:25.705038 kubelet[2680]: E1213 13:29:25.705007 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:25.705400 containerd[1484]: time="2024-12-13T13:29:25.705368289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zqjpz,Uid:7b4e200d-1707-49a7-a8de-c2dda3718c20,Namespace:kube-system,Attempt:0,}"
Dec 13 13:29:25.724651 containerd[1484]: time="2024-12-13T13:29:25.724549326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:29:25.724651 containerd[1484]: time="2024-12-13T13:29:25.724628205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:29:25.724826 containerd[1484]: time="2024-12-13T13:29:25.724652161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:25.725472 containerd[1484]: time="2024-12-13T13:29:25.725328393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:25.728273 containerd[1484]: time="2024-12-13T13:29:25.728191400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:29:25.728273 containerd[1484]: time="2024-12-13T13:29:25.728235974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:29:25.728273 containerd[1484]: time="2024-12-13T13:29:25.728246865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:25.728403 containerd[1484]: time="2024-12-13T13:29:25.728318200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:25.740106 kubelet[2680]: E1213 13:29:25.739236 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:25.740991 containerd[1484]: time="2024-12-13T13:29:25.740956415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6s2nf,Uid:65dcdb5e-1c91-43a5-9edd-304ce97096b8,Namespace:kube-system,Attempt:0,}"
Dec 13 13:29:25.746909 systemd[1]: Started cri-containerd-9541b4afba5477ac7742df9bd41496abaf9605f861fefeaf7126d1d7c9f32d5d.scope - libcontainer container 9541b4afba5477ac7742df9bd41496abaf9605f861fefeaf7126d1d7c9f32d5d.
Dec 13 13:29:25.750809 systemd[1]: Started cri-containerd-7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d.scope - libcontainer container 7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d.
Dec 13 13:29:25.775070 containerd[1484]: time="2024-12-13T13:29:25.774917438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:29:25.775070 containerd[1484]: time="2024-12-13T13:29:25.774983462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:29:25.775070 containerd[1484]: time="2024-12-13T13:29:25.775002970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:25.775274 containerd[1484]: time="2024-12-13T13:29:25.775092159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:25.781227 containerd[1484]: time="2024-12-13T13:29:25.781068427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v57lp,Uid:4d185994-8b0b-4fd1-ad21-ef4bc1b5f96c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9541b4afba5477ac7742df9bd41496abaf9605f861fefeaf7126d1d7c9f32d5d\""
Dec 13 13:29:25.782209 kubelet[2680]: E1213 13:29:25.782184 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:25.786530 containerd[1484]: time="2024-12-13T13:29:25.786476808Z" level=info msg="CreateContainer within sandbox \"9541b4afba5477ac7742df9bd41496abaf9605f861fefeaf7126d1d7c9f32d5d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 13:29:25.792328 containerd[1484]: time="2024-12-13T13:29:25.792154360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zqjpz,Uid:7b4e200d-1707-49a7-a8de-c2dda3718c20,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\""
Dec 13 13:29:25.793741 kubelet[2680]: E1213 13:29:25.793687 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:25.795239 containerd[1484]: time="2024-12-13T13:29:25.795196967Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 13:29:25.805980 systemd[1]: Started cri-containerd-c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3.scope - libcontainer container c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3.
Dec 13 13:29:25.813776 containerd[1484]: time="2024-12-13T13:29:25.813682406Z" level=info msg="CreateContainer within sandbox \"9541b4afba5477ac7742df9bd41496abaf9605f861fefeaf7126d1d7c9f32d5d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2b1a4a9d952b95f583dc930d769660fd19531101a875243269f44f60ded7c232\""
Dec 13 13:29:25.814394 containerd[1484]: time="2024-12-13T13:29:25.814361513Z" level=info msg="StartContainer for \"2b1a4a9d952b95f583dc930d769660fd19531101a875243269f44f60ded7c232\""
Dec 13 13:29:25.844944 systemd[1]: Started cri-containerd-2b1a4a9d952b95f583dc930d769660fd19531101a875243269f44f60ded7c232.scope - libcontainer container 2b1a4a9d952b95f583dc930d769660fd19531101a875243269f44f60ded7c232.
Dec 13 13:29:25.849186 containerd[1484]: time="2024-12-13T13:29:25.849138512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6s2nf,Uid:65dcdb5e-1c91-43a5-9edd-304ce97096b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3\""
Dec 13 13:29:25.849909 kubelet[2680]: E1213 13:29:25.849879 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:25.883940 containerd[1484]: time="2024-12-13T13:29:25.883279326Z" level=info msg="StartContainer for \"2b1a4a9d952b95f583dc930d769660fd19531101a875243269f44f60ded7c232\" returns successfully"
Dec 13 13:29:26.472387 kubelet[2680]: E1213 13:29:26.472347 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:26.538919 kubelet[2680]: E1213 13:29:26.538890 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:27.005113 kubelet[2680]: E1213 13:29:27.005085 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:27.044145 kubelet[2680]: I1213 13:29:27.043876 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v57lp" podStartSLOduration=3.043701096 podStartE2EDuration="3.043701096s" podCreationTimestamp="2024-12-13 13:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:29:26.547166918 +0000 UTC m=+14.408324490" watchObservedRunningTime="2024-12-13 13:29:27.043701096 +0000 UTC m=+14.904858668"
Dec 13 13:29:34.546739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500962576.mount: Deactivated successfully.
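Note on the recurring dns.go:153 errors: the node's resolv.conf lists more nameservers than the kubelet will propagate to pods. The glibc resolver honors at most three entries, so the kubelet keeps the first three and logs the applied line (here 1.1.1.1 1.0.0.1 8.8.8.8). A sketch of that truncation, assuming a hypothetical fourth server on the host:

    MAX_NAMESERVERS = 3  # glibc resolver limit (MAXNS)

    def applied_nameservers(resolv_conf: str) -> list[str]:
        """Keep the first three nameserver entries, mirroring what the
        kubelet does when building a pod's resolv.conf."""
        servers = [line.split()[1]
                   for line in resolv_conf.splitlines()
                   if line.startswith("nameserver")]
        return servers[:MAX_NAMESERVERS]

    conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
            "nameserver 8.8.8.8\nnameserver 8.8.4.4\n")  # 8.8.4.4 is hypothetical
    print(applied_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']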
Dec 13 13:29:37.285618 containerd[1484]: time="2024-12-13T13:29:37.285569152Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:29:37.286512 containerd[1484]: time="2024-12-13T13:29:37.286255576Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734767"
Dec 13 13:29:37.287458 containerd[1484]: time="2024-12-13T13:29:37.287398901Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:29:37.289090 containerd[1484]: time="2024-12-13T13:29:37.289060072Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.493819883s"
Dec 13 13:29:37.289164 containerd[1484]: time="2024-12-13T13:29:37.289091180Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 13:29:37.290044 containerd[1484]: time="2024-12-13T13:29:37.290017146Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 13:29:37.293748 containerd[1484]: time="2024-12-13T13:29:37.293720506Z" level=info msg="CreateContainer within sandbox \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 13:29:37.308725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4063877337.mount: Deactivated successfully.
Dec 13 13:29:37.311211 containerd[1484]: time="2024-12-13T13:29:37.311171077Z" level=info msg="CreateContainer within sandbox \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403\""
Dec 13 13:29:37.311693 containerd[1484]: time="2024-12-13T13:29:37.311650661Z" level=info msg="StartContainer for \"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403\""
Dec 13 13:29:37.337830 systemd[1]: Started cri-containerd-b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403.scope - libcontainer container b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403.
Dec 13 13:29:37.360814 containerd[1484]: time="2024-12-13T13:29:37.360775783Z" level=info msg="StartContainer for \"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403\" returns successfully"
Dec 13 13:29:37.371922 systemd[1]: cri-containerd-b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403.scope: Deactivated successfully.
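Note on the PullImage/Pulled pair above: the reference is digest-pinned, a repository name, an optional tag, and a sha256 digest after "@"; containerd reports the resolved local image id separately. A small, illustrative parser for that form:

    def split_image_ref(ref: str):
        """Split 'repo:tag@sha256:digest' into (name, tag, digest);
        tag and digest may be absent."""
        name, _, digest = ref.partition("@")
        # A tag is a ':' segment after the last '/', so a port number
        # in the registry host is not mistaken for a tag.
        head, sep, tail = name.rpartition(":")
        if sep and "/" not in tail:
            name, tag = head, tail
        else:
            tag = ""
        return name, tag, digest

    ref = ("quay.io/cilium/cilium:v1.12.5@sha256:"
           "06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
    print(split_image_ref(ref))
    # ('quay.io/cilium/cilium', 'v1.12.5', 'sha256:06ce2b0a...')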
Dec 13 13:29:37.798858 containerd[1484]: time="2024-12-13T13:29:37.798798880Z" level=info msg="shim disconnected" id=b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403 namespace=k8s.io
Dec 13 13:29:37.798858 containerd[1484]: time="2024-12-13T13:29:37.798852401Z" level=warning msg="cleaning up after shim disconnected" id=b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403 namespace=k8s.io
Dec 13 13:29:37.798858 containerd[1484]: time="2024-12-13T13:29:37.798862571Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:29:38.287920 kubelet[2680]: E1213 13:29:38.287884 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:38.290517 containerd[1484]: time="2024-12-13T13:29:38.290433331Z" level=info msg="CreateContainer within sandbox \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 13:29:38.305267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403-rootfs.mount: Deactivated successfully.
Dec 13 13:29:38.306475 containerd[1484]: time="2024-12-13T13:29:38.306437166Z" level=info msg="CreateContainer within sandbox \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52\""
Dec 13 13:29:38.307036 containerd[1484]: time="2024-12-13T13:29:38.306996711Z" level=info msg="StartContainer for \"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52\""
Dec 13 13:29:38.336855 systemd[1]: Started cri-containerd-66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52.scope - libcontainer container 66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52.
Dec 13 13:29:38.363783 containerd[1484]: time="2024-12-13T13:29:38.363739198Z" level=info msg="StartContainer for \"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52\" returns successfully"
Dec 13 13:29:38.375790 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:29:38.376400 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:29:38.376471 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:29:38.382020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:29:38.382259 systemd[1]: cri-containerd-66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52.scope: Deactivated successfully.
Dec 13 13:29:38.395482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52-rootfs.mount: Deactivated successfully.
Dec 13 13:29:38.399479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:29:38.616043 containerd[1484]: time="2024-12-13T13:29:38.615896467Z" level=info msg="shim disconnected" id=66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52 namespace=k8s.io
Dec 13 13:29:38.616043 containerd[1484]: time="2024-12-13T13:29:38.615947023Z" level=warning msg="cleaning up after shim disconnected" id=66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52 namespace=k8s.io
Dec 13 13:29:38.616043 containerd[1484]: time="2024-12-13T13:29:38.615958364Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:29:39.291049 kubelet[2680]: E1213 13:29:39.290744 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:39.292684 containerd[1484]: time="2024-12-13T13:29:39.292607299Z" level=info msg="CreateContainer within sandbox \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 13:29:39.305453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884995866.mount: Deactivated successfully.
Dec 13 13:29:39.318905 containerd[1484]: time="2024-12-13T13:29:39.318853269Z" level=info msg="CreateContainer within sandbox \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e\""
Dec 13 13:29:39.319373 containerd[1484]: time="2024-12-13T13:29:39.319330198Z" level=info msg="StartContainer for \"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e\""
Dec 13 13:29:39.352844 systemd[1]: Started cri-containerd-65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e.scope - libcontainer container 65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e.
Dec 13 13:29:39.383802 containerd[1484]: time="2024-12-13T13:29:39.383240638Z" level=info msg="StartContainer for \"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e\" returns successfully"
Dec 13 13:29:39.384141 systemd[1]: cri-containerd-65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e.scope: Deactivated successfully.
Dec 13 13:29:39.403751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e-rootfs.mount: Deactivated successfully.
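Note on the unit names above: each container id appears twice in systemd, as a cri-containerd-<id>.scope while the task runs, and as a run-containerd-...-<id>-rootfs.mount that is cleaned up after the shim exits. A helper deriving both names from a container id (naming as observed in this log; treat as illustrative):

    def units_for_container(cid: str) -> dict[str, str]:
        """Map a container id to the systemd units seen in this log."""
        rootfs = "-".join([
            "run-containerd-io.containerd.runtime.v2.task-k8s.io",
            cid,
            "rootfs.mount",
        ])
        return {"scope": f"cri-containerd-{cid}.scope", "rootfs_mount": rootfs}

    cid = "65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e"
    for kind, unit in units_for_container(cid).items():
        print(kind, unit)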
Dec 13 13:29:39.462461 containerd[1484]: time="2024-12-13T13:29:39.462151343Z" level=info msg="shim disconnected" id=65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e namespace=k8s.io
Dec 13 13:29:39.462461 containerd[1484]: time="2024-12-13T13:29:39.462230823Z" level=warning msg="cleaning up after shim disconnected" id=65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e namespace=k8s.io
Dec 13 13:29:39.462461 containerd[1484]: time="2024-12-13T13:29:39.462243426Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:29:39.776351 containerd[1484]: time="2024-12-13T13:29:39.776293971Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:29:39.776999 containerd[1484]: time="2024-12-13T13:29:39.776958112Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907193"
Dec 13 13:29:39.778035 containerd[1484]: time="2024-12-13T13:29:39.778006417Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:29:39.779420 containerd[1484]: time="2024-12-13T13:29:39.779385896Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.489338603s"
Dec 13 13:29:39.779420 containerd[1484]: time="2024-12-13T13:29:39.779414099Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 13:29:39.781408 containerd[1484]: time="2024-12-13T13:29:39.781379061Z" level=info msg="CreateContainer within sandbox \"c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 13:29:39.794727 containerd[1484]: time="2024-12-13T13:29:39.794670968Z" level=info msg="CreateContainer within sandbox \"c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\""
Dec 13 13:29:39.795214 containerd[1484]: time="2024-12-13T13:29:39.795171080Z" level=info msg="StartContainer for \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\""
Dec 13 13:29:39.823919 systemd[1]: Started cri-containerd-bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce.scope - libcontainer container bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce.
Dec 13 13:29:39.849881 containerd[1484]: time="2024-12-13T13:29:39.849814544Z" level=info msg="StartContainer for \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\" returns successfully"
Dec 13 13:29:40.296874 kubelet[2680]: E1213 13:29:40.296485 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:40.300457 kubelet[2680]: E1213 13:29:40.300049 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:40.302144 containerd[1484]: time="2024-12-13T13:29:40.302004884Z" level=info msg="CreateContainer within sandbox \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 13:29:40.688997 kubelet[2680]: I1213 13:29:40.688940 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-6s2nf" podStartSLOduration=1.759623329 podStartE2EDuration="15.688917166s" podCreationTimestamp="2024-12-13 13:29:25 +0000 UTC" firstStartedPulling="2024-12-13 13:29:25.850875284 +0000 UTC m=+13.712032856" lastFinishedPulling="2024-12-13 13:29:39.780169121 +0000 UTC m=+27.641326693" observedRunningTime="2024-12-13 13:29:40.687277668 +0000 UTC m=+28.548435240" watchObservedRunningTime="2024-12-13 13:29:40.688917166 +0000 UTC m=+28.550074828"
Dec 13 13:29:40.709980 containerd[1484]: time="2024-12-13T13:29:40.709751431Z" level=info msg="CreateContainer within sandbox \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a\""
Dec 13 13:29:40.713952 containerd[1484]: time="2024-12-13T13:29:40.710251153Z" level=info msg="StartContainer for \"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a\""
Dec 13 13:29:40.722091 systemd[1]: Started sshd@9-10.0.0.99:22-10.0.0.1:59756.service - OpenSSH per-connection server daemon (10.0.0.1:59756).
Dec 13 13:29:40.767825 systemd[1]: Started cri-containerd-f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a.scope - libcontainer container f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a.
Dec 13 13:29:40.984585 systemd[1]: cri-containerd-f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a.scope: Deactivated successfully.
Dec 13 13:29:41.024616 containerd[1484]: time="2024-12-13T13:29:41.024560536Z" level=info msg="StartContainer for \"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a\" returns successfully"
Dec 13 13:29:41.045982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a-rootfs.mount: Deactivated successfully.
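Note on the pod_startup_latency_tracker record above: its numbers encode a small calculation worth making explicit. The E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO figure in this record equals that E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing it from the timestamps in the record (nanoseconds trimmed to microseconds, which Python's datetime supports):

    from datetime import datetime, timezone

    def ts(s: str) -> datetime:
        """Parse '2024-12-13 13:29:25.850875284' (ns precision) into an
        aware datetime, truncating to microseconds."""
        date, time_ = s.split()
        sec, frac = time_.rsplit(".", 1)
        return datetime.fromisoformat(f"{date} {sec}.{frac[:6]}") \
                       .replace(tzinfo=timezone.utc)

    created  = ts("2024-12-13 13:29:25.000000000")  # podCreationTimestamp
    pull_beg = ts("2024-12-13 13:29:25.850875284")  # firstStartedPulling
    pull_end = ts("2024-12-13 13:29:39.780169121")  # lastFinishedPulling
    running  = ts("2024-12-13 13:29:40.688917166")  # watchObservedRunningTime

    e2e = (running - created).total_seconds()
    slo = e2e - (pull_end - pull_beg).total_seconds()
    print(f"{e2e:.6f}s e2e, {slo:.6f}s excluding image pull")
    # ~15.688917s and ~1.759623s, matching the record above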
Dec 13 13:29:41.093547 sshd[3336]: Accepted publickey for core from 10.0.0.1 port 59756 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:41.095509 sshd-session[3336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:41.098609 containerd[1484]: time="2024-12-13T13:29:41.098527869Z" level=info msg="shim disconnected" id=f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a namespace=k8s.io
Dec 13 13:29:41.098609 containerd[1484]: time="2024-12-13T13:29:41.098605955Z" level=warning msg="cleaning up after shim disconnected" id=f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a namespace=k8s.io
Dec 13 13:29:41.098791 containerd[1484]: time="2024-12-13T13:29:41.098617306Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:29:41.102824 systemd-logind[1471]: New session 10 of user core.
Dec 13 13:29:41.108939 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 13:29:41.272571 sshd[3380]: Connection closed by 10.0.0.1 port 59756
Dec 13 13:29:41.272866 sshd-session[3336]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:41.277245 systemd[1]: sshd@9-10.0.0.99:22-10.0.0.1:59756.service: Deactivated successfully.
Dec 13 13:29:41.279922 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 13:29:41.280524 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit.
Dec 13 13:29:41.281388 systemd-logind[1471]: Removed session 10.
Dec 13 13:29:41.304260 kubelet[2680]: E1213 13:29:41.304221 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:41.304783 kubelet[2680]: E1213 13:29:41.304360 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:41.306680 containerd[1484]: time="2024-12-13T13:29:41.306643939Z" level=info msg="CreateContainer within sandbox \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 13:29:41.333609 containerd[1484]: time="2024-12-13T13:29:41.333559735Z" level=info msg="CreateContainer within sandbox \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\""
Dec 13 13:29:41.334090 containerd[1484]: time="2024-12-13T13:29:41.334062672Z" level=info msg="StartContainer for \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\""
Dec 13 13:29:41.358495 systemd[1]: run-containerd-runc-k8s.io-be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0-runc.7r0fIc.mount: Deactivated successfully.
Dec 13 13:29:41.367932 systemd[1]: Started cri-containerd-be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0.scope - libcontainer container be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0.
Dec 13 13:29:41.418781 containerd[1484]: time="2024-12-13T13:29:41.418735330Z" level=info msg="StartContainer for \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\" returns successfully"
Dec 13 13:29:41.598243 kubelet[2680]: I1213 13:29:41.598123 2680 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 13:29:41.619994 kubelet[2680]: I1213 13:29:41.619556 2680 topology_manager.go:215] "Topology Admit Handler" podUID="bc4317ac-6578-4612-86cc-149bcb35a6df" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nq2pc"
Dec 13 13:29:41.621739 kubelet[2680]: I1213 13:29:41.621356 2680 topology_manager.go:215] "Topology Admit Handler" podUID="fc5f7a82-a87e-432c-93cd-b60b2f203c56" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9kr4h"
Dec 13 13:29:41.632369 systemd[1]: Created slice kubepods-burstable-podbc4317ac_6578_4612_86cc_149bcb35a6df.slice - libcontainer container kubepods-burstable-podbc4317ac_6578_4612_86cc_149bcb35a6df.slice.
Dec 13 13:29:41.641276 systemd[1]: Created slice kubepods-burstable-podfc5f7a82_a87e_432c_93cd_b60b2f203c56.slice - libcontainer container kubepods-burstable-podfc5f7a82_a87e_432c_93cd_b60b2f203c56.slice.
Dec 13 13:29:41.702816 kubelet[2680]: I1213 13:29:41.702759 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc5f7a82-a87e-432c-93cd-b60b2f203c56-config-volume\") pod \"coredns-7db6d8ff4d-9kr4h\" (UID: \"fc5f7a82-a87e-432c-93cd-b60b2f203c56\") " pod="kube-system/coredns-7db6d8ff4d-9kr4h"
Dec 13 13:29:41.702816 kubelet[2680]: I1213 13:29:41.702816 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s446m\" (UniqueName: \"kubernetes.io/projected/fc5f7a82-a87e-432c-93cd-b60b2f203c56-kube-api-access-s446m\") pod \"coredns-7db6d8ff4d-9kr4h\" (UID: \"fc5f7a82-a87e-432c-93cd-b60b2f203c56\") " pod="kube-system/coredns-7db6d8ff4d-9kr4h"
Dec 13 13:29:41.703004 kubelet[2680]: I1213 13:29:41.702843 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc4317ac-6578-4612-86cc-149bcb35a6df-config-volume\") pod \"coredns-7db6d8ff4d-nq2pc\" (UID: \"bc4317ac-6578-4612-86cc-149bcb35a6df\") " pod="kube-system/coredns-7db6d8ff4d-nq2pc"
Dec 13 13:29:41.703004 kubelet[2680]: I1213 13:29:41.702863 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4qbt\" (UniqueName: \"kubernetes.io/projected/bc4317ac-6578-4612-86cc-149bcb35a6df-kube-api-access-k4qbt\") pod \"coredns-7db6d8ff4d-nq2pc\" (UID: \"bc4317ac-6578-4612-86cc-149bcb35a6df\") " pod="kube-system/coredns-7db6d8ff4d-nq2pc"
Dec 13 13:29:41.939024 kubelet[2680]: E1213 13:29:41.938985 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:41.939818 containerd[1484]: time="2024-12-13T13:29:41.939759439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nq2pc,Uid:bc4317ac-6578-4612-86cc-149bcb35a6df,Namespace:kube-system,Attempt:0,}"
Dec 13 13:29:41.944115 kubelet[2680]: E1213 13:29:41.944087 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:41.954821 containerd[1484]: time="2024-12-13T13:29:41.954762958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9kr4h,Uid:fc5f7a82-a87e-432c-93cd-b60b2f203c56,Namespace:kube-system,Attempt:0,}"
Dec 13 13:29:42.311062 kubelet[2680]: E1213 13:29:42.310887 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:42.323804 kubelet[2680]: I1213 13:29:42.323743 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zqjpz" podStartSLOduration=6.828593369 podStartE2EDuration="18.323721733s" podCreationTimestamp="2024-12-13 13:29:24 +0000 UTC" firstStartedPulling="2024-12-13 13:29:25.794770438 +0000 UTC m=+13.655928011" lastFinishedPulling="2024-12-13 13:29:37.289898793 +0000 UTC m=+25.151056375" observedRunningTime="2024-12-13 13:29:42.32255081 +0000 UTC m=+30.183708382" watchObservedRunningTime="2024-12-13 13:29:42.323721733 +0000 UTC m=+30.184879305"
Dec 13 13:29:43.311576 kubelet[2680]: E1213 13:29:43.311545 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:43.666497 systemd-networkd[1427]: cilium_host: Link UP
Dec 13 13:29:43.666657 systemd-networkd[1427]: cilium_net: Link UP
Dec 13 13:29:43.666859 systemd-networkd[1427]: cilium_net: Gained carrier
Dec 13 13:29:43.667030 systemd-networkd[1427]: cilium_host: Gained carrier
Dec 13 13:29:43.777918 systemd-networkd[1427]: cilium_vxlan: Link UP
Dec 13 13:29:43.777931 systemd-networkd[1427]: cilium_vxlan: Gained carrier
Dec 13 13:29:43.791920 systemd-networkd[1427]: cilium_net: Gained IPv6LL
Dec 13 13:29:43.814892 systemd-networkd[1427]: cilium_host: Gained IPv6LL
Dec 13 13:29:43.996741 kernel: NET: Registered PF_ALG protocol family
Dec 13 13:29:44.313083 kubelet[2680]: E1213 13:29:44.313054 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:44.629589 systemd-networkd[1427]: lxc_health: Link UP
Dec 13 13:29:44.639318 systemd-networkd[1427]: lxc_health: Gained carrier
Dec 13 13:29:44.989831 systemd-networkd[1427]: lxcac13724cec3f: Link UP
Dec 13 13:29:44.998808 kernel: eth0: renamed from tmpdcc52
Dec 13 13:29:45.002519 systemd-networkd[1427]: lxcac13724cec3f: Gained carrier
Dec 13 13:29:45.010651 systemd-networkd[1427]: lxc7c70f93a10ef: Link UP
Dec 13 13:29:45.020735 kernel: eth0: renamed from tmp8c80b
Dec 13 13:29:45.028039 systemd-networkd[1427]: lxc7c70f93a10ef: Gained carrier
Dec 13 13:29:45.039817 systemd-networkd[1427]: cilium_vxlan: Gained IPv6LL
Dec 13 13:29:45.809495 systemd-networkd[1427]: lxc_health: Gained IPv6LL
Dec 13 13:29:45.812883 kubelet[2680]: E1213 13:29:45.812651 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:46.288082 systemd[1]: Started sshd@10-10.0.0.99:22-10.0.0.1:55120.service - OpenSSH per-connection server daemon (10.0.0.1:55120).
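Note on the interface churn above ("eth0: renamed from tmpdcc52", the lxc* devices gaining carrier): this is the usual CNI veth pattern, the peer is created under a temporary name, moved into the pod's network namespace, and renamed to eth0, while the host-side end remains as the lxc* device. Roughly, in iproute2 terms, a hand-run approximation of what the plugin does over netlink (interface and namespace names here are placeholders, and this needs root):

    import subprocess

    def run(*args: str) -> None:
        subprocess.run(args, check=True)

    # Create a named netns standing in for the pod's sandbox netns.
    run("ip", "netns", "add", "pod1")
    # veth pair: host-side lxc device plus a temporarily named peer.
    run("ip", "link", "add", "lxc_example", "type", "veth", "peer", "name", "tmp0")
    # Move the peer into the netns, then rename it to eth0 there.
    run("ip", "link", "set", "tmp0", "netns", "pod1")
    run("ip", "-n", "pod1", "link", "set", "tmp0", "name", "eth0")
    run("ip", "link", "set", "lxc_example", "up")
    run("ip", "-n", "pod1", "link", "set", "eth0", "up")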
Dec 13 13:29:46.317497 kubelet[2680]: E1213 13:29:46.316979 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:46.343730 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 55120 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:46.344620 sshd-session[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:46.348692 systemd-logind[1471]: New session 11 of user core.
Dec 13 13:29:46.354824 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 13:29:46.469106 sshd[3919]: Connection closed by 10.0.0.1 port 55120
Dec 13 13:29:46.469431 sshd-session[3917]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:46.473882 systemd[1]: sshd@10-10.0.0.99:22-10.0.0.1:55120.service: Deactivated successfully.
Dec 13 13:29:46.475644 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 13:29:46.476400 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit.
Dec 13 13:29:46.477345 systemd-logind[1471]: Removed session 11.
Dec 13 13:29:46.576651 systemd-networkd[1427]: lxcac13724cec3f: Gained IPv6LL
Dec 13 13:29:46.894844 systemd-networkd[1427]: lxc7c70f93a10ef: Gained IPv6LL
Dec 13 13:29:47.319282 kubelet[2680]: E1213 13:29:47.319150 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:48.726692 containerd[1484]: time="2024-12-13T13:29:48.726498331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:29:48.727098 containerd[1484]: time="2024-12-13T13:29:48.726696734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:29:48.727098 containerd[1484]: time="2024-12-13T13:29:48.726746197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:48.727098 containerd[1484]: time="2024-12-13T13:29:48.726864019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:48.757833 systemd[1]: Started cri-containerd-8c80b9a3b176f338487b678d323007b8802282b38c5c70020ef1981b56005afc.scope - libcontainer container 8c80b9a3b176f338487b678d323007b8802282b38c5c70020ef1981b56005afc.
Dec 13 13:29:48.769501 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:29:48.790734 containerd[1484]: time="2024-12-13T13:29:48.790624601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:29:48.790734 containerd[1484]: time="2024-12-13T13:29:48.790687289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:29:48.790919 containerd[1484]: time="2024-12-13T13:29:48.790745419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:48.790919 containerd[1484]: time="2024-12-13T13:29:48.790819768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:48.798050 containerd[1484]: time="2024-12-13T13:29:48.797799982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9kr4h,Uid:fc5f7a82-a87e-432c-93cd-b60b2f203c56,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c80b9a3b176f338487b678d323007b8802282b38c5c70020ef1981b56005afc\""
Dec 13 13:29:48.798561 kubelet[2680]: E1213 13:29:48.798538 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:48.800103 containerd[1484]: time="2024-12-13T13:29:48.800067165Z" level=info msg="CreateContainer within sandbox \"8c80b9a3b176f338487b678d323007b8802282b38c5c70020ef1981b56005afc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:29:48.808439 systemd[1]: run-containerd-runc-k8s.io-dcc52f822022096dd0068366143c961c2850b057c4b3e9dcd7a2fa665fc42929-runc.Kxqf0p.mount: Deactivated successfully.
Dec 13 13:29:48.818833 systemd[1]: Started cri-containerd-dcc52f822022096dd0068366143c961c2850b057c4b3e9dcd7a2fa665fc42929.scope - libcontainer container dcc52f822022096dd0068366143c961c2850b057c4b3e9dcd7a2fa665fc42929.
Dec 13 13:29:48.831202 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:29:48.852880 containerd[1484]: time="2024-12-13T13:29:48.852837363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nq2pc,Uid:bc4317ac-6578-4612-86cc-149bcb35a6df,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcc52f822022096dd0068366143c961c2850b057c4b3e9dcd7a2fa665fc42929\""
Dec 13 13:29:48.853539 kubelet[2680]: E1213 13:29:48.853518 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:48.855087 containerd[1484]: time="2024-12-13T13:29:48.855035907Z" level=info msg="CreateContainer within sandbox \"dcc52f822022096dd0068366143c961c2850b057c4b3e9dcd7a2fa665fc42929\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:29:48.986602 containerd[1484]: time="2024-12-13T13:29:48.986453888Z" level=info msg="CreateContainer within sandbox \"8c80b9a3b176f338487b678d323007b8802282b38c5c70020ef1981b56005afc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46c6aea2a4da7d8af9b69a5d2767a03109032bef286ffbf2df3d47310cec064d\""
Dec 13 13:29:48.987139 containerd[1484]: time="2024-12-13T13:29:48.987108209Z" level=info msg="StartContainer for \"46c6aea2a4da7d8af9b69a5d2767a03109032bef286ffbf2df3d47310cec064d\""
Dec 13 13:29:49.015892 systemd[1]: Started cri-containerd-46c6aea2a4da7d8af9b69a5d2767a03109032bef286ffbf2df3d47310cec064d.scope - libcontainer container 46c6aea2a4da7d8af9b69a5d2767a03109032bef286ffbf2df3d47310cec064d.
Dec 13 13:29:49.049880 containerd[1484]: time="2024-12-13T13:29:49.049837598Z" level=info msg="StartContainer for \"46c6aea2a4da7d8af9b69a5d2767a03109032bef286ffbf2df3d47310cec064d\" returns successfully"
Dec 13 13:29:49.049987 containerd[1484]: time="2024-12-13T13:29:49.049846976Z" level=info msg="CreateContainer within sandbox \"dcc52f822022096dd0068366143c961c2850b057c4b3e9dcd7a2fa665fc42929\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71480e437d34740d39e52256b91481940bfc38487accc0b89a9abf41f39761e2\""
Dec 13 13:29:49.050345 containerd[1484]: time="2024-12-13T13:29:49.050315697Z" level=info msg="StartContainer for \"71480e437d34740d39e52256b91481940bfc38487accc0b89a9abf41f39761e2\""
Dec 13 13:29:49.091975 systemd[1]: Started cri-containerd-71480e437d34740d39e52256b91481940bfc38487accc0b89a9abf41f39761e2.scope - libcontainer container 71480e437d34740d39e52256b91481940bfc38487accc0b89a9abf41f39761e2.
Dec 13 13:29:49.127050 containerd[1484]: time="2024-12-13T13:29:49.127002863Z" level=info msg="StartContainer for \"71480e437d34740d39e52256b91481940bfc38487accc0b89a9abf41f39761e2\" returns successfully"
Dec 13 13:29:49.324096 kubelet[2680]: E1213 13:29:49.323318 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:49.326477 kubelet[2680]: E1213 13:29:49.326447 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:49.336194 kubelet[2680]: I1213 13:29:49.336124 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9kr4h" podStartSLOduration=24.336104356 podStartE2EDuration="24.336104356s" podCreationTimestamp="2024-12-13 13:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:29:49.335173205 +0000 UTC m=+37.196330777" watchObservedRunningTime="2024-12-13 13:29:49.336104356 +0000 UTC m=+37.197261928"
Dec 13 13:29:49.346866 kubelet[2680]: I1213 13:29:49.346795 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nq2pc" podStartSLOduration=24.346778603 podStartE2EDuration="24.346778603s" podCreationTimestamp="2024-12-13 13:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:29:49.345816193 +0000 UTC m=+37.206973785" watchObservedRunningTime="2024-12-13 13:29:49.346778603 +0000 UTC m=+37.207936175"
Dec 13 13:29:50.328617 kubelet[2680]: E1213 13:29:50.328574 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:50.329103 kubelet[2680]: E1213 13:29:50.328685 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:51.330470 kubelet[2680]: E1213 13:29:51.330439 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:51.330912 kubelet[2680]: E1213 13:29:51.330608 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:29:51.482969 systemd[1]: Started sshd@11-10.0.0.99:22-10.0.0.1:55136.service - OpenSSH per-connection server daemon (10.0.0.1:55136).
Dec 13 13:29:51.529069 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 55136 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:51.530372 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:51.534271 systemd-logind[1471]: New session 12 of user core.
Dec 13 13:29:51.542880 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 13:29:51.666274 sshd[4110]: Connection closed by 10.0.0.1 port 55136
Dec 13 13:29:51.666661 sshd-session[4108]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:51.671172 systemd[1]: sshd@11-10.0.0.99:22-10.0.0.1:55136.service: Deactivated successfully.
Dec 13 13:29:51.673301 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 13:29:51.674093 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit.
Dec 13 13:29:51.675185 systemd-logind[1471]: Removed session 12.
Dec 13 13:29:56.677672 systemd[1]: Started sshd@12-10.0.0.99:22-10.0.0.1:48226.service - OpenSSH per-connection server daemon (10.0.0.1:48226).
Dec 13 13:29:56.721517 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 48226 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:29:56.723177 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:56.727495 systemd-logind[1471]: New session 13 of user core.
Dec 13 13:29:56.734993 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 13:29:56.846361 sshd[4129]: Connection closed by 10.0.0.1 port 48226
Dec 13 13:29:56.846766 sshd-session[4127]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:56.850862 systemd[1]: sshd@12-10.0.0.99:22-10.0.0.1:48226.service: Deactivated successfully.
Dec 13 13:29:56.853188 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 13:29:56.853939 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit.
Dec 13 13:29:56.855273 systemd-logind[1471]: Removed session 13.
Dec 13 13:30:01.860015 systemd[1]: Started sshd@13-10.0.0.99:22-10.0.0.1:48242.service - OpenSSH per-connection server daemon (10.0.0.1:48242).
Dec 13 13:30:01.904938 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 48242 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:30:01.906233 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:30:01.910184 systemd-logind[1471]: New session 14 of user core.
Dec 13 13:30:01.925878 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 13:30:02.035998 sshd[4144]: Connection closed by 10.0.0.1 port 48242
Dec 13 13:30:02.036364 sshd-session[4142]: pam_unix(sshd:session): session closed for user core
Dec 13 13:30:02.051759 systemd[1]: sshd@13-10.0.0.99:22-10.0.0.1:48242.service: Deactivated successfully.
Dec 13 13:30:02.053671 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 13:30:02.055270 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit.
Dec 13 13:30:02.064976 systemd[1]: Started sshd@14-10.0.0.99:22-10.0.0.1:48258.service - OpenSSH per-connection server daemon (10.0.0.1:48258).
Dec 13 13:30:02.065876 systemd-logind[1471]: Removed session 14.
Dec 13 13:30:02.105168 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 48258 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:30:02.106499 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:30:02.110435 systemd-logind[1471]: New session 15 of user core.
Dec 13 13:30:02.118825 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 13:30:02.260752 sshd[4159]: Connection closed by 10.0.0.1 port 48258
Dec 13 13:30:02.261613 sshd-session[4157]: pam_unix(sshd:session): session closed for user core
Dec 13 13:30:02.271066 systemd[1]: sshd@14-10.0.0.99:22-10.0.0.1:48258.service: Deactivated successfully.
Dec 13 13:30:02.272946 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 13:30:02.276758 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit.
Dec 13 13:30:02.287023 systemd[1]: Started sshd@15-10.0.0.99:22-10.0.0.1:48264.service - OpenSSH per-connection server daemon (10.0.0.1:48264).
Dec 13 13:30:02.287987 systemd-logind[1471]: Removed session 15.
Dec 13 13:30:02.326065 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 48264 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:30:02.327520 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:30:02.332273 systemd-logind[1471]: New session 16 of user core.
Dec 13 13:30:02.341800 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 13:30:02.449399 sshd[4172]: Connection closed by 10.0.0.1 port 48264
Dec 13 13:30:02.449775 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
Dec 13 13:30:02.453407 systemd[1]: sshd@15-10.0.0.99:22-10.0.0.1:48264.service: Deactivated successfully.
Dec 13 13:30:02.455295 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 13:30:02.455986 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit.
Dec 13 13:30:02.456880 systemd-logind[1471]: Removed session 16.
Dec 13 13:30:07.461196 systemd[1]: Started sshd@16-10.0.0.99:22-10.0.0.1:56746.service - OpenSSH per-connection server daemon (10.0.0.1:56746).
Dec 13 13:30:07.506623 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 56746 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:30:07.508120 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:30:07.512355 systemd-logind[1471]: New session 17 of user core.
Dec 13 13:30:07.518929 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 13:30:07.632139 sshd[4187]: Connection closed by 10.0.0.1 port 56746
Dec 13 13:30:07.632517 sshd-session[4185]: pam_unix(sshd:session): session closed for user core
Dec 13 13:30:07.635697 systemd[1]: sshd@16-10.0.0.99:22-10.0.0.1:56746.service: Deactivated successfully.
Dec 13 13:30:07.637565 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 13:30:07.639345 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit.
Dec 13 13:30:07.640317 systemd-logind[1471]: Removed session 17.
Dec 13 13:30:12.644014 systemd[1]: Started sshd@17-10.0.0.99:22-10.0.0.1:56756.service - OpenSSH per-connection server daemon (10.0.0.1:56756).
Dec 13 13:30:12.686276 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 56756 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:12.687610 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:12.691464 systemd-logind[1471]: New session 18 of user core. Dec 13 13:30:12.701854 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:30:12.814192 sshd[4204]: Connection closed by 10.0.0.1 port 56756 Dec 13 13:30:12.814764 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:12.828401 systemd[1]: sshd@17-10.0.0.99:22-10.0.0.1:56756.service: Deactivated successfully. Dec 13 13:30:12.830361 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:30:12.831959 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:30:12.839942 systemd[1]: Started sshd@18-10.0.0.99:22-10.0.0.1:56762.service - OpenSSH per-connection server daemon (10.0.0.1:56762). Dec 13 13:30:12.840815 systemd-logind[1471]: Removed session 18. Dec 13 13:30:12.882429 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 56762 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:12.883734 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:12.887674 systemd-logind[1471]: New session 19 of user core. Dec 13 13:30:12.894834 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 13:30:13.073840 sshd[4218]: Connection closed by 10.0.0.1 port 56762 Dec 13 13:30:13.074290 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:13.085713 systemd[1]: sshd@18-10.0.0.99:22-10.0.0.1:56762.service: Deactivated successfully. Dec 13 13:30:13.087543 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 13:30:13.089214 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit. Dec 13 13:30:13.090582 systemd[1]: Started sshd@19-10.0.0.99:22-10.0.0.1:56770.service - OpenSSH per-connection server daemon (10.0.0.1:56770). Dec 13 13:30:13.091406 systemd-logind[1471]: Removed session 19. Dec 13 13:30:13.133522 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 56770 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:13.135126 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:13.139388 systemd-logind[1471]: New session 20 of user core. Dec 13 13:30:13.149845 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 13:30:14.463753 sshd[4230]: Connection closed by 10.0.0.1 port 56770 Dec 13 13:30:14.464213 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:14.477411 systemd[1]: sshd@19-10.0.0.99:22-10.0.0.1:56770.service: Deactivated successfully. Dec 13 13:30:14.482801 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 13:30:14.484729 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit. Dec 13 13:30:14.492031 systemd[1]: Started sshd@20-10.0.0.99:22-10.0.0.1:56782.service - OpenSSH per-connection server daemon (10.0.0.1:56782). Dec 13 13:30:14.494013 systemd-logind[1471]: Removed session 20. 
Dec 13 13:30:14.535304 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 56782 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:14.536662 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:14.540561 systemd-logind[1471]: New session 21 of user core. Dec 13 13:30:14.549821 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 13:30:14.768385 sshd[4249]: Connection closed by 10.0.0.1 port 56782 Dec 13 13:30:14.769400 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:14.778600 systemd[1]: sshd@20-10.0.0.99:22-10.0.0.1:56782.service: Deactivated successfully. Dec 13 13:30:14.780242 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 13:30:14.781873 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit. Dec 13 13:30:14.786058 systemd[1]: Started sshd@21-10.0.0.99:22-10.0.0.1:56796.service - OpenSSH per-connection server daemon (10.0.0.1:56796). Dec 13 13:30:14.786861 systemd-logind[1471]: Removed session 21. Dec 13 13:30:14.827051 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 56796 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:14.829060 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:14.833357 systemd-logind[1471]: New session 22 of user core. Dec 13 13:30:14.842835 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 13:30:14.948133 sshd[4262]: Connection closed by 10.0.0.1 port 56796 Dec 13 13:30:14.948484 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:14.952511 systemd[1]: sshd@21-10.0.0.99:22-10.0.0.1:56796.service: Deactivated successfully. Dec 13 13:30:14.954497 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 13:30:14.955176 systemd-logind[1471]: Session 22 logged out. Waiting for processes to exit. Dec 13 13:30:14.956145 systemd-logind[1471]: Removed session 22. Dec 13 13:30:19.959745 systemd[1]: Started sshd@22-10.0.0.99:22-10.0.0.1:49994.service - OpenSSH per-connection server daemon (10.0.0.1:49994). Dec 13 13:30:20.012461 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 49994 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:20.014068 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:20.018308 systemd-logind[1471]: New session 23 of user core. Dec 13 13:30:20.032000 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 13:30:20.141408 sshd[4277]: Connection closed by 10.0.0.1 port 49994 Dec 13 13:30:20.141768 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:20.146049 systemd[1]: sshd@22-10.0.0.99:22-10.0.0.1:49994.service: Deactivated successfully. Dec 13 13:30:20.148111 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 13:30:20.148758 systemd-logind[1471]: Session 23 logged out. Waiting for processes to exit. Dec 13 13:30:20.149612 systemd-logind[1471]: Removed session 23. Dec 13 13:30:25.157207 systemd[1]: Started sshd@23-10.0.0.99:22-10.0.0.1:50008.service - OpenSSH per-connection server daemon (10.0.0.1:50008). 
Dec 13 13:30:25.223361 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 50008 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:25.225737 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:25.237851 systemd-logind[1471]: New session 24 of user core. Dec 13 13:30:25.248057 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 13:30:25.390038 sshd[4295]: Connection closed by 10.0.0.1 port 50008 Dec 13 13:30:25.390468 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:25.395572 systemd[1]: sshd@23-10.0.0.99:22-10.0.0.1:50008.service: Deactivated successfully. Dec 13 13:30:25.398850 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 13:30:25.399721 systemd-logind[1471]: Session 24 logged out. Waiting for processes to exit. Dec 13 13:30:25.401100 systemd-logind[1471]: Removed session 24. Dec 13 13:30:30.401578 systemd[1]: Started sshd@24-10.0.0.99:22-10.0.0.1:59448.service - OpenSSH per-connection server daemon (10.0.0.1:59448). Dec 13 13:30:30.444034 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 59448 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:30.445441 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:30.449168 systemd-logind[1471]: New session 25 of user core. Dec 13 13:30:30.456817 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 13:30:30.559245 sshd[4312]: Connection closed by 10.0.0.1 port 59448 Dec 13 13:30:30.559602 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:30.562916 systemd[1]: sshd@24-10.0.0.99:22-10.0.0.1:59448.service: Deactivated successfully. Dec 13 13:30:30.564762 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 13:30:30.565378 systemd-logind[1471]: Session 25 logged out. Waiting for processes to exit. Dec 13 13:30:30.566199 systemd-logind[1471]: Removed session 25. Dec 13 13:30:35.572458 systemd[1]: Started sshd@25-10.0.0.99:22-10.0.0.1:59450.service - OpenSSH per-connection server daemon (10.0.0.1:59450). Dec 13 13:30:35.614845 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 59450 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:35.616246 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:35.619998 systemd-logind[1471]: New session 26 of user core. Dec 13 13:30:35.631833 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 13:30:35.735517 sshd[4327]: Connection closed by 10.0.0.1 port 59450 Dec 13 13:30:35.735996 sshd-session[4325]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:35.747571 systemd[1]: sshd@25-10.0.0.99:22-10.0.0.1:59450.service: Deactivated successfully. Dec 13 13:30:35.749611 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 13:30:35.751125 systemd-logind[1471]: Session 26 logged out. Waiting for processes to exit. Dec 13 13:30:35.760957 systemd[1]: Started sshd@26-10.0.0.99:22-10.0.0.1:59462.service - OpenSSH per-connection server daemon (10.0.0.1:59462). Dec 13 13:30:35.761793 systemd-logind[1471]: Removed session 26. 
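[Editor's note] The long run of entries above is one repeating pattern: each accepted connection gets a per-connection unit named sshd@N-<local>:22-<peer>:<port>.service, and systemd-logind wraps the login in a session-N.scope that is deactivated and removed when the last process exits. A small analysis sketch that pairs the "New session" and "Removed session" messages from a dump like this one is shown below; the regexes assume the exact logind message formats seen above, and the tool is an editorial aid, not part of any logged component.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Message formats taken from the systemd-logind lines in this log.
var (
	newRe     = regexp.MustCompile(`New session (\d+) of user`)
	removedRe = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	open := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	// Entries in this dump share very long physical lines, so raise the
	// scanner's buffer limit and match all occurrences per line.
	sc.Buffer(make([]byte, 0, 1024*1024), 4*1024*1024)
	for sc.Scan() {
		line := sc.Text()
		for _, m := range newRe.FindAllStringSubmatch(line, -1) {
			open[m[1]] = true
		}
		for _, m := range removedRe.FindAllStringSubmatch(line, -1) {
			delete(open, m[1])
		}
	}
	fmt.Printf("sessions opened but not removed: %d\n", len(open))
}
```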
Dec 13 13:30:35.799002 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 59462 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:35.800252 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:35.803936 systemd-logind[1471]: New session 27 of user core. Dec 13 13:30:35.812838 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 13:30:37.140899 containerd[1484]: time="2024-12-13T13:30:37.140842954Z" level=info msg="StopContainer for \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\" with timeout 30 (s)" Dec 13 13:30:37.149504 containerd[1484]: time="2024-12-13T13:30:37.149470699Z" level=info msg="Stop container \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\" with signal terminated" Dec 13 13:30:37.162585 systemd[1]: cri-containerd-bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce.scope: Deactivated successfully. Dec 13 13:30:37.173177 containerd[1484]: time="2024-12-13T13:30:37.173122453Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:30:37.191741 containerd[1484]: time="2024-12-13T13:30:37.191690916Z" level=info msg="StopContainer for \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\" with timeout 2 (s)" Dec 13 13:30:37.192124 containerd[1484]: time="2024-12-13T13:30:37.191995920Z" level=info msg="Stop container \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\" with signal terminated" Dec 13 13:30:37.192274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce-rootfs.mount: Deactivated successfully. Dec 13 13:30:37.198261 systemd-networkd[1427]: lxc_health: Link DOWN Dec 13 13:30:37.198268 systemd-networkd[1427]: lxc_health: Lost carrier Dec 13 13:30:37.203660 containerd[1484]: time="2024-12-13T13:30:37.203608048Z" level=info msg="shim disconnected" id=bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce namespace=k8s.io Dec 13 13:30:37.203660 containerd[1484]: time="2024-12-13T13:30:37.203659486Z" level=warning msg="cleaning up after shim disconnected" id=bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce namespace=k8s.io Dec 13 13:30:37.203901 containerd[1484]: time="2024-12-13T13:30:37.203668834Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:30:37.221404 containerd[1484]: time="2024-12-13T13:30:37.221335381Z" level=info msg="StopContainer for \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\" returns successfully" Dec 13 13:30:37.224909 containerd[1484]: time="2024-12-13T13:30:37.224890276Z" level=info msg="StopPodSandbox for \"c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3\"" Dec 13 13:30:37.224970 containerd[1484]: time="2024-12-13T13:30:37.224920844Z" level=info msg="Container to stop \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:30:37.227239 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3-shm.mount: Deactivated successfully. 
Dec 13 13:30:37.229607 systemd[1]: cri-containerd-be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0.scope: Deactivated successfully. Dec 13 13:30:37.230204 systemd[1]: cri-containerd-be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0.scope: Consumed 6.868s CPU time. Dec 13 13:30:37.234501 systemd[1]: cri-containerd-c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3.scope: Deactivated successfully. Dec 13 13:30:37.248464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0-rootfs.mount: Deactivated successfully. Dec 13 13:30:37.255384 containerd[1484]: time="2024-12-13T13:30:37.255290306Z" level=info msg="shim disconnected" id=c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3 namespace=k8s.io Dec 13 13:30:37.255384 containerd[1484]: time="2024-12-13T13:30:37.255341344Z" level=warning msg="cleaning up after shim disconnected" id=c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3 namespace=k8s.io Dec 13 13:30:37.255384 containerd[1484]: time="2024-12-13T13:30:37.255349460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:30:37.255690 containerd[1484]: time="2024-12-13T13:30:37.255329882Z" level=info msg="shim disconnected" id=be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0 namespace=k8s.io Dec 13 13:30:37.255690 containerd[1484]: time="2024-12-13T13:30:37.255516079Z" level=warning msg="cleaning up after shim disconnected" id=be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0 namespace=k8s.io Dec 13 13:30:37.255690 containerd[1484]: time="2024-12-13T13:30:37.255526148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:30:37.255953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3-rootfs.mount: Deactivated successfully. 
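[Editor's note] The teardown above is containerd's normal graceful-stop path: StopContainer sends SIGTERM ("signal terminated"), the cri scope deactivates, the shim reports "shim disconnected", and the rootfs mount is cleaned up; only if the timeout (30 s for the operator container, 2 s for the agent) expires would SIGKILL follow. A sketch of the same flow through containerd's Go client is below; the socket path and timeout are assumptions, error handling is abbreviated, and this is not kubelet's actual CRI implementation.

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// Graceful stop mirroring the logged sequence: SIGTERM, wait up to a
// timeout, then SIGKILL as a last resort.
func stopContainer(ctx context.Context, client *containerd.Client, id string, timeout time.Duration) error {
	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		return err
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		return err
	}
	exitCh, err := task.Wait(ctx) // resolves when the shim reports exit
	if err != nil {
		return err
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh: // the happy path seen in this log
	case <-time.After(timeout):
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			return err
		}
		<-exitCh
	}
	_, err = task.Delete(ctx)
	return err
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// The k8s.io namespace matches the "namespace=k8s.io" fields above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	if err := stopContainer(ctx, client,
		"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce",
		30*time.Second); err != nil {
		log.Fatal(err)
	}
}
```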
Dec 13 13:30:37.269272 containerd[1484]: time="2024-12-13T13:30:37.269227334Z" level=info msg="TearDown network for sandbox \"c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3\" successfully" Dec 13 13:30:37.269272 containerd[1484]: time="2024-12-13T13:30:37.269262783Z" level=info msg="StopPodSandbox for \"c46ca81ba8dd3baf3b044ff3fce28f4664dc9f45bd21265356b9d0e3dd92fbb3\" returns successfully" Dec 13 13:30:37.273372 containerd[1484]: time="2024-12-13T13:30:37.273348383Z" level=info msg="StopContainer for \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\" returns successfully" Dec 13 13:30:37.273646 containerd[1484]: time="2024-12-13T13:30:37.273627207Z" level=info msg="StopPodSandbox for \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\"" Dec 13 13:30:37.273733 containerd[1484]: time="2024-12-13T13:30:37.273651694Z" level=info msg="Container to stop \"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:30:37.273733 containerd[1484]: time="2024-12-13T13:30:37.273680388Z" level=info msg="Container to stop \"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:30:37.273733 containerd[1484]: time="2024-12-13T13:30:37.273688143Z" level=info msg="Container to stop \"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:30:37.273733 containerd[1484]: time="2024-12-13T13:30:37.273696129Z" level=info msg="Container to stop \"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:30:37.273884 containerd[1484]: time="2024-12-13T13:30:37.273752807Z" level=info msg="Container to stop \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:30:37.280381 systemd[1]: cri-containerd-7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d.scope: Deactivated successfully. Dec 13 13:30:37.291233 kubelet[2680]: I1213 13:30:37.291193 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65dcdb5e-1c91-43a5-9edd-304ce97096b8-cilium-config-path\") pod \"65dcdb5e-1c91-43a5-9edd-304ce97096b8\" (UID: \"65dcdb5e-1c91-43a5-9edd-304ce97096b8\") " Dec 13 13:30:37.291233 kubelet[2680]: I1213 13:30:37.291233 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhfzt\" (UniqueName: \"kubernetes.io/projected/65dcdb5e-1c91-43a5-9edd-304ce97096b8-kube-api-access-xhfzt\") pod \"65dcdb5e-1c91-43a5-9edd-304ce97096b8\" (UID: \"65dcdb5e-1c91-43a5-9edd-304ce97096b8\") " Dec 13 13:30:37.295552 kubelet[2680]: I1213 13:30:37.295514 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65dcdb5e-1c91-43a5-9edd-304ce97096b8-kube-api-access-xhfzt" (OuterVolumeSpecName: "kube-api-access-xhfzt") pod "65dcdb5e-1c91-43a5-9edd-304ce97096b8" (UID: "65dcdb5e-1c91-43a5-9edd-304ce97096b8"). InnerVolumeSpecName "kube-api-access-xhfzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:30:37.296484 kubelet[2680]: I1213 13:30:37.296464 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65dcdb5e-1c91-43a5-9edd-304ce97096b8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "65dcdb5e-1c91-43a5-9edd-304ce97096b8" (UID: "65dcdb5e-1c91-43a5-9edd-304ce97096b8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:30:37.303071 containerd[1484]: time="2024-12-13T13:30:37.302835516Z" level=info msg="shim disconnected" id=7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d namespace=k8s.io Dec 13 13:30:37.303071 containerd[1484]: time="2024-12-13T13:30:37.302894860Z" level=warning msg="cleaning up after shim disconnected" id=7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d namespace=k8s.io Dec 13 13:30:37.303071 containerd[1484]: time="2024-12-13T13:30:37.302908816Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:30:37.317743 containerd[1484]: time="2024-12-13T13:30:37.317680021Z" level=info msg="TearDown network for sandbox \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" successfully" Dec 13 13:30:37.317743 containerd[1484]: time="2024-12-13T13:30:37.317730027Z" level=info msg="StopPodSandbox for \"7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d\" returns successfully" Dec 13 13:30:37.392422 kubelet[2680]: I1213 13:30:37.392289 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b4e200d-1707-49a7-a8de-c2dda3718c20-hubble-tls\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.392422 kubelet[2680]: I1213 13:30:37.392328 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b4e200d-1707-49a7-a8de-c2dda3718c20-clustermesh-secrets\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.392422 kubelet[2680]: I1213 13:30:37.392344 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-xtables-lock\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.392422 kubelet[2680]: I1213 13:30:37.392359 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-cgroup\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.392422 kubelet[2680]: I1213 13:30:37.392378 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-config-path\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.392422 kubelet[2680]: I1213 13:30:37.392396 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-run\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: 
\"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.392845 kubelet[2680]: I1213 13:30:37.392413 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-host-proc-sys-kernel\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.392845 kubelet[2680]: I1213 13:30:37.392430 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-etc-cni-netd\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.392845 kubelet[2680]: I1213 13:30:37.392448 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cni-path\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.392845 kubelet[2680]: I1213 13:30:37.392465 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-bpf-maps\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.392845 kubelet[2680]: I1213 13:30:37.392491 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g895m\" (UniqueName: \"kubernetes.io/projected/7b4e200d-1707-49a7-a8de-c2dda3718c20-kube-api-access-g895m\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.392845 kubelet[2680]: I1213 13:30:37.392489 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:30:37.393039 kubelet[2680]: I1213 13:30:37.392510 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-hostproc\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.393039 kubelet[2680]: I1213 13:30:37.392526 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-lib-modules\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.393039 kubelet[2680]: I1213 13:30:37.392545 2680 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-host-proc-sys-net\") pod \"7b4e200d-1707-49a7-a8de-c2dda3718c20\" (UID: \"7b4e200d-1707-49a7-a8de-c2dda3718c20\") " Dec 13 13:30:37.393039 kubelet[2680]: I1213 13:30:37.392580 2680 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65dcdb5e-1c91-43a5-9edd-304ce97096b8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.393039 kubelet[2680]: I1213 13:30:37.392593 2680 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.393039 kubelet[2680]: I1213 13:30:37.392604 2680 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xhfzt\" (UniqueName: \"kubernetes.io/projected/65dcdb5e-1c91-43a5-9edd-304ce97096b8-kube-api-access-xhfzt\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.393239 kubelet[2680]: I1213 13:30:37.392634 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:30:37.393239 kubelet[2680]: I1213 13:30:37.392659 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:30:37.393239 kubelet[2680]: I1213 13:30:37.392764 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cni-path" (OuterVolumeSpecName: "cni-path") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:30:37.393239 kubelet[2680]: I1213 13:30:37.392802 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:30:37.393239 kubelet[2680]: I1213 13:30:37.392825 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:30:37.393408 kubelet[2680]: I1213 13:30:37.392849 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:30:37.396626 kubelet[2680]: I1213 13:30:37.396463 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:30:37.396626 kubelet[2680]: I1213 13:30:37.396518 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-hostproc" (OuterVolumeSpecName: "hostproc") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:30:37.396626 kubelet[2680]: I1213 13:30:37.396542 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:30:37.396815 kubelet[2680]: I1213 13:30:37.396731 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:30:37.396815 kubelet[2680]: I1213 13:30:37.396790 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4e200d-1707-49a7-a8de-c2dda3718c20-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 13:30:37.397205 kubelet[2680]: I1213 13:30:37.397182 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4e200d-1707-49a7-a8de-c2dda3718c20-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:30:37.397365 kubelet[2680]: I1213 13:30:37.397346 2680 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4e200d-1707-49a7-a8de-c2dda3718c20-kube-api-access-g895m" (OuterVolumeSpecName: "kube-api-access-g895m") pod "7b4e200d-1707-49a7-a8de-c2dda3718c20" (UID: "7b4e200d-1707-49a7-a8de-c2dda3718c20"). InnerVolumeSpecName "kube-api-access-g895m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:30:37.420571 kubelet[2680]: I1213 13:30:37.420551 2680 scope.go:117] "RemoveContainer" containerID="bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce" Dec 13 13:30:37.426265 containerd[1484]: time="2024-12-13T13:30:37.426193165Z" level=info msg="RemoveContainer for \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\"" Dec 13 13:30:37.426249 systemd[1]: Removed slice kubepods-besteffort-pod65dcdb5e_1c91_43a5_9edd_304ce97096b8.slice - libcontainer container kubepods-besteffort-pod65dcdb5e_1c91_43a5_9edd_304ce97096b8.slice. Dec 13 13:30:37.430519 systemd[1]: Removed slice kubepods-burstable-pod7b4e200d_1707_49a7_a8de_c2dda3718c20.slice - libcontainer container kubepods-burstable-pod7b4e200d_1707_49a7_a8de_c2dda3718c20.slice. Dec 13 13:30:37.430653 systemd[1]: kubepods-burstable-pod7b4e200d_1707_49a7_a8de_c2dda3718c20.slice: Consumed 6.969s CPU time. 
Dec 13 13:30:37.441540 containerd[1484]: time="2024-12-13T13:30:37.441488442Z" level=info msg="RemoveContainer for \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\" returns successfully" Dec 13 13:30:37.441807 kubelet[2680]: I1213 13:30:37.441779 2680 scope.go:117] "RemoveContainer" containerID="bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce" Dec 13 13:30:37.442075 containerd[1484]: time="2024-12-13T13:30:37.442030320Z" level=error msg="ContainerStatus for \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\": not found" Dec 13 13:30:37.442318 kubelet[2680]: E1213 13:30:37.442299 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\": not found" containerID="bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce" Dec 13 13:30:37.442405 kubelet[2680]: I1213 13:30:37.442324 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce"} err="failed to get container status \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd3ed65d39435b2aecc66fc5c76424108d1595bad28d9296ee64bac9fea08bce\": not found" Dec 13 13:30:37.442447 kubelet[2680]: I1213 13:30:37.442408 2680 scope.go:117] "RemoveContainer" containerID="be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0" Dec 13 13:30:37.443425 containerd[1484]: time="2024-12-13T13:30:37.443402767Z" level=info msg="RemoveContainer for \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\"" Dec 13 13:30:37.447536 containerd[1484]: time="2024-12-13T13:30:37.447489540Z" level=info msg="RemoveContainer for \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\" returns successfully" Dec 13 13:30:37.447753 kubelet[2680]: I1213 13:30:37.447730 2680 scope.go:117] "RemoveContainer" containerID="f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a" Dec 13 13:30:37.448977 containerd[1484]: time="2024-12-13T13:30:37.448610295Z" level=info msg="RemoveContainer for \"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a\"" Dec 13 13:30:37.452307 containerd[1484]: time="2024-12-13T13:30:37.452274188Z" level=info msg="RemoveContainer for \"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a\" returns successfully" Dec 13 13:30:37.452499 kubelet[2680]: I1213 13:30:37.452437 2680 scope.go:117] "RemoveContainer" containerID="65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e" Dec 13 13:30:37.453382 containerd[1484]: time="2024-12-13T13:30:37.453312466Z" level=info msg="RemoveContainer for \"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e\"" Dec 13 13:30:37.456587 containerd[1484]: time="2024-12-13T13:30:37.456556365Z" level=info msg="RemoveContainer for \"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e\" returns successfully" Dec 13 13:30:37.456692 kubelet[2680]: I1213 13:30:37.456670 2680 scope.go:117] "RemoveContainer" containerID="66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52" Dec 13 13:30:37.457637 containerd[1484]: 
time="2024-12-13T13:30:37.457610763Z" level=info msg="RemoveContainer for \"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52\"" Dec 13 13:30:37.460694 containerd[1484]: time="2024-12-13T13:30:37.460658527Z" level=info msg="RemoveContainer for \"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52\" returns successfully" Dec 13 13:30:37.460866 kubelet[2680]: I1213 13:30:37.460790 2680 scope.go:117] "RemoveContainer" containerID="b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403" Dec 13 13:30:37.462079 containerd[1484]: time="2024-12-13T13:30:37.462042416Z" level=info msg="RemoveContainer for \"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403\"" Dec 13 13:30:37.467432 containerd[1484]: time="2024-12-13T13:30:37.467390243Z" level=info msg="RemoveContainer for \"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403\" returns successfully" Dec 13 13:30:37.467592 kubelet[2680]: I1213 13:30:37.467560 2680 scope.go:117] "RemoveContainer" containerID="be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0" Dec 13 13:30:37.467765 containerd[1484]: time="2024-12-13T13:30:37.467734392Z" level=error msg="ContainerStatus for \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\": not found" Dec 13 13:30:37.467937 kubelet[2680]: E1213 13:30:37.467905 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\": not found" containerID="be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0" Dec 13 13:30:37.467998 kubelet[2680]: I1213 13:30:37.467945 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0"} err="failed to get container status \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"be26202caed8f64aab85e479eb87c3132fb2f98af30a2dfc9d3ea64d8ddd24c0\": not found" Dec 13 13:30:37.467998 kubelet[2680]: I1213 13:30:37.467978 2680 scope.go:117] "RemoveContainer" containerID="f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a" Dec 13 13:30:37.468235 containerd[1484]: time="2024-12-13T13:30:37.468190455Z" level=error msg="ContainerStatus for \"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a\": not found" Dec 13 13:30:37.468357 kubelet[2680]: E1213 13:30:37.468316 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a\": not found" containerID="f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a" Dec 13 13:30:37.468357 kubelet[2680]: I1213 13:30:37.468339 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a"} err="failed to get container status 
\"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2cc6c5811c9b8c0034e607afa7e9e5c2f9a4427c9cc86d407b9c83d3844e41a\": not found" Dec 13 13:30:37.468427 kubelet[2680]: I1213 13:30:37.468361 2680 scope.go:117] "RemoveContainer" containerID="65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e" Dec 13 13:30:37.468532 containerd[1484]: time="2024-12-13T13:30:37.468502532Z" level=error msg="ContainerStatus for \"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e\": not found" Dec 13 13:30:37.468647 kubelet[2680]: E1213 13:30:37.468627 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e\": not found" containerID="65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e" Dec 13 13:30:37.468682 kubelet[2680]: I1213 13:30:37.468653 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e"} err="failed to get container status \"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"65cbc46aad201f3080b16572bc4f6a322988131ecac9e30394de184f170b0d6e\": not found" Dec 13 13:30:37.468682 kubelet[2680]: I1213 13:30:37.468670 2680 scope.go:117] "RemoveContainer" containerID="66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52" Dec 13 13:30:37.468886 containerd[1484]: time="2024-12-13T13:30:37.468851801Z" level=error msg="ContainerStatus for \"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52\": not found" Dec 13 13:30:37.468982 kubelet[2680]: E1213 13:30:37.468961 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52\": not found" containerID="66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52" Dec 13 13:30:37.469019 kubelet[2680]: I1213 13:30:37.468986 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52"} err="failed to get container status \"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52\": rpc error: code = NotFound desc = an error occurred when try to find container \"66a1f50f3814ea5be9fb3e5ef8f744d80b69b4830848c3ee3b1d7f0f4460ec52\": not found" Dec 13 13:30:37.469019 kubelet[2680]: I1213 13:30:37.469010 2680 scope.go:117] "RemoveContainer" containerID="b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403" Dec 13 13:30:37.473601 containerd[1484]: time="2024-12-13T13:30:37.469151825Z" level=error msg="ContainerStatus for \"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403\": not found" Dec 13 13:30:37.473741 kubelet[2680]: E1213 13:30:37.473692 2680 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403\": not found" containerID="b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403" Dec 13 13:30:37.473782 kubelet[2680]: I1213 13:30:37.473737 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403"} err="failed to get container status \"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403\": rpc error: code = NotFound desc = an error occurred when try to find container \"b95398bba9eed5b7a6fbb312e284c9728a13b82382eb0a0d77706ecccfc6b403\": not found" Dec 13 13:30:37.493523 kubelet[2680]: I1213 13:30:37.493453 2680 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b4e200d-1707-49a7-a8de-c2dda3718c20-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493523 kubelet[2680]: I1213 13:30:37.493490 2680 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b4e200d-1707-49a7-a8de-c2dda3718c20-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493523 kubelet[2680]: I1213 13:30:37.493502 2680 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493523 kubelet[2680]: I1213 13:30:37.493513 2680 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493523 kubelet[2680]: I1213 13:30:37.493521 2680 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493523 kubelet[2680]: I1213 13:30:37.493532 2680 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493523 kubelet[2680]: I1213 13:30:37.493540 2680 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493913 kubelet[2680]: I1213 13:30:37.493549 2680 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493913 kubelet[2680]: I1213 13:30:37.493557 2680 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-g895m\" (UniqueName: \"kubernetes.io/projected/7b4e200d-1707-49a7-a8de-c2dda3718c20-kube-api-access-g895m\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493913 kubelet[2680]: I1213 13:30:37.493567 2680 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493913 kubelet[2680]: I1213 13:30:37.493577 2680 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493913 kubelet[2680]: I1213 13:30:37.493587 2680 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.493913 kubelet[2680]: I1213 13:30:37.493597 2680 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b4e200d-1707-49a7-a8de-c2dda3718c20-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 13:30:37.560424 kubelet[2680]: E1213 13:30:37.560375 2680 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 13:30:38.147276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d-rootfs.mount: Deactivated successfully. Dec 13 13:30:38.147391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ed5828d11bf9528ca3eedf608f4de4820db4349ac5c1f04aa99c3d3223e5f0d-shm.mount: Deactivated successfully. Dec 13 13:30:38.147474 systemd[1]: var-lib-kubelet-pods-65dcdb5e\x2d1c91\x2d43a5\x2d9edd\x2d304ce97096b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhfzt.mount: Deactivated successfully. Dec 13 13:30:38.147567 systemd[1]: var-lib-kubelet-pods-7b4e200d\x2d1707\x2d49a7\x2da8de\x2dc2dda3718c20-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg895m.mount: Deactivated successfully. Dec 13 13:30:38.147648 systemd[1]: var-lib-kubelet-pods-7b4e200d\x2d1707\x2d49a7\x2da8de\x2dc2dda3718c20-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 13:30:38.147742 systemd[1]: var-lib-kubelet-pods-7b4e200d\x2d1707\x2d49a7\x2da8de\x2dc2dda3718c20-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 13:30:38.512501 kubelet[2680]: I1213 13:30:38.512368 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65dcdb5e-1c91-43a5-9edd-304ce97096b8" path="/var/lib/kubelet/pods/65dcdb5e-1c91-43a5-9edd-304ce97096b8/volumes" Dec 13 13:30:38.513203 kubelet[2680]: I1213 13:30:38.513174 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b4e200d-1707-49a7-a8de-c2dda3718c20" path="/var/lib/kubelet/pods/7b4e200d-1707-49a7-a8de-c2dda3718c20/volumes" Dec 13 13:30:39.106887 sshd[4341]: Connection closed by 10.0.0.1 port 59462 Dec 13 13:30:39.107370 sshd-session[4339]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:39.126022 systemd[1]: sshd@26-10.0.0.99:22-10.0.0.1:59462.service: Deactivated successfully. Dec 13 13:30:39.128130 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 13:30:39.129909 systemd-logind[1471]: Session 27 logged out. Waiting for processes to exit. Dec 13 13:30:39.135034 systemd[1]: Started sshd@27-10.0.0.99:22-10.0.0.1:38324.service - OpenSSH per-connection server daemon (10.0.0.1:38324). Dec 13 13:30:39.136080 systemd-logind[1471]: Removed session 27. 
Dec 13 13:30:39.178388 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 38324 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:39.180118 sshd-session[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:39.185121 systemd-logind[1471]: New session 28 of user core. Dec 13 13:30:39.203995 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 13:30:39.789794 sshd[4506]: Connection closed by 10.0.0.1 port 38324 Dec 13 13:30:39.790591 sshd-session[4504]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:39.801553 kubelet[2680]: I1213 13:30:39.801490 2680 topology_manager.go:215] "Topology Admit Handler" podUID="d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09" podNamespace="kube-system" podName="cilium-mhw6z" Dec 13 13:30:39.801553 kubelet[2680]: E1213 13:30:39.801556 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b4e200d-1707-49a7-a8de-c2dda3718c20" containerName="apply-sysctl-overwrites" Dec 13 13:30:39.801553 kubelet[2680]: E1213 13:30:39.801566 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="65dcdb5e-1c91-43a5-9edd-304ce97096b8" containerName="cilium-operator" Dec 13 13:30:39.802144 kubelet[2680]: E1213 13:30:39.801576 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b4e200d-1707-49a7-a8de-c2dda3718c20" containerName="clean-cilium-state" Dec 13 13:30:39.802144 kubelet[2680]: E1213 13:30:39.801583 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b4e200d-1707-49a7-a8de-c2dda3718c20" containerName="cilium-agent" Dec 13 13:30:39.802144 kubelet[2680]: E1213 13:30:39.801591 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b4e200d-1707-49a7-a8de-c2dda3718c20" containerName="mount-cgroup" Dec 13 13:30:39.802144 kubelet[2680]: E1213 13:30:39.801597 2680 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b4e200d-1707-49a7-a8de-c2dda3718c20" containerName="mount-bpf-fs" Dec 13 13:30:39.803290 kubelet[2680]: I1213 13:30:39.802641 2680 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4e200d-1707-49a7-a8de-c2dda3718c20" containerName="cilium-agent" Dec 13 13:30:39.803290 kubelet[2680]: I1213 13:30:39.802663 2680 memory_manager.go:354] "RemoveStaleState removing state" podUID="65dcdb5e-1c91-43a5-9edd-304ce97096b8" containerName="cilium-operator" Dec 13 13:30:39.804844 systemd[1]: sshd@27-10.0.0.99:22-10.0.0.1:38324.service: Deactivated successfully. Dec 13 13:30:39.810137 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 13:30:39.811010 systemd-logind[1471]: Session 28 logged out. Waiting for processes to exit. Dec 13 13:30:39.819093 systemd-logind[1471]: Removed session 28. Dec 13 13:30:39.829163 systemd[1]: Started sshd@28-10.0.0.99:22-10.0.0.1:38332.service - OpenSSH per-connection server daemon (10.0.0.1:38332). Dec 13 13:30:39.837606 systemd[1]: Created slice kubepods-burstable-podd837f7ca_cfc2_470f_bbf8_b7f9fbb13d09.slice - libcontainer container kubepods-burstable-podd837f7ca_cfc2_470f_bbf8_b7f9fbb13d09.slice. Dec 13 13:30:39.875529 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 38332 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:30:39.877779 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:39.882978 systemd-logind[1471]: New session 29 of user core. Dec 13 13:30:39.889946 systemd[1]: Started session-29.scope - Session 29 of User core. 
Dec 13 13:30:39.906908 kubelet[2680]: I1213 13:30:39.906861 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-hostproc\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907207 kubelet[2680]: I1213 13:30:39.906918 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-cni-path\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907207 kubelet[2680]: I1213 13:30:39.906942 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-xtables-lock\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907207 kubelet[2680]: I1213 13:30:39.906961 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-host-proc-sys-net\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907207 kubelet[2680]: I1213 13:30:39.907008 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-cilium-ipsec-secrets\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907207 kubelet[2680]: I1213 13:30:39.907044 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-cilium-cgroup\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907207 kubelet[2680]: I1213 13:30:39.907077 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-etc-cni-netd\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907395 kubelet[2680]: I1213 13:30:39.907097 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-bpf-maps\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907395 kubelet[2680]: I1213 13:30:39.907119 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-cilium-config-path\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907395 kubelet[2680]: I1213 13:30:39.907145 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-cilium-run\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907395 kubelet[2680]: I1213 13:30:39.907165 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-clustermesh-secrets\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907395 kubelet[2680]: I1213 13:30:39.907185 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-hubble-tls\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907395 kubelet[2680]: I1213 13:30:39.907206 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt865\" (UniqueName: \"kubernetes.io/projected/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-kube-api-access-kt865\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907525 kubelet[2680]: I1213 13:30:39.907246 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-host-proc-sys-kernel\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.907525 kubelet[2680]: I1213 13:30:39.907273 2680 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09-lib-modules\") pod \"cilium-mhw6z\" (UID: \"d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09\") " pod="kube-system/cilium-mhw6z"
Dec 13 13:30:39.940540 sshd[4521]: Connection closed by 10.0.0.1 port 38332
Dec 13 13:30:39.941024 sshd-session[4518]: pam_unix(sshd:session): session closed for user core
Dec 13 13:30:39.952926 systemd[1]: sshd@28-10.0.0.99:22-10.0.0.1:38332.service: Deactivated successfully.
Dec 13 13:30:39.954668 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 13:30:39.956361 systemd-logind[1471]: Session 29 logged out. Waiting for processes to exit.
Dec 13 13:30:39.965260 systemd[1]: Started sshd@29-10.0.0.99:22-10.0.0.1:38334.service - OpenSSH per-connection server daemon (10.0.0.1:38334).
Dec 13 13:30:39.966448 systemd-logind[1471]: Removed session 29.
Dec 13 13:30:40.011388 sshd[4528]: Accepted publickey for core from 10.0.0.1 port 38334 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:30:40.014508 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:30:40.025081 systemd-logind[1471]: New session 30 of user core.
Dec 13 13:30:40.031843 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 13:30:40.142030 kubelet[2680]: E1213 13:30:40.141984 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:40.143039 containerd[1484]: time="2024-12-13T13:30:40.142957664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mhw6z,Uid:d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09,Namespace:kube-system,Attempt:0,}"
Dec 13 13:30:40.166743 containerd[1484]: time="2024-12-13T13:30:40.166614874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:30:40.166743 containerd[1484]: time="2024-12-13T13:30:40.166692343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:30:40.166743 containerd[1484]: time="2024-12-13T13:30:40.166726758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:30:40.166913 containerd[1484]: time="2024-12-13T13:30:40.166837029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:30:40.188973 systemd[1]: Started cri-containerd-6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d.scope - libcontainer container 6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d.
Dec 13 13:30:40.211774 containerd[1484]: time="2024-12-13T13:30:40.211728028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mhw6z,Uid:d837f7ca-cfc2-470f-bbf8-b7f9fbb13d09,Namespace:kube-system,Attempt:0,} returns sandbox id \"6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d\""
Dec 13 13:30:40.212574 kubelet[2680]: E1213 13:30:40.212533 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:40.214486 containerd[1484]: time="2024-12-13T13:30:40.214442897Z" level=info msg="CreateContainer within sandbox \"6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 13:30:40.492485 containerd[1484]: time="2024-12-13T13:30:40.492361016Z" level=info msg="CreateContainer within sandbox \"6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"faa4fd8c0aca1e9ff672a62f5f75657753b912b2a9f917ad603560c4895befc5\""
Dec 13 13:30:40.493024 containerd[1484]: time="2024-12-13T13:30:40.492995800Z" level=info msg="StartContainer for \"faa4fd8c0aca1e9ff672a62f5f75657753b912b2a9f917ad603560c4895befc5\""
Dec 13 13:30:40.522900 systemd[1]: Started cri-containerd-faa4fd8c0aca1e9ff672a62f5f75657753b912b2a9f917ad603560c4895befc5.scope - libcontainer container faa4fd8c0aca1e9ff672a62f5f75657753b912b2a9f917ad603560c4895befc5.
Dec 13 13:30:40.548827 containerd[1484]: time="2024-12-13T13:30:40.548771565Z" level=info msg="StartContainer for \"faa4fd8c0aca1e9ff672a62f5f75657753b912b2a9f917ad603560c4895befc5\" returns successfully"
Dec 13 13:30:40.559324 systemd[1]: cri-containerd-faa4fd8c0aca1e9ff672a62f5f75657753b912b2a9f917ad603560c4895befc5.scope: Deactivated successfully.
Dec 13 13:30:40.596200 containerd[1484]: time="2024-12-13T13:30:40.596134830Z" level=info msg="shim disconnected" id=faa4fd8c0aca1e9ff672a62f5f75657753b912b2a9f917ad603560c4895befc5 namespace=k8s.io
Dec 13 13:30:40.596200 containerd[1484]: time="2024-12-13T13:30:40.596190366Z" level=warning msg="cleaning up after shim disconnected" id=faa4fd8c0aca1e9ff672a62f5f75657753b912b2a9f917ad603560c4895befc5 namespace=k8s.io
Dec 13 13:30:40.596200 containerd[1484]: time="2024-12-13T13:30:40.596202209Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:30:41.435810 kubelet[2680]: E1213 13:30:41.435775 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:41.437478 containerd[1484]: time="2024-12-13T13:30:41.437408345Z" level=info msg="CreateContainer within sandbox \"6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 13:30:41.451844 containerd[1484]: time="2024-12-13T13:30:41.451797177Z" level=info msg="CreateContainer within sandbox \"6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"91fee3eb7bfe228ba9289ccff5d70035dd07af6b48b175ca45237916b2e13ffd\""
Dec 13 13:30:41.452242 containerd[1484]: time="2024-12-13T13:30:41.452192162Z" level=info msg="StartContainer for \"91fee3eb7bfe228ba9289ccff5d70035dd07af6b48b175ca45237916b2e13ffd\""
Dec 13 13:30:41.487945 systemd[1]: Started cri-containerd-91fee3eb7bfe228ba9289ccff5d70035dd07af6b48b175ca45237916b2e13ffd.scope - libcontainer container 91fee3eb7bfe228ba9289ccff5d70035dd07af6b48b175ca45237916b2e13ffd.
Dec 13 13:30:41.515897 containerd[1484]: time="2024-12-13T13:30:41.515849669Z" level=info msg="StartContainer for \"91fee3eb7bfe228ba9289ccff5d70035dd07af6b48b175ca45237916b2e13ffd\" returns successfully"
Dec 13 13:30:41.523445 systemd[1]: cri-containerd-91fee3eb7bfe228ba9289ccff5d70035dd07af6b48b175ca45237916b2e13ffd.scope: Deactivated successfully.
Dec 13 13:30:41.561430 containerd[1484]: time="2024-12-13T13:30:41.561340337Z" level=info msg="shim disconnected" id=91fee3eb7bfe228ba9289ccff5d70035dd07af6b48b175ca45237916b2e13ffd namespace=k8s.io
Dec 13 13:30:41.561430 containerd[1484]: time="2024-12-13T13:30:41.561404098Z" level=warning msg="cleaning up after shim disconnected" id=91fee3eb7bfe228ba9289ccff5d70035dd07af6b48b175ca45237916b2e13ffd namespace=k8s.io
Dec 13 13:30:41.561430 containerd[1484]: time="2024-12-13T13:30:41.561414709Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:30:42.014328 systemd[1]: run-containerd-runc-k8s.io-91fee3eb7bfe228ba9289ccff5d70035dd07af6b48b175ca45237916b2e13ffd-runc.Ceaauv.mount: Deactivated successfully.
Dec 13 13:30:42.014477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91fee3eb7bfe228ba9289ccff5d70035dd07af6b48b175ca45237916b2e13ffd-rootfs.mount: Deactivated successfully.
Dec 13 13:30:42.439116 kubelet[2680]: E1213 13:30:42.439056 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:42.441256 containerd[1484]: time="2024-12-13T13:30:42.441133504Z" level=info msg="CreateContainer within sandbox \"6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 13:30:42.459042 containerd[1484]: time="2024-12-13T13:30:42.458999936Z" level=info msg="CreateContainer within sandbox \"6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4463d7962c6ec9e41820d1cd11685b4a46f5600a72bca2cd8b767e515842653c\""
Dec 13 13:30:42.459891 containerd[1484]: time="2024-12-13T13:30:42.459522174Z" level=info msg="StartContainer for \"4463d7962c6ec9e41820d1cd11685b4a46f5600a72bca2cd8b767e515842653c\""
Dec 13 13:30:42.489907 systemd[1]: Started cri-containerd-4463d7962c6ec9e41820d1cd11685b4a46f5600a72bca2cd8b767e515842653c.scope - libcontainer container 4463d7962c6ec9e41820d1cd11685b4a46f5600a72bca2cd8b767e515842653c.
Dec 13 13:30:42.522607 containerd[1484]: time="2024-12-13T13:30:42.522542663Z" level=info msg="StartContainer for \"4463d7962c6ec9e41820d1cd11685b4a46f5600a72bca2cd8b767e515842653c\" returns successfully"
Dec 13 13:30:42.525594 systemd[1]: cri-containerd-4463d7962c6ec9e41820d1cd11685b4a46f5600a72bca2cd8b767e515842653c.scope: Deactivated successfully.
Dec 13 13:30:42.556516 containerd[1484]: time="2024-12-13T13:30:42.556434652Z" level=info msg="shim disconnected" id=4463d7962c6ec9e41820d1cd11685b4a46f5600a72bca2cd8b767e515842653c namespace=k8s.io
Dec 13 13:30:42.556516 containerd[1484]: time="2024-12-13T13:30:42.556498103Z" level=warning msg="cleaning up after shim disconnected" id=4463d7962c6ec9e41820d1cd11685b4a46f5600a72bca2cd8b767e515842653c namespace=k8s.io
Dec 13 13:30:42.556516 containerd[1484]: time="2024-12-13T13:30:42.556511268Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:30:42.561467 kubelet[2680]: E1213 13:30:42.561416 2680 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 13:30:43.013614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4463d7962c6ec9e41820d1cd11685b4a46f5600a72bca2cd8b767e515842653c-rootfs.mount: Deactivated successfully.
Dec 13 13:30:43.443261 kubelet[2680]: E1213 13:30:43.443225 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:43.445172 containerd[1484]: time="2024-12-13T13:30:43.445137569Z" level=info msg="CreateContainer within sandbox \"6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 13:30:43.473878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850995261.mount: Deactivated successfully.
Dec 13 13:30:43.475027 containerd[1484]: time="2024-12-13T13:30:43.474988636Z" level=info msg="CreateContainer within sandbox \"6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6ce49200578aa156568714e9e08ec9b3c6374a21d51145b5248af76547b878df\""
Dec 13 13:30:43.475462 containerd[1484]: time="2024-12-13T13:30:43.475437914Z" level=info msg="StartContainer for \"6ce49200578aa156568714e9e08ec9b3c6374a21d51145b5248af76547b878df\""
Dec 13 13:30:43.503836 systemd[1]: Started cri-containerd-6ce49200578aa156568714e9e08ec9b3c6374a21d51145b5248af76547b878df.scope - libcontainer container 6ce49200578aa156568714e9e08ec9b3c6374a21d51145b5248af76547b878df.
Dec 13 13:30:43.526863 systemd[1]: cri-containerd-6ce49200578aa156568714e9e08ec9b3c6374a21d51145b5248af76547b878df.scope: Deactivated successfully.
Dec 13 13:30:43.527994 containerd[1484]: time="2024-12-13T13:30:43.527957277Z" level=info msg="StartContainer for \"6ce49200578aa156568714e9e08ec9b3c6374a21d51145b5248af76547b878df\" returns successfully"
Dec 13 13:30:43.549373 containerd[1484]: time="2024-12-13T13:30:43.549309519Z" level=info msg="shim disconnected" id=6ce49200578aa156568714e9e08ec9b3c6374a21d51145b5248af76547b878df namespace=k8s.io
Dec 13 13:30:43.549373 containerd[1484]: time="2024-12-13T13:30:43.549372710Z" level=warning msg="cleaning up after shim disconnected" id=6ce49200578aa156568714e9e08ec9b3c6374a21d51145b5248af76547b878df namespace=k8s.io
Dec 13 13:30:43.549589 containerd[1484]: time="2024-12-13T13:30:43.549383901Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:30:44.014201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ce49200578aa156568714e9e08ec9b3c6374a21d51145b5248af76547b878df-rootfs.mount: Deactivated successfully.
Dec 13 13:30:44.446530 kubelet[2680]: E1213 13:30:44.446499 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:44.448914 containerd[1484]: time="2024-12-13T13:30:44.448878479Z" level=info msg="CreateContainer within sandbox \"6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 13:30:44.465235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544604731.mount: Deactivated successfully.
Dec 13 13:30:44.466668 containerd[1484]: time="2024-12-13T13:30:44.466626668Z" level=info msg="CreateContainer within sandbox \"6efa647b4ed4db9ab1ee90c7001b00044d3211644d88eb5d228f489133b6099d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bb556618d954e2e8fb8e16cf3e31b278861bb9d140fc14d0403b46f5ab231165\""
Dec 13 13:30:44.467388 containerd[1484]: time="2024-12-13T13:30:44.467337054Z" level=info msg="StartContainer for \"bb556618d954e2e8fb8e16cf3e31b278861bb9d140fc14d0403b46f5ab231165\""
Dec 13 13:30:44.497872 systemd[1]: Started cri-containerd-bb556618d954e2e8fb8e16cf3e31b278861bb9d140fc14d0403b46f5ab231165.scope - libcontainer container bb556618d954e2e8fb8e16cf3e31b278861bb9d140fc14d0403b46f5ab231165.
Dec 13 13:30:44.528638 containerd[1484]: time="2024-12-13T13:30:44.528461377Z" level=info msg="StartContainer for \"bb556618d954e2e8fb8e16cf3e31b278861bb9d140fc14d0403b46f5ab231165\" returns successfully"
Dec 13 13:30:44.930743 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 13:30:44.962735 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9
Dec 13 13:30:44.980728 kernel: DRBG: Continuing without Jitter RNG
Dec 13 13:30:45.243023 kubelet[2680]: I1213 13:30:45.242890 2680 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T13:30:45Z","lastTransitionTime":"2024-12-13T13:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 13:30:45.451174 kubelet[2680]: E1213 13:30:45.451123 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:45.484501 kubelet[2680]: I1213 13:30:45.484400 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mhw6z" podStartSLOduration=6.484380128 podStartE2EDuration="6.484380128s" podCreationTimestamp="2024-12-13 13:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:30:45.484252064 +0000 UTC m=+93.345409636" watchObservedRunningTime="2024-12-13 13:30:45.484380128 +0000 UTC m=+93.345537700"
Dec 13 13:30:46.265788 systemd[1]: run-containerd-runc-k8s.io-bb556618d954e2e8fb8e16cf3e31b278861bb9d140fc14d0403b46f5ab231165-runc.ltrmc3.mount: Deactivated successfully.
Dec 13 13:30:46.452502 kubelet[2680]: E1213 13:30:46.452468 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:47.454524 kubelet[2680]: E1213 13:30:47.454479 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:47.860472 systemd-networkd[1427]: lxc_health: Link UP
Dec 13 13:30:47.868014 systemd-networkd[1427]: lxc_health: Gained carrier
Dec 13 13:30:48.456814 kubelet[2680]: E1213 13:30:48.456772 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:48.515737 kubelet[2680]: E1213 13:30:48.511471 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:48.515737 kubelet[2680]: E1213 13:30:48.511508 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:49.358852 systemd-networkd[1427]: lxc_health: Gained IPv6LL
Dec 13 13:30:49.457825 kubelet[2680]: E1213 13:30:49.457794 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:49.510952 kubelet[2680]: E1213 13:30:49.510488 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:50.459524 kubelet[2680]: E1213 13:30:50.459477 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:52.510571 kubelet[2680]: E1213 13:30:52.510510 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:53.510517 kubelet[2680]: E1213 13:30:53.510476 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:30:54.768376 sshd[4535]: Connection closed by 10.0.0.1 port 38334
Dec 13 13:30:54.768871 sshd-session[4528]: pam_unix(sshd:session): session closed for user core
Dec 13 13:30:54.772516 systemd[1]: sshd@29-10.0.0.99:22-10.0.0.1:38334.service: Deactivated successfully.
Dec 13 13:30:54.774439 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 13:30:54.775065 systemd-logind[1471]: Session 30 logged out. Waiting for processes to exit.
Dec 13 13:30:54.775918 systemd-logind[1471]: Removed session 30.