Dec 13 01:04:51.937101 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:04:51.937123 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:04:51.937148 kernel: BIOS-provided physical RAM map:
Dec 13 01:04:51.937155 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:04:51.937161 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:04:51.937176 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:04:51.937193 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 01:04:51.937207 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 01:04:51.937215 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:04:51.937242 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:04:51.937269 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:04:51.937285 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:04:51.937291 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 01:04:51.937306 kernel: NX (Execute Disable) protection: active
Dec 13 01:04:51.937323 kernel: APIC: Static calls initialized
Dec 13 01:04:51.937337 kernel: SMBIOS 2.8 present.
Dec 13 01:04:51.937344 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 01:04:51.937351 kernel: Hypervisor detected: KVM
Dec 13 01:04:51.937358 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:04:51.937365 kernel: kvm-clock: using sched offset of 2840344320 cycles
Dec 13 01:04:51.937372 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:04:51.937379 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:04:51.937386 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:04:51.937394 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:04:51.937404 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 01:04:51.937411 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:04:51.937418 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:04:51.937425 kernel: Using GB pages for direct mapping
Dec 13 01:04:51.937432 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:04:51.937439 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 01:04:51.937461 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:04:51.937468 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:04:51.937475 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:04:51.937485 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 01:04:51.937492 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:04:51.937499 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:04:51.937506 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:04:51.937513 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:04:51.937520 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 01:04:51.937527 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 01:04:51.937541 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 01:04:51.937551 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 01:04:51.937558 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 01:04:51.937565 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 01:04:51.937572 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 01:04:51.937580 kernel: No NUMA configuration found
Dec 13 01:04:51.937587 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 01:04:51.937597 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 01:04:51.937604 kernel: Zone ranges:
Dec 13 01:04:51.937611 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:04:51.937618 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 01:04:51.937626 kernel: Normal empty
Dec 13 01:04:51.937633 kernel: Movable zone start for each node
Dec 13 01:04:51.937640 kernel: Early memory node ranges
Dec 13 01:04:51.937647 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:04:51.937654 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 01:04:51.937661 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 01:04:51.937674 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:04:51.937681 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:04:51.937688 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:04:51.937695 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:04:51.937702 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:04:51.937710 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:04:51.937717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:04:51.937724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:04:51.937731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:04:51.937741 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:04:51.937748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:04:51.937755 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:04:51.937763 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:04:51.937770 kernel: TSC deadline timer available
Dec 13 01:04:51.937777 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:04:51.937784 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:04:51.937798 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:04:51.937808 kernel: kvm-guest: setup PV sched yield
Dec 13 01:04:51.937818 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:04:51.937825 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:04:51.937833 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:04:51.937840 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:04:51.937847 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:04:51.937854 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:04:51.937861 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:04:51.937868 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:04:51.937875 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:04:51.937887 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:04:51.937894 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:04:51.937901 kernel: random: crng init done
Dec 13 01:04:51.937909 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:04:51.937916 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:04:51.937923 kernel: Fallback order for Node 0: 0
Dec 13 01:04:51.937930 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 01:04:51.937937 kernel: Policy zone: DMA32
Dec 13 01:04:51.937947 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:04:51.937955 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Dec 13 01:04:51.937962 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:04:51.937970 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:04:51.937977 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:04:51.937984 kernel: Dynamic Preempt: voluntary
Dec 13 01:04:51.937991 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:04:51.937999 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:04:51.938006 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:04:51.938016 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:04:51.938024 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:04:51.938031 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:04:51.938041 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:04:51.938048 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:04:51.938055 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:04:51.938062 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:04:51.938069 kernel: Console: colour VGA+ 80x25
Dec 13 01:04:51.938076 kernel: printk: console [ttyS0] enabled
Dec 13 01:04:51.938086 kernel: ACPI: Core revision 20230628
Dec 13 01:04:51.938094 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:04:51.938101 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:04:51.938108 kernel: x2apic enabled
Dec 13 01:04:51.938115 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:04:51.938122 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:04:51.938130 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:04:51.938137 kernel: kvm-guest: setup PV IPIs
Dec 13 01:04:51.938156 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:04:51.938163 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:04:51.938171 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:04:51.938178 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:04:51.938188 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:04:51.938196 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:04:51.938203 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:04:51.938211 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:04:51.938219 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:04:51.938229 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:04:51.938236 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:04:51.938246 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:04:51.938253 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:04:51.938261 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:04:51.938269 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:04:51.938277 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:04:51.938284 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:04:51.938294 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:04:51.938302 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:04:51.938309 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:04:51.938317 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:04:51.938324 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:04:51.938332 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:04:51.938339 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:04:51.938347 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:04:51.938354 kernel: landlock: Up and running.
Dec 13 01:04:51.938365 kernel: SELinux: Initializing.
Dec 13 01:04:51.938372 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:04:51.938380 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:04:51.938387 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:04:51.938395 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:04:51.938403 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:04:51.938412 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:04:51.938420 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:04:51.938427 kernel: ... version: 0
Dec 13 01:04:51.938438 kernel: ... bit width: 48
Dec 13 01:04:51.938645 kernel: ... generic registers: 6
Dec 13 01:04:51.938663 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:04:51.938671 kernel: ... max period: 00007fffffffffff
Dec 13 01:04:51.938678 kernel: ... fixed-purpose events: 0
Dec 13 01:04:51.938686 kernel: ... event mask: 000000000000003f
Dec 13 01:04:51.938693 kernel: signal: max sigframe size: 1776
Dec 13 01:04:51.938701 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:04:51.938709 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:04:51.938720 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:04:51.938727 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:04:51.938735 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:04:51.938742 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:04:51.938750 kernel: smpboot: Max logical packages: 1
Dec 13 01:04:51.938758 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:04:51.938765 kernel: devtmpfs: initialized
Dec 13 01:04:51.938773 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:04:51.938780 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:04:51.938798 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:04:51.938806 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:04:51.938813 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:04:51.938821 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:04:51.938828 kernel: audit: type=2000 audit(1734051891.800:1): state=initialized audit_enabled=0 res=1
Dec 13 01:04:51.938836 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:04:51.938843 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:04:51.938851 kernel: cpuidle: using governor menu
Dec 13 01:04:51.938858 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:04:51.938868 kernel: dca service started, version 1.12.1
Dec 13 01:04:51.938876 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:04:51.938884 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:04:51.938891 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:04:51.938899 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:04:51.938915 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:04:51.938925 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:04:51.938941 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:04:51.938950 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:04:51.938961 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:04:51.938969 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:04:51.938976 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:04:51.938984 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:04:51.938991 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:04:51.938999 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:04:51.939006 kernel: ACPI: Interpreter enabled
Dec 13 01:04:51.939014 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:04:51.939021 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:04:51.939032 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:04:51.939039 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:04:51.939047 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:04:51.939054 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:04:51.939281 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:04:51.939417 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:04:51.939563 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:04:51.939574 kernel: PCI host bridge to bus 0000:00
Dec 13 01:04:51.939718 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:04:51.939847 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:04:51.939964 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:04:51.940132 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:04:51.940288 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:04:51.940404 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:04:51.940545 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:04:51.940694 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:04:51.940839 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:04:51.940971 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 01:04:51.941142 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 01:04:51.941268 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 01:04:51.941393 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:04:51.941566 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:04:51.941691 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 01:04:51.941825 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 01:04:51.941951 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 01:04:51.942094 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:04:51.942223 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 01:04:51.942348 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 01:04:51.942510 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 01:04:51.942653 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:04:51.942785 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 01:04:51.942925 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 01:04:51.943051 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 01:04:51.943178 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 01:04:51.943314 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:04:51.943463 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:04:51.943618 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:04:51.943747 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 01:04:51.943886 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 01:04:51.944024 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:04:51.944150 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:04:51.944166 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:04:51.944173 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:04:51.944181 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:04:51.944189 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:04:51.944196 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:04:51.944204 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:04:51.944211 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:04:51.944219 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:04:51.944226 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:04:51.944237 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:04:51.944244 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:04:51.944252 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:04:51.944259 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:04:51.944267 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:04:51.944274 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:04:51.944282 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:04:51.944289 kernel: iommu: Default domain type: Translated
Dec 13 01:04:51.944297 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:04:51.944307 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:04:51.944314 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:04:51.944322 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:04:51.944329 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 01:04:51.944478 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:04:51.944612 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:04:51.944738 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:04:51.944749 kernel: vgaarb: loaded
Dec 13 01:04:51.944761 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:04:51.944768 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:04:51.944776 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:04:51.944783 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:04:51.944799 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:04:51.944807 kernel: pnp: PnP ACPI init
Dec 13 01:04:51.944953 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:04:51.944965 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:04:51.944972 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:04:51.944984 kernel: NET: Registered PF_INET protocol family
Dec 13 01:04:51.944992 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:04:51.944999 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:04:51.945007 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:04:51.945015 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:04:51.945022 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:04:51.945030 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:04:51.945037 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:04:51.945047 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:04:51.945055 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:04:51.945062 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:04:51.945179 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:04:51.945296 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:04:51.945411 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:04:51.945640 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:04:51.945756 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:04:51.945879 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:04:51.945896 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:04:51.945903 kernel: Initialise system trusted keyrings
Dec 13 01:04:51.945911 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:04:51.945918 kernel: Key type asymmetric registered
Dec 13 01:04:51.945926 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:04:51.945934 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:04:51.945941 kernel: io scheduler mq-deadline registered
Dec 13 01:04:51.945949 kernel: io scheduler kyber registered
Dec 13 01:04:51.945956 kernel: io scheduler bfq registered
Dec 13 01:04:51.945967 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:04:51.945975 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:04:51.945983 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:04:51.945990 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:04:51.945998 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:04:51.946005 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:04:51.946013 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:04:51.946021 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:04:51.946028 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:04:51.946158 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:04:51.946276 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:04:51.946286 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 13 01:04:51.946401 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:04:51 UTC (1734051891)
Dec 13 01:04:51.946539 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:04:51.946550 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:04:51.946558 kernel: hpet: Lost 1 RTC interrupts
Dec 13 01:04:51.946565 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:04:51.946577 kernel: Segment Routing with IPv6
Dec 13 01:04:51.946585 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:04:51.946592 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:04:51.946599 kernel: Key type dns_resolver registered
Dec 13 01:04:51.946607 kernel: IPI shorthand broadcast: enabled
Dec 13 01:04:51.946614 kernel: sched_clock: Marking stable (786003418, 118089692)->(979866400, -75773290)
Dec 13 01:04:51.946622 kernel: registered taskstats version 1
Dec 13 01:04:51.946630 kernel: Loading compiled-in X.509 certificates
Dec 13 01:04:51.946637 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:04:51.946647 kernel: Key type .fscrypt registered
Dec 13 01:04:51.946655 kernel: Key type fscrypt-provisioning registered
Dec 13 01:04:51.946662 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:04:51.946670 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:04:51.946677 kernel: ima: No architecture policies found
Dec 13 01:04:51.946685 kernel: clk: Disabling unused clocks
Dec 13 01:04:51.946692 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:04:51.946700 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:04:51.946710 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:04:51.946718 kernel: Run /init as init process
Dec 13 01:04:51.946725 kernel: with arguments:
Dec 13 01:04:51.946733 kernel: /init
Dec 13 01:04:51.946740 kernel: with environment:
Dec 13 01:04:51.946747 kernel: HOME=/
Dec 13 01:04:51.946755 kernel: TERM=linux
Dec 13 01:04:51.946762 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:04:51.946771 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:04:51.946783 systemd[1]: Detected virtualization kvm.
Dec 13 01:04:51.946800 systemd[1]: Detected architecture x86-64.
Dec 13 01:04:51.946808 systemd[1]: Running in initrd.
Dec 13 01:04:51.946817 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:04:51.946824 systemd[1]: Hostname set to .
Dec 13 01:04:51.946832 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:04:51.946840 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:04:51.946848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:04:51.946860 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:04:51.946884 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:04:51.946896 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:04:51.946904 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:04:51.946913 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:04:51.946925 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:04:51.946934 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:04:51.946942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:04:51.946950 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:04:51.946958 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:04:51.946966 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:04:51.946976 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:04:51.946987 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:04:51.947001 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:04:51.947010 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:04:51.947018 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:04:51.947026 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:04:51.947035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:04:51.947043 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:04:51.947051 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:04:51.947059 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:04:51.947070 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:04:51.947079 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:04:51.947087 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:04:51.947095 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:04:51.947103 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:04:51.947111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:04:51.947120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:04:51.947128 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:04:51.947136 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:04:51.947147 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:04:51.947174 systemd-journald[193]: Collecting audit messages is disabled.
Dec 13 01:04:51.947196 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:04:51.947208 systemd-journald[193]: Journal started
Dec 13 01:04:51.947228 systemd-journald[193]: Runtime Journal (/run/log/journal/2999669f8aa2480bb3d4306bb837eaa3) is 6.0M, max 48.4M, 42.3M free.
Dec 13 01:04:51.938963 systemd-modules-load[194]: Inserted module 'overlay'
Dec 13 01:04:51.972167 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:04:51.972194 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:04:51.972207 kernel: Bridge firewalling registered
Dec 13 01:04:51.965805 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 13 01:04:51.972476 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:04:51.976371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:04:51.988620 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:04:51.989594 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:04:51.992636 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:04:52.004242 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:04:52.005473 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:04:52.019676 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:04:52.020225 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:04:52.023015 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:04:52.027242 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:04:52.030329 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:04:52.041101 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:04:52.045229 dracut-cmdline[225]: dracut-dracut-053
Dec 13 01:04:52.049265 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:04:52.064200 systemd-resolved[226]: Positive Trust Anchors:
Dec 13 01:04:52.064219 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:04:52.064249 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:04:52.067118 systemd-resolved[226]: Defaulting to hostname 'linux'.
Dec 13 01:04:52.068376 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:04:52.073820 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:04:52.154484 kernel: SCSI subsystem initialized
Dec 13 01:04:52.164472 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:04:52.175474 kernel: iscsi: registered transport (tcp)
Dec 13 01:04:52.196738 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:04:52.196780 kernel: QLogic iSCSI HBA Driver
Dec 13 01:04:52.252179 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:04:52.263627 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:04:52.287926 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:04:52.287974 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:04:52.288981 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:04:52.329478 kernel: raid6: avx2x4 gen() 30203 MB/s
Dec 13 01:04:52.346467 kernel: raid6: avx2x2 gen() 31504 MB/s
Dec 13 01:04:52.363547 kernel: raid6: avx2x1 gen() 26053 MB/s
Dec 13 01:04:52.363562 kernel: raid6: using algorithm avx2x2 gen() 31504 MB/s
Dec 13 01:04:52.381558 kernel: raid6: .... xor() 20007 MB/s, rmw enabled
Dec 13 01:04:52.381578 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:04:52.401472 kernel: xor: automatically using best checksumming function avx
Dec 13 01:04:52.554482 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:04:52.567550 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:04:52.575633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:04:52.588236 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Dec 13 01:04:52.592958 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:04:52.601574 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:04:52.616074 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Dec 13 01:04:52.648142 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:04:52.665642 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:04:52.729439 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:04:52.742664 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:04:52.757471 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 01:04:52.789701 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:04:52.789889 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:04:52.789906 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:04:52.789923 kernel: GPT:9289727 != 19775487
Dec 13 01:04:52.789952 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:04:52.790164 kernel: GPT:9289727 != 19775487
Dec 13 01:04:52.790187 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:04:52.790205 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:04:52.759407 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:04:52.762278 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:04:52.764138 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:04:52.765518 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:04:52.773647 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:04:52.785522 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:04:52.806521 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:04:52.810480 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:04:52.810509 kernel: libata version 3.00 loaded.
Dec 13 01:04:52.810894 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:04:52.812351 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:04:52.816941 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:04:52.823179 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (457)
Dec 13 01:04:52.820307 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:04:52.827663 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465)
Dec 13 01:04:52.827685 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:04:52.845361 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:04:52.845378 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:04:52.845569 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:04:52.845720 kernel: scsi host0: ahci
Dec 13 01:04:52.845900 kernel: scsi host1: ahci
Dec 13 01:04:52.846054 kernel: scsi host2: ahci
Dec 13 01:04:52.846204 kernel: scsi host3: ahci
Dec 13 01:04:52.846354 kernel: scsi host4: ahci
Dec 13 01:04:52.846544 kernel: scsi host5: ahci
Dec 13 01:04:52.846712 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Dec 13 01:04:52.846730 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Dec 13 01:04:52.846740 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Dec 13 01:04:52.846754 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Dec 13 01:04:52.846765 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Dec 13 01:04:52.846775 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Dec 13 01:04:52.820439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:04:52.826269 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:04:52.835737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:04:52.851421 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:04:52.858539 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:04:52.891168 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:04:52.905130 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:04:52.906508 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:04:52.914190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:04:52.923659 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:04:52.926813 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:04:52.934314 disk-uuid[551]: Primary Header is updated.
Dec 13 01:04:52.934314 disk-uuid[551]: Secondary Entries is updated.
Dec 13 01:04:52.934314 disk-uuid[551]: Secondary Header is updated.
Dec 13 01:04:52.938479 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:04:52.953767 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:04:53.152486 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:04:53.152567 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:04:53.174463 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:04:53.174486 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:04:53.174497 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:04:53.175472 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:04:53.176560 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:04:53.176575 kernel: ata3.00: applying bridge limits
Dec 13 01:04:53.177646 kernel: ata3.00: configured for UDMA/100
Dec 13 01:04:53.178478 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:04:53.223482 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:04:53.236237 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:04:53.236252 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:04:53.947210 disk-uuid[554]: The operation has completed successfully.
Dec 13 01:04:53.948519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:04:53.982423 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:04:53.982566 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:04:53.997591 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:04:54.002805 sh[592]: Success
Dec 13 01:04:54.016479 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:04:54.053888 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:04:54.067057 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:04:54.069744 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:04:54.083177 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:04:54.083206 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:04:54.083218 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:04:54.084194 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:04:54.084929 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:04:54.089703 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:04:54.092082 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:04:54.107651 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:04:54.108912 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:04:54.119164 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:04:54.119193 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:04:54.119205 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:04:54.123489 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:04:54.132849 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:04:54.134386 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:04:54.144372 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:04:54.152692 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:04:54.208642 ignition[686]: Ignition 2.19.0
Dec 13 01:04:54.208655 ignition[686]: Stage: fetch-offline
Dec 13 01:04:54.208695 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:04:54.208709 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:04:54.208826 ignition[686]: parsed url from cmdline: ""
Dec 13 01:04:54.208832 ignition[686]: no config URL provided
Dec 13 01:04:54.208839 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:04:54.208852 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:04:54.208888 ignition[686]: op(1): [started] loading QEMU firmware config module
Dec 13 01:04:54.208895 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:04:54.218773 ignition[686]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:04:54.251959 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:04:54.263060 ignition[686]: parsing config with SHA512: 07c5bb9fe1ace17b8808f6741fc9c9fdb9a700d891d4b0cba1e6916e458823be9c83ea16c56a7470c998b344f396e5e8c1cf80ea6032473968f1ec4dffe02962
Dec 13 01:04:54.266614 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:04:54.269747 unknown[686]: fetched base config from "system"
Dec 13 01:04:54.269765 unknown[686]: fetched user config from "qemu"
Dec 13 01:04:54.272195 ignition[686]: fetch-offline: fetch-offline passed
Dec 13 01:04:54.273219 ignition[686]: Ignition finished successfully
Dec 13 01:04:54.275419 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:04:54.291244 systemd-networkd[780]: lo: Link UP
Dec 13 01:04:54.291256 systemd-networkd[780]: lo: Gained carrier
Dec 13 01:04:54.293227 systemd-networkd[780]: Enumeration completed
Dec 13 01:04:54.293327 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:04:54.293751 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:04:54.293757 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:04:54.295685 systemd-networkd[780]: eth0: Link UP
Dec 13 01:04:54.295690 systemd-networkd[780]: eth0: Gained carrier
Dec 13 01:04:54.295701 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:04:54.295833 systemd[1]: Reached target network.target - Network.
Dec 13 01:04:54.297758 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:04:54.307595 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:04:54.313525 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:04:54.323158 ignition[783]: Ignition 2.19.0
Dec 13 01:04:54.323171 ignition[783]: Stage: kargs
Dec 13 01:04:54.323336 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:04:54.323347 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:04:54.324190 ignition[783]: kargs: kargs passed
Dec 13 01:04:54.324232 ignition[783]: Ignition finished successfully
Dec 13 01:04:54.327003 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:04:54.339584 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:04:54.354279 ignition[793]: Ignition 2.19.0
Dec 13 01:04:54.354291 ignition[793]: Stage: disks
Dec 13 01:04:54.354482 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:04:54.354494 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:04:54.355349 ignition[793]: disks: disks passed
Dec 13 01:04:54.357552 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:04:54.355397 ignition[793]: Ignition finished successfully
Dec 13 01:04:54.358978 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:04:54.360505 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:04:54.362792 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:04:54.363834 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:04:54.365582 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:04:54.376652 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:04:54.389740 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:04:54.396194 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:04:54.405561 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:04:54.497469 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:04:54.498269 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:04:54.500491 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:04:54.517540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:04:54.519292 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:04:54.520476 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:04:54.520515 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:04:54.531379 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
Dec 13 01:04:54.531398 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:04:54.531409 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:04:54.531420 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:04:54.520536 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:04:54.534611 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:04:54.526975 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:04:54.532236 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:04:54.536394 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:04:54.566954 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:04:54.572003 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:04:54.576030 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:04:54.581005 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:04:54.662585 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:04:54.671596 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:04:54.672633 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:04:54.683477 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:04:54.697291 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:04:54.705276 ignition[926]: INFO : Ignition 2.19.0
Dec 13 01:04:54.705276 ignition[926]: INFO : Stage: mount
Dec 13 01:04:54.707240 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:04:54.707240 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:04:54.707240 ignition[926]: INFO : mount: mount passed
Dec 13 01:04:54.707240 ignition[926]: INFO : Ignition finished successfully
Dec 13 01:04:54.713210 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:04:54.725565 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:04:55.082647 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:04:55.095598 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:04:55.110982 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
Dec 13 01:04:55.111010 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:04:55.111029 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:04:55.112463 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:04:55.115472 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:04:55.116240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:04:55.136666 ignition[955]: INFO : Ignition 2.19.0
Dec 13 01:04:55.136666 ignition[955]: INFO : Stage: files
Dec 13 01:04:55.138303 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:04:55.138303 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:04:55.138303 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:04:55.142025 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:04:55.142025 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:04:55.142025 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:04:55.142025 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:04:55.147596 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:04:55.147596 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:04:55.147596 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:04:55.142133 unknown[955]: wrote ssh authorized keys file for user: core
Dec 13 01:04:55.181701 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:04:55.287354 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:04:55.287354 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:04:55.291336 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 01:04:55.738645 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:04:55.950317 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:04:55.950317 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:04:55.954055 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:04:55.954055 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:04:55.957562 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:04:55.957562 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:04:55.961116 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:04:55.961116 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:04:55.964681 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:04:55.966620 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:04:55.968600 systemd-networkd[780]: eth0: Gained IPv6LL
Dec 13 01:04:55.970106 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:04:55.970106 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:04:55.970106 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:04:55.970106 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:04:55.970106 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 01:04:56.271393 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:04:56.577069 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:04:56.577069 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:04:56.581207 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:04:56.583337 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:04:56.583337 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:04:56.583337 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 01:04:56.587628 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:04:56.589483 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:04:56.589483 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 01:04:56.589483 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:04:56.614674 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:04:56.621755 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:04:56.623402 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:04:56.623402 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:04:56.637217 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:04:56.638647 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:04:56.640431 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:04:56.640431 ignition[955]: INFO : files: files passed
Dec 13 01:04:56.643178 ignition[955]: INFO : Ignition finished successfully
Dec 13 01:04:56.643479 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:04:56.655577 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:04:56.659362 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:04:56.662535 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:04:56.662664 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:04:56.675739 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:04:56.680097 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:04:56.680097 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:04:56.683559 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:04:56.685221 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:04:56.687075 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:04:56.695646 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:04:56.723762 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:04:56.723918 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:04:56.739572 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:04:56.741318 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:04:56.743495 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:04:56.744715 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:04:56.764719 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:04:56.777656 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:04:56.788055 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:04:56.806825 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:04:56.809025 systemd[1]: Stopped target timers.target - Timer Units. 
Dec 13 01:04:56.811061 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:04:56.811182 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:04:56.813497 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:04:56.815288 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:04:56.817490 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:04:56.819823 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:04:56.821973 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:04:56.824159 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:04:56.826343 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:04:56.828666 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:04:56.830739 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:04:56.832971 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:04:56.834752 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:04:56.834900 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:04:56.837036 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:04:56.838737 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:04:56.840828 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:04:56.840952 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:04:56.843309 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:04:56.843421 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:04:56.845760 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Dec 13 01:04:56.845884 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:04:56.847927 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:04:56.849651 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:04:56.853534 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:04:56.855339 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:04:56.857353 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:04:56.859168 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:04:56.859300 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:04:56.861189 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:04:56.861282 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:04:56.863647 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:04:56.863760 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:04:56.865685 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:04:56.865802 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:04:56.875651 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:04:56.877759 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:04:56.879063 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:04:56.879223 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:04:56.881950 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:04:56.882102 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 13 01:04:56.892046 ignition[1009]: INFO : Ignition 2.19.0 Dec 13 01:04:56.892046 ignition[1009]: INFO : Stage: umount Dec 13 01:04:56.892046 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:04:56.892046 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:04:56.892046 ignition[1009]: INFO : umount: umount passed Dec 13 01:04:56.892046 ignition[1009]: INFO : Ignition finished successfully Dec 13 01:04:56.888694 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:04:56.888878 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:04:56.892349 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:04:56.892517 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:04:56.896738 systemd[1]: Stopped target network.target - Network. Dec 13 01:04:56.899209 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:04:56.899265 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:04:56.901375 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:04:56.901426 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:04:56.903653 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:04:56.903703 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:04:56.906179 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:04:56.906231 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:04:56.906657 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:04:56.907049 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:04:56.908347 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 13 01:04:56.910101 systemd-networkd[780]: eth0: DHCPv6 lease lost Dec 13 01:04:56.912369 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:04:56.912510 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:04:56.914498 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:04:56.914618 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:04:56.918355 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:04:56.918418 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:04:56.924547 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:04:56.926002 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:04:56.926057 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:04:56.928691 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:04:56.928745 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:04:56.931271 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:04:56.931320 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:04:56.933769 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:04:56.933829 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:04:56.935422 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:04:56.947219 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:04:56.947378 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:04:56.955326 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:04:56.955529 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 13 01:04:56.960172 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:04:56.960225 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:04:56.962096 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:04:56.962144 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:04:56.964159 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:04:56.964210 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:04:56.966698 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:04:56.966763 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:04:56.968809 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:04:56.968873 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:04:56.982616 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:04:56.984260 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:04:56.984320 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:04:56.986588 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:04:56.986641 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:04:56.989130 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:04:56.989181 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:04:56.990508 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:04:56.990571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:04:56.993108 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Dec 13 01:04:56.993224 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:04:57.071338 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:04:57.071504 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:04:57.072480 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:04:57.074392 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:04:57.074460 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:04:57.087592 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:04:57.097866 systemd[1]: Switching root. Dec 13 01:04:57.134348 systemd-journald[193]: Journal stopped Dec 13 01:04:58.259623 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Dec 13 01:04:58.259694 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:04:58.259713 kernel: SELinux: policy capability open_perms=1 Dec 13 01:04:58.259725 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:04:58.259737 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:04:58.259748 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:04:58.259760 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:04:58.259778 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:04:58.259791 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:04:58.259802 kernel: audit: type=1403 audit(1734051897.505:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:04:58.259818 systemd[1]: Successfully loaded SELinux policy in 46.119ms. Dec 13 01:04:58.259844 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.243ms. 
Dec 13 01:04:58.259859 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:04:58.259872 systemd[1]: Detected virtualization kvm. Dec 13 01:04:58.259884 systemd[1]: Detected architecture x86-64. Dec 13 01:04:58.259897 systemd[1]: Detected first boot. Dec 13 01:04:58.259909 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:04:58.259922 zram_generator::config[1053]: No configuration found. Dec 13 01:04:58.259938 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:04:58.259950 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:04:58.259968 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:04:58.259981 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:04:58.259995 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:04:58.260007 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:04:58.260020 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:04:58.260038 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:04:58.260053 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:04:58.260065 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:04:58.260077 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:04:58.260090 systemd[1]: Created slice user.slice - User and Session Slice. 
Dec 13 01:04:58.260102 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:04:58.260115 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:04:58.260127 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:04:58.260140 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:04:58.260152 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:04:58.260168 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:04:58.260180 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:04:58.260192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:04:58.260205 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:04:58.260217 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:04:58.260229 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:04:58.260241 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:04:58.260253 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:04:58.260268 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:04:58.260280 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:04:58.260292 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:04:58.260304 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:04:58.260317 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:04:58.260330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 01:04:58.260343 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:04:58.260355 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:04:58.260367 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:04:58.260382 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:04:58.260395 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:04:58.260407 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:04:58.260420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:04:58.260433 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:04:58.260491 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:04:58.260505 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:04:58.260517 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:04:58.260529 systemd[1]: Reached target machines.target - Containers. Dec 13 01:04:58.260546 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:04:58.260558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:04:58.260570 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:04:58.260583 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:04:58.260595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:04:58.260607 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Dec 13 01:04:58.260619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:04:58.260631 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:04:58.260646 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:04:58.260659 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:04:58.260671 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:04:58.260684 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:04:58.260696 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:04:58.260708 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:04:58.260720 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:04:58.260732 kernel: loop: module loaded Dec 13 01:04:58.260744 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:04:58.260759 kernel: fuse: init (API version 7.39) Dec 13 01:04:58.260778 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:04:58.260791 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:04:58.260803 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:04:58.260816 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:04:58.260828 systemd[1]: Stopped verity-setup.service. Dec 13 01:04:58.260841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:04:58.260853 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:04:58.260866 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Dec 13 01:04:58.260881 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:04:58.260894 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:04:58.260906 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:04:58.260918 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:04:58.260933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:04:58.260945 kernel: ACPI: bus type drm_connector registered Dec 13 01:04:58.260957 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:04:58.260969 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:04:58.260999 systemd-journald[1116]: Collecting audit messages is disabled. Dec 13 01:04:58.261026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:04:58.261039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:04:58.261051 systemd-journald[1116]: Journal started Dec 13 01:04:58.261076 systemd-journald[1116]: Runtime Journal (/run/log/journal/2999669f8aa2480bb3d4306bb837eaa3) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:04:58.030546 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:04:58.045965 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:04:58.046442 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:04:58.265045 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:04:58.266394 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:04:58.266861 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:04:58.268576 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:04:58.268836 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Dec 13 01:04:58.270604 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:04:58.270854 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:04:58.272411 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:04:58.272673 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:04:58.274252 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:04:58.275977 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:04:58.277733 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:04:58.279444 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:04:58.299496 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:04:58.308587 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:04:58.311151 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:04:58.312324 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:04:58.312353 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:04:58.314493 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:04:58.317264 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:04:58.321591 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:04:58.323415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:04:58.326515 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Dec 13 01:04:58.331955 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:04:58.333776 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:04:58.337696 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:04:58.339008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:04:58.347863 systemd-journald[1116]: Time spent on flushing to /var/log/journal/2999669f8aa2480bb3d4306bb837eaa3 is 30.824ms for 953 entries. Dec 13 01:04:58.347863 systemd-journald[1116]: System Journal (/var/log/journal/2999669f8aa2480bb3d4306bb837eaa3) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:04:58.387436 systemd-journald[1116]: Received client request to flush runtime journal. Dec 13 01:04:58.387486 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:04:58.349110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:04:58.352749 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:04:58.357596 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:04:58.360513 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:04:58.362069 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:04:58.365184 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:04:58.367199 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:04:58.373373 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:04:58.377078 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Dec 13 01:04:58.385762 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:04:58.399716 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:04:58.401969 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:04:58.403848 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:04:58.411570 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:04:58.415673 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Dec 13 01:04:58.417673 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Dec 13 01:04:58.419210 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:04:58.429499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:04:58.436712 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:04:58.439014 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:04:58.440069 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:04:58.442577 kernel: loop1: detected capacity change from 0 to 140768 Dec 13 01:04:58.467719 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:04:58.472503 kernel: loop2: detected capacity change from 0 to 205544 Dec 13 01:04:58.478671 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:04:58.498368 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Dec 13 01:04:58.498392 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Dec 13 01:04:58.504485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 01:04:58.507551 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:04:58.520476 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 01:04:58.532506 kernel: loop5: detected capacity change from 0 to 205544 Dec 13 01:04:58.538424 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:04:58.539201 (sd-merge)[1195]: Merged extensions into '/usr'. Dec 13 01:04:58.543549 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:04:58.543566 systemd[1]: Reloading... Dec 13 01:04:58.610921 zram_generator::config[1224]: No configuration found. Dec 13 01:04:58.683506 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:04:58.739603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:04:58.789622 systemd[1]: Reloading finished in 245 ms. Dec 13 01:04:58.825951 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:04:58.827625 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:04:58.839685 systemd[1]: Starting ensure-sysext.service... Dec 13 01:04:58.842177 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:04:58.852829 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:04:58.852939 systemd[1]: Reloading... Dec 13 01:04:58.868091 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:04:58.868618 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Dec 13 01:04:58.869688 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:04:58.870045 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Dec 13 01:04:58.870157 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Dec 13 01:04:58.874609 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:04:58.874621 systemd-tmpfiles[1259]: Skipping /boot Dec 13 01:04:58.887371 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:04:58.887555 systemd-tmpfiles[1259]: Skipping /boot Dec 13 01:04:58.906485 zram_generator::config[1289]: No configuration found. Dec 13 01:04:59.024205 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:04:59.083817 systemd[1]: Reloading finished in 230 ms. Dec 13 01:04:59.105753 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:04:59.118940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:04:59.127980 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:04:59.130657 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:04:59.133262 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:04:59.137020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:04:59.141719 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:04:59.144746 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Dec 13 01:04:59.150327 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:04:59.150513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:04:59.155589 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:04:59.158690 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:04:59.165671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:04:59.166836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:04:59.166928 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:04:59.167829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:04:59.168126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:04:59.172080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:04:59.172926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:04:59.174867 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:04:59.175130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:04:59.175949 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Dec 13 01:04:59.180240 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:04:59.186172 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:04:59.188665 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:04:59.188967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:04:59.196666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:04:59.198248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:04:59.199990 augenrules[1355]: No rules
Dec 13 01:04:59.202721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:04:59.203906 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:04:59.208709 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:04:59.215759 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:04:59.216990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:04:59.218168 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:04:59.222442 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:04:59.229828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:04:59.230017 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:04:59.231881 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:04:59.232073 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:04:59.233948 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:04:59.234497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:04:59.236435 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:04:59.243918 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:04:59.254561 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:04:59.261402 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:04:59.261805 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:04:59.271195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:04:59.274654 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:04:59.291465 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1377)
Dec 13 01:04:59.280622 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:04:59.288621 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:04:59.290062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:04:59.294883 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:04:59.296709 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1363)
Dec 13 01:04:59.302604 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:04:59.305363 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:04:59.305390 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:04:59.305869 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:04:59.307417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:04:59.307844 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:04:59.309854 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:04:59.310506 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:04:59.312947 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:04:59.313214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:04:59.321479 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1363)
Dec 13 01:04:59.320212 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:04:59.332537 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:04:59.338821 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:04:59.339077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:04:59.340754 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:04:59.357191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:04:59.367690 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:04:59.373219 systemd-resolved[1329]: Positive Trust Anchors:
Dec 13 01:04:59.386806 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 01:04:59.373241 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:04:59.373272 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:04:59.378668 systemd-resolved[1329]: Defaulting to hostname 'linux'.
Dec 13 01:04:59.384425 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:04:59.387360 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:04:59.401096 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:04:59.403570 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:04:59.415729 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:04:59.416713 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:04:59.416936 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:04:59.425873 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:04:59.427288 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:04:59.432622 systemd-networkd[1399]: lo: Link UP
Dec 13 01:04:59.432637 systemd-networkd[1399]: lo: Gained carrier
Dec 13 01:04:59.434746 systemd-networkd[1399]: Enumeration completed
Dec 13 01:04:59.434851 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:04:59.435275 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:04:59.435282 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:04:59.436154 systemd[1]: Reached target network.target - Network.
Dec 13 01:04:59.437605 systemd-networkd[1399]: eth0: Link UP
Dec 13 01:04:59.437612 systemd-networkd[1399]: eth0: Gained carrier
Dec 13 01:04:59.437627 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:04:59.443602 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:04:59.457653 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:04:59.459406 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection.
Dec 13 01:05:00.565363 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 01:05:00.565420 systemd-timesyncd[1401]: Initial clock synchronization to Fri 2024-12-13 01:05:00.565233 UTC.
Dec 13 01:05:00.566611 systemd-resolved[1329]: Clock change detected. Flushing caches.
Dec 13 01:05:00.584604 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:05:00.624598 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:05:00.637989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:05:00.651088 kernel: kvm_amd: TSC scaling supported
Dec 13 01:05:00.651124 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 01:05:00.651142 kernel: kvm_amd: Nested Paging enabled
Dec 13 01:05:00.652066 kernel: kvm_amd: LBR virtualization supported
Dec 13 01:05:00.652089 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 01:05:00.652623 kernel: kvm_amd: Virtual GIF supported
Dec 13 01:05:00.675806 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:05:00.717147 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:05:00.737851 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:05:00.739493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:05:00.745758 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:05:00.777612 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:05:00.779159 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:05:00.780309 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:05:00.781524 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:05:00.782833 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:05:00.784327 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:05:00.785552 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:05:00.786847 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:05:00.788263 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:05:00.788294 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:05:00.789247 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:05:00.791062 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:05:00.793819 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:05:00.802085 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:05:00.804469 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:05:00.806033 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:05:00.807277 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:05:00.808260 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:05:00.809255 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:05:00.809285 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:05:00.810309 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:05:00.812464 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:05:00.815701 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:05:00.816432 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:05:00.819382 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:05:00.820527 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:05:00.822824 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:05:00.828297 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:05:00.831878 jq[1436]: false
Dec 13 01:05:00.832162 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:05:00.837960 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found loop3
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found loop4
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found loop5
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found sr0
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found vda
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found vda1
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found vda2
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found vda3
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found usr
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found vda4
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found vda6
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found vda7
Dec 13 01:05:00.845527 extend-filesystems[1437]: Found vda9
Dec 13 01:05:00.845527 extend-filesystems[1437]: Checking size of /dev/vda9
Dec 13 01:05:00.856680 dbus-daemon[1435]: [system] SELinux support is enabled
Dec 13 01:05:00.847012 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:05:00.871264 extend-filesystems[1437]: Resized partition /dev/vda9
Dec 13 01:05:00.849024 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:05:00.849562 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:05:00.853602 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:05:00.873453 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:05:00.869820 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:05:00.875775 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:05:00.878328 jq[1453]: true
Dec 13 01:05:00.879138 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:05:00.883593 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 01:05:00.883625 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1376)
Dec 13 01:05:00.887081 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:05:00.887318 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:05:00.887694 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:05:00.887925 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:05:00.893283 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:05:00.893515 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:05:00.911675 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 01:05:00.911726 update_engine[1451]: I20241213 01:05:00.909955 1451 main.cc:92] Flatcar Update Engine starting
Dec 13 01:05:00.933395 update_engine[1451]: I20241213 01:05:00.916203 1451 update_check_scheduler.cc:74] Next update check in 10m30s
Dec 13 01:05:00.924039 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:05:00.934174 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 01:05:00.934174 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:05:00.934174 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 01:05:00.933989 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 01:05:00.939178 jq[1461]: true
Dec 13 01:05:00.939352 extend-filesystems[1437]: Resized filesystem in /dev/vda9
Dec 13 01:05:00.934017 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:05:00.935111 systemd-logind[1445]: New seat seat0.
Dec 13 01:05:00.935453 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:05:00.935706 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:05:00.938870 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:05:00.949910 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:05:00.950972 tar[1460]: linux-amd64/helm
Dec 13 01:05:00.952907 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:05:00.955987 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:05:00.956148 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:05:00.958009 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:05:00.958242 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:05:00.968304 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:05:00.970821 bash[1491]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:05:00.973145 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:05:00.977477 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 01:05:01.003342 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:05:01.130996 containerd[1462]: time="2024-12-13T01:05:01.127012168Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:05:01.151808 containerd[1462]: time="2024-12-13T01:05:01.151713392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:05:01.153524 containerd[1462]: time="2024-12-13T01:05:01.153487749Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:05:01.153524 containerd[1462]: time="2024-12-13T01:05:01.153516683Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:05:01.153596 containerd[1462]: time="2024-12-13T01:05:01.153532002Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:05:01.153734 containerd[1462]: time="2024-12-13T01:05:01.153706600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:05:01.153734 containerd[1462]: time="2024-12-13T01:05:01.153728941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:05:01.153818 containerd[1462]: time="2024-12-13T01:05:01.153799374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:05:01.153818 containerd[1462]: time="2024-12-13T01:05:01.153816035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:05:01.154043 containerd[1462]: time="2024-12-13T01:05:01.154015068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:05:01.154043 containerd[1462]: time="2024-12-13T01:05:01.154034845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:05:01.154094 containerd[1462]: time="2024-12-13T01:05:01.154049603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:05:01.154094 containerd[1462]: time="2024-12-13T01:05:01.154060684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:05:01.154375 containerd[1462]: time="2024-12-13T01:05:01.154348994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:05:01.154630 containerd[1462]: time="2024-12-13T01:05:01.154604303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:05:01.154757 containerd[1462]: time="2024-12-13T01:05:01.154731542Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:05:01.154757 containerd[1462]: time="2024-12-13T01:05:01.154750427Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:05:01.154878 containerd[1462]: time="2024-12-13T01:05:01.154845896Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:05:01.154941 containerd[1462]: time="2024-12-13T01:05:01.154924333Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:05:01.160104 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:05:01.160998 containerd[1462]: time="2024-12-13T01:05:01.160749998Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:05:01.160998 containerd[1462]: time="2024-12-13T01:05:01.160802086Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:05:01.160998 containerd[1462]: time="2024-12-13T01:05:01.160817916Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:05:01.160998 containerd[1462]: time="2024-12-13T01:05:01.160833114Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:05:01.160998 containerd[1462]: time="2024-12-13T01:05:01.160846770Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:05:01.160998 containerd[1462]: time="2024-12-13T01:05:01.160991131Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:05:01.161226 containerd[1462]: time="2024-12-13T01:05:01.161197367Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:05:01.161337 containerd[1462]: time="2024-12-13T01:05:01.161309007Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:05:01.161337 containerd[1462]: time="2024-12-13T01:05:01.161331930Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:05:01.161377 containerd[1462]: time="2024-12-13T01:05:01.161345505Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:05:01.161377 containerd[1462]: time="2024-12-13T01:05:01.161360253Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:05:01.161377 containerd[1462]: time="2024-12-13T01:05:01.161372836Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:05:01.161436 containerd[1462]: time="2024-12-13T01:05:01.161385751Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:05:01.161436 containerd[1462]: time="2024-12-13T01:05:01.161399857Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:05:01.161436 containerd[1462]: time="2024-12-13T01:05:01.161415767Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:05:01.161436 containerd[1462]: time="2024-12-13T01:05:01.161429142Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:05:01.161503 containerd[1462]: time="2024-12-13T01:05:01.161441726Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:05:01.161503 containerd[1462]: time="2024-12-13T01:05:01.161454209Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:05:01.161503 containerd[1462]: time="2024-12-13T01:05:01.161474487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161503 containerd[1462]: time="2024-12-13T01:05:01.161489175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161503 containerd[1462]: time="2024-12-13T01:05:01.161501939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161627 containerd[1462]: time="2024-12-13T01:05:01.161515885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161627 containerd[1462]: time="2024-12-13T01:05:01.161528819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161627 containerd[1462]: time="2024-12-13T01:05:01.161541493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161627 containerd[1462]: time="2024-12-13T01:05:01.161553706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161627 containerd[1462]: time="2024-12-13T01:05:01.161566179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161627 containerd[1462]: time="2024-12-13T01:05:01.161599241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161627 containerd[1462]: time="2024-12-13T01:05:01.161613528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161627 containerd[1462]: time="2024-12-13T01:05:01.161626182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161783 containerd[1462]: time="2024-12-13T01:05:01.161640288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161783 containerd[1462]: time="2024-12-13T01:05:01.161653894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161783 containerd[1462]: time="2024-12-13T01:05:01.161673550Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:05:01.161783 containerd[1462]: time="2024-12-13T01:05:01.161691755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161783 containerd[1462]: time="2024-12-13T01:05:01.161703477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161783 containerd[1462]: time="2024-12-13T01:05:01.161715219Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:05:01.161783 containerd[1462]: time="2024-12-13T01:05:01.161760193Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:05:01.161917 containerd[1462]: time="2024-12-13T01:05:01.161794868Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:05:01.161917 containerd[1462]: time="2024-12-13T01:05:01.161807021Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:05:01.161917 containerd[1462]: time="2024-12-13T01:05:01.161818502Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:05:01.161917 containerd[1462]: time="2024-12-13T01:05:01.161827920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.161917 containerd[1462]: time="2024-12-13T01:05:01.161840463Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:05:01.161917 containerd[1462]: time="2024-12-13T01:05:01.161855572Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:05:01.161917 containerd[1462]: time="2024-12-13T01:05:01.161865330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:05:01.162203 containerd[1462]: time="2024-12-13T01:05:01.162120008Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:05:01.162203 containerd[1462]: time="2024-12-13T01:05:01.162176934Z" level=info msg="Connect containerd service"
Dec 13 01:05:01.162347 containerd[1462]: time="2024-12-13T01:05:01.162210688Z" level=info msg="using legacy CRI server"
Dec 13 01:05:01.162347 containerd[1462]: time="2024-12-13T01:05:01.162218222Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:05:01.162347 containerd[1462]: time="2024-12-13T01:05:01.162302891Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:05:01.163057 containerd[1462]:
time="2024-12-13T01:05:01.162946668Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:05:01.163186 containerd[1462]: time="2024-12-13T01:05:01.163146232Z" level=info msg="Start subscribing containerd event" Dec 13 01:05:01.163213 containerd[1462]: time="2024-12-13T01:05:01.163204812Z" level=info msg="Start recovering state" Dec 13 01:05:01.165764 containerd[1462]: time="2024-12-13T01:05:01.163260186Z" level=info msg="Start event monitor" Dec 13 01:05:01.165764 containerd[1462]: time="2024-12-13T01:05:01.163283339Z" level=info msg="Start snapshots syncer" Dec 13 01:05:01.165764 containerd[1462]: time="2024-12-13T01:05:01.163294039Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:05:01.165764 containerd[1462]: time="2024-12-13T01:05:01.163301774Z" level=info msg="Start streaming server" Dec 13 01:05:01.165764 containerd[1462]: time="2024-12-13T01:05:01.163541363Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:05:01.165764 containerd[1462]: time="2024-12-13T01:05:01.163609401Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:05:01.165764 containerd[1462]: time="2024-12-13T01:05:01.163661148Z" level=info msg="containerd successfully booted in 0.038520s" Dec 13 01:05:01.165048 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:05:01.185600 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:05:01.197697 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:05:01.199999 systemd[1]: Started sshd@0-10.0.0.22:22-10.0.0.1:53106.service - OpenSSH per-connection server daemon (10.0.0.1:53106). Dec 13 01:05:01.217904 systemd[1]: issuegen.service: Deactivated successfully. 
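The level=error entry above shows containerd's CRI plugin coming up with no CNI network config in /etc/cni/net.d (the NetworkPluginConfDir from the config dump). A minimal sketch of the kind of conflist that resolves that message, assuming the stock bridge/host-local CNI plugins; the file name, network name, and subnet here are illustrative and not from this log, and the directory defaults to a temp dir so the sketch is safe to run:

```shell
# Hypothetical fix sketch: the CNI conf syncer watches this directory, so one
# valid conflist clears "no network config found in /etc/cni/net.d".
CNI_DIR="${CNI_DIR:-$(mktemp -d)}"   # use /etc/cni/net.d on a real node
cat > "$CNI_DIR/10-bridge.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    }
  ]
}
EOF
echo "wrote $CNI_DIR/10-bridge.conflist"
```

On a cluster bootstrapped with a CNI add-on (flannel, calico, etc.) the add-on writes this file itself, which is why the error is expected on first boot.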
Dec 13 01:05:01.218172 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:05:01.221850 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:05:01.246570 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:05:01.255904 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:05:01.258183 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:05:01.259448 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:05:01.273291 sshd[1516]: Accepted publickey for core from 10.0.0.1 port 53106 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:05:01.275518 sshd[1516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:05:01.283503 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:05:01.296822 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:05:01.299927 systemd-logind[1445]: New session 1 of user core.
Dec 13 01:05:01.317185 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:05:01.326836 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:05:01.334119 (systemd)[1527]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:05:01.423130 tar[1460]: linux-amd64/LICENSE
Dec 13 01:05:01.423228 tar[1460]: linux-amd64/README.md
Dec 13 01:05:01.470733 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:05:01.547407 systemd[1527]: Queued start job for default target default.target.
Dec 13 01:05:01.561003 systemd[1527]: Created slice app.slice - User Application Slice.
Dec 13 01:05:01.561030 systemd[1527]: Reached target paths.target - Paths.
Dec 13 01:05:01.561045 systemd[1527]: Reached target timers.target - Timers.
Dec 13 01:05:01.562686 systemd[1527]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:05:01.575396 systemd[1527]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:05:01.575531 systemd[1527]: Reached target sockets.target - Sockets.
Dec 13 01:05:01.575552 systemd[1527]: Reached target basic.target - Basic System.
Dec 13 01:05:01.575621 systemd[1527]: Reached target default.target - Main User Target.
Dec 13 01:05:01.575667 systemd[1527]: Startup finished in 233ms.
Dec 13 01:05:01.576210 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:05:01.579409 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:05:01.642690 systemd[1]: Started sshd@1-10.0.0.22:22-10.0.0.1:53118.service - OpenSSH per-connection server daemon (10.0.0.1:53118).
Dec 13 01:05:01.687934 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 53118 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:05:01.689846 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:05:01.694183 systemd-logind[1445]: New session 2 of user core.
Dec 13 01:05:01.703707 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:05:01.762492 sshd[1541]: pam_unix(sshd:session): session closed for user core
Dec 13 01:05:01.780566 systemd[1]: sshd@1-10.0.0.22:22-10.0.0.1:53118.service: Deactivated successfully.
Dec 13 01:05:01.782291 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:05:01.783674 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:05:01.784925 systemd[1]: Started sshd@2-10.0.0.22:22-10.0.0.1:53128.service - OpenSSH per-connection server daemon (10.0.0.1:53128).
Dec 13 01:05:01.787272 systemd-logind[1445]: Removed session 2.
Dec 13 01:05:01.844330 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 53128 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:05:01.846416 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:05:01.850529 systemd-logind[1445]: New session 3 of user core.
Dec 13 01:05:01.864700 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:05:01.921972 sshd[1548]: pam_unix(sshd:session): session closed for user core
Dec 13 01:05:01.926914 systemd[1]: sshd@2-10.0.0.22:22-10.0.0.1:53128.service: Deactivated successfully.
Dec 13 01:05:01.928925 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:05:01.929638 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:05:01.930619 systemd-logind[1445]: Removed session 3.
Dec 13 01:05:02.449847 systemd-networkd[1399]: eth0: Gained IPv6LL
Dec 13 01:05:02.453253 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:05:02.455237 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:05:02.468914 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 01:05:02.472001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:05:02.474718 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:05:02.500945 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:05:02.502705 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 01:05:02.502934 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 01:05:02.505365 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:05:03.498664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:05:03.500332 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:05:03.501670 systemd[1]: Startup finished in 926ms (kernel) + 5.761s (initrd) + 4.936s (userspace) = 11.624s.
Dec 13 01:05:03.513620 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:05:04.111297 kubelet[1576]: E1213 01:05:04.111231 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:05:04.116512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:05:04.116729 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:05:04.117137 systemd[1]: kubelet.service: Consumed 1.512s CPU time.
Dec 13 01:05:11.934802 systemd[1]: Started sshd@3-10.0.0.22:22-10.0.0.1:40238.service - OpenSSH per-connection server daemon (10.0.0.1:40238).
Dec 13 01:05:11.974221 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 40238 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:05:11.975786 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:05:11.980320 systemd-logind[1445]: New session 4 of user core.
Dec 13 01:05:11.990760 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:05:12.045347 sshd[1590]: pam_unix(sshd:session): session closed for user core
Dec 13 01:05:12.056364 systemd[1]: sshd@3-10.0.0.22:22-10.0.0.1:40238.service: Deactivated successfully.
Dec 13 01:05:12.058252 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:05:12.059935 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit.
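The kubelet exit above (status=1/FAILURE) repeats until /var/lib/kubelet/config.yaml exists; on a kubeadm-managed node that file is only written by `kubeadm init` or `kubeadm join`, so these restart/fail cycles are expected on first boot. A small pre-flight sketch (the path is taken from the log; the KUBELET_CFG override and messages are illustrative):

```shell
# Check for the config file the kubelet is failing to open; it is created by
# kubeadm init/join, not by the kubelet itself.
KUBELET_CFG="${KUBELET_CFG:-/var/lib/kubelet/config.yaml}"
if [ -f "$KUBELET_CFG" ]; then
  status="present"
else
  status="missing"   # matches the "no such file or directory" error above
fi
echo "kubelet config $KUBELET_CFG is $status"
```

Once kubeadm has written the file, the scheduled restarts visible later in this log succeed instead of exiting.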
Dec 13 01:05:12.061260 systemd[1]: Started sshd@4-10.0.0.22:22-10.0.0.1:40250.service - OpenSSH per-connection server daemon (10.0.0.1:40250).
Dec 13 01:05:12.062071 systemd-logind[1445]: Removed session 4.
Dec 13 01:05:12.100556 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 40250 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:05:12.102036 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:05:12.105973 systemd-logind[1445]: New session 5 of user core.
Dec 13 01:05:12.123713 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:05:12.172624 sshd[1597]: pam_unix(sshd:session): session closed for user core
Dec 13 01:05:12.193460 systemd[1]: sshd@4-10.0.0.22:22-10.0.0.1:40250.service: Deactivated successfully.
Dec 13 01:05:12.195378 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:05:12.197022 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:05:12.212815 systemd[1]: Started sshd@5-10.0.0.22:22-10.0.0.1:40258.service - OpenSSH per-connection server daemon (10.0.0.1:40258).
Dec 13 01:05:12.213624 systemd-logind[1445]: Removed session 5.
Dec 13 01:05:12.248002 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 40258 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:05:12.249533 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:05:12.253570 systemd-logind[1445]: New session 6 of user core.
Dec 13 01:05:12.263709 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:05:12.317913 sshd[1604]: pam_unix(sshd:session): session closed for user core
Dec 13 01:05:12.327484 systemd[1]: sshd@5-10.0.0.22:22-10.0.0.1:40258.service: Deactivated successfully.
Dec 13 01:05:12.329364 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:05:12.331003 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:05:12.332346 systemd[1]: Started sshd@6-10.0.0.22:22-10.0.0.1:40260.service - OpenSSH per-connection server daemon (10.0.0.1:40260).
Dec 13 01:05:12.333129 systemd-logind[1445]: Removed session 6.
Dec 13 01:05:12.380152 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 40260 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:05:12.381669 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:05:12.385593 systemd-logind[1445]: New session 7 of user core.
Dec 13 01:05:12.395704 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:05:12.454801 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:05:12.455168 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:05:12.472035 sudo[1614]: pam_unix(sudo:session): session closed for user root
Dec 13 01:05:12.474147 sshd[1611]: pam_unix(sshd:session): session closed for user core
Dec 13 01:05:12.485458 systemd[1]: sshd@6-10.0.0.22:22-10.0.0.1:40260.service: Deactivated successfully.
Dec 13 01:05:12.487392 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:05:12.489352 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:05:12.504948 systemd[1]: Started sshd@7-10.0.0.22:22-10.0.0.1:40270.service - OpenSSH per-connection server daemon (10.0.0.1:40270).
Dec 13 01:05:12.505858 systemd-logind[1445]: Removed session 7.
Dec 13 01:05:12.538923 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 40270 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:05:12.540599 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:05:12.544756 systemd-logind[1445]: New session 8 of user core.
Dec 13 01:05:12.563697 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:05:12.618463 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:05:12.618838 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:05:12.622598 sudo[1624]: pam_unix(sudo:session): session closed for user root
Dec 13 01:05:12.629320 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:05:12.629750 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:05:12.647818 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:05:12.649480 auditctl[1627]: No rules
Dec 13 01:05:12.649912 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:05:12.650151 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:05:12.652948 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:05:12.685145 augenrules[1645]: No rules
Dec 13 01:05:12.687148 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:05:12.688474 sudo[1623]: pam_unix(sudo:session): session closed for user root
Dec 13 01:05:12.690448 sshd[1619]: pam_unix(sshd:session): session closed for user core
Dec 13 01:05:12.704449 systemd[1]: sshd@7-10.0.0.22:22-10.0.0.1:40270.service: Deactivated successfully.
Dec 13 01:05:12.706382 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:05:12.708203 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:05:12.721974 systemd[1]: Started sshd@8-10.0.0.22:22-10.0.0.1:40284.service - OpenSSH per-connection server daemon (10.0.0.1:40284).
Dec 13 01:05:12.722871 systemd-logind[1445]: Removed session 8.
Dec 13 01:05:12.755967 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 40284 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:05:12.757538 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:05:12.761454 systemd-logind[1445]: New session 9 of user core.
Dec 13 01:05:12.768785 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:05:12.822679 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:05:12.823042 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:05:13.106853 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:05:13.107031 (dockerd)[1674]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:05:13.382150 dockerd[1674]: time="2024-12-13T01:05:13.381970319Z" level=info msg="Starting up"
Dec 13 01:05:13.470270 systemd[1]: var-lib-docker-metacopy\x2dcheck1138001033-merged.mount: Deactivated successfully.
Dec 13 01:05:13.493762 dockerd[1674]: time="2024-12-13T01:05:13.493708247Z" level=info msg="Loading containers: start."
Dec 13 01:05:13.609605 kernel: Initializing XFRM netlink socket
Dec 13 01:05:13.682829 systemd-networkd[1399]: docker0: Link UP
Dec 13 01:05:13.704159 dockerd[1674]: time="2024-12-13T01:05:13.704101500Z" level=info msg="Loading containers: done."
Dec 13 01:05:13.718919 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck825484761-merged.mount: Deactivated successfully.
Dec 13 01:05:13.723458 dockerd[1674]: time="2024-12-13T01:05:13.723415531Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:05:13.723539 dockerd[1674]: time="2024-12-13T01:05:13.723524545Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:05:13.723694 dockerd[1674]: time="2024-12-13T01:05:13.723670930Z" level=info msg="Daemon has completed initialization"
Dec 13 01:05:13.854154 dockerd[1674]: time="2024-12-13T01:05:13.854071858Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:05:13.854659 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:05:14.345323 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:05:14.354743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:05:14.527156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:05:14.533132 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:05:14.565828 containerd[1462]: time="2024-12-13T01:05:14.565715373Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Dec 13 01:05:14.727335 kubelet[1829]: E1213 01:05:14.727114 1829 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:05:14.733719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:05:14.733957 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:05:15.378943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1012122007.mount: Deactivated successfully.
Dec 13 01:05:16.529497 containerd[1462]: time="2024-12-13T01:05:16.529424399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:16.530189 containerd[1462]: time="2024-12-13T01:05:16.530133649Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483"
Dec 13 01:05:16.531418 containerd[1462]: time="2024-12-13T01:05:16.531385617Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:16.534192 containerd[1462]: time="2024-12-13T01:05:16.534138780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:16.535250 containerd[1462]: time="2024-12-13T01:05:16.535190913Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 1.968828657s"
Dec 13 01:05:16.535250 containerd[1462]: time="2024-12-13T01:05:16.535248271Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Dec 13 01:05:16.536810 containerd[1462]: time="2024-12-13T01:05:16.536761899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 01:05:17.769895 containerd[1462]: time="2024-12-13T01:05:17.769820344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:17.770737 containerd[1462]: time="2024-12-13T01:05:17.770667643Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157"
Dec 13 01:05:17.772047 containerd[1462]: time="2024-12-13T01:05:17.772009570Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:17.774857 containerd[1462]: time="2024-12-13T01:05:17.774794091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:17.775808 containerd[1462]: time="2024-12-13T01:05:17.775777265Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.238986863s"
Dec 13 01:05:17.775808 containerd[1462]: time="2024-12-13T01:05:17.775809135Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 01:05:17.776247 containerd[1462]: time="2024-12-13T01:05:17.776203966Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 01:05:18.977615 containerd[1462]: time="2024-12-13T01:05:18.977541302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:18.979304 containerd[1462]: time="2024-12-13T01:05:18.979224518Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067"
Dec 13 01:05:18.980444 containerd[1462]: time="2024-12-13T01:05:18.980402668Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:18.983872 containerd[1462]: time="2024-12-13T01:05:18.983836928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:18.985011 containerd[1462]: time="2024-12-13T01:05:18.984969242Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.208733046s"
Dec 13 01:05:18.985081 containerd[1462]: time="2024-12-13T01:05:18.985011341Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 01:05:18.985649 containerd[1462]: time="2024-12-13T01:05:18.985602099Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 01:05:19.976166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408584726.mount: Deactivated successfully.
Dec 13 01:05:20.687316 containerd[1462]: time="2024-12-13T01:05:20.687246189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:20.688043 containerd[1462]: time="2024-12-13T01:05:20.688005092Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243"
Dec 13 01:05:20.689132 containerd[1462]: time="2024-12-13T01:05:20.689087762Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:20.691518 containerd[1462]: time="2024-12-13T01:05:20.691468768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:20.692685 containerd[1462]: time="2024-12-13T01:05:20.692631017Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 1.707000025s"
Dec 13 01:05:20.692755 containerd[1462]: time="2024-12-13T01:05:20.692691841Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 01:05:20.693282 containerd[1462]: time="2024-12-13T01:05:20.693196979Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:05:21.324379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208932237.mount: Deactivated successfully.
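The containerd "Pulled image" entries report both a compressed size and a wall-clock duration, so effective registry throughput falls out with one division. A sketch using the kube-proxy figures above (30229262 bytes in 1.707000025s; the variable names and MiB/s unit are my choices, not containerd's):

```shell
# Figures copied from the kube-proxy "Pulled image" entry in this log.
bytes=30229262        # size reported by containerd
secs=1.707000025      # pull duration reported by containerd
rate=$(awk -v b="$bytes" -v s="$secs" 'BEGIN { printf "%.1f MiB/s", b / s / 1048576 }')
echo "kube-proxy effective pull rate: $rate"   # prints 16.9 MiB/s
```

The same arithmetic applied across the other pulls in this section gives a quick feel for whether a slow node bootstrap is registry-bound.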
Dec 13 01:05:22.153595 containerd[1462]: time="2024-12-13T01:05:22.153515407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:22.154337 containerd[1462]: time="2024-12-13T01:05:22.154245556Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:05:22.155615 containerd[1462]: time="2024-12-13T01:05:22.155553529Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:22.158171 containerd[1462]: time="2024-12-13T01:05:22.158135962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:22.160249 containerd[1462]: time="2024-12-13T01:05:22.160216173Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.466985181s" Dec 13 01:05:22.160298 containerd[1462]: time="2024-12-13T01:05:22.160252722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:05:22.160778 containerd[1462]: time="2024-12-13T01:05:22.160743252Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 01:05:22.664972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630050349.mount: Deactivated successfully. 
Dec 13 01:05:22.764689 containerd[1462]: time="2024-12-13T01:05:22.764604407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:22.765543 containerd[1462]: time="2024-12-13T01:05:22.765463147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Dec 13 01:05:22.766853 containerd[1462]: time="2024-12-13T01:05:22.766816796Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:22.769370 containerd[1462]: time="2024-12-13T01:05:22.769302337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:22.770261 containerd[1462]: time="2024-12-13T01:05:22.770210370Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 609.369245ms"
Dec 13 01:05:22.770261 containerd[1462]: time="2024-12-13T01:05:22.770244674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 01:05:22.770825 containerd[1462]: time="2024-12-13T01:05:22.770783114Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 01:05:23.381277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1187835603.mount: Deactivated successfully.
Dec 13 01:05:24.845564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:05:24.861931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:05:25.078801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:05:25.085149 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:05:25.287405 kubelet[2018]: E1213 01:05:25.287140 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:05:25.291452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:05:25.291717 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:05:25.891972 containerd[1462]: time="2024-12-13T01:05:25.891869706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:25.894030 containerd[1462]: time="2024-12-13T01:05:25.893987928Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Dec 13 01:05:25.895617 containerd[1462]: time="2024-12-13T01:05:25.895519510Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:25.899180 containerd[1462]: time="2024-12-13T01:05:25.899122688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:05:25.900270 containerd[1462]: time="2024-12-13T01:05:25.900234062Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.129408558s"
Dec 13 01:05:25.900270 containerd[1462]: time="2024-12-13T01:05:25.900268817Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Dec 13 01:05:28.082838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:05:28.099851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:05:28.125181 systemd[1]: Reloading requested from client PID 2058 ('systemctl') (unit session-9.scope)...
Dec 13 01:05:28.125197 systemd[1]: Reloading...
Dec 13 01:05:28.220613 zram_generator::config[2100]: No configuration found.
Dec 13 01:05:28.468121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:05:28.546542 systemd[1]: Reloading finished in 420 ms.
Dec 13 01:05:28.600067 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:05:28.604267 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:05:28.604553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:05:28.606758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:05:28.761763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:05:28.766500 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:05:28.810808 kubelet[2147]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:05:28.810808 kubelet[2147]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:05:28.810808 kubelet[2147]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:05:28.810808 kubelet[2147]: I1213 01:05:28.810121 2147 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:05:28.996657 kubelet[2147]: I1213 01:05:28.996601 2147 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 01:05:28.996982 kubelet[2147]: I1213 01:05:28.996936 2147 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:05:28.997760 kubelet[2147]: I1213 01:05:28.997727 2147 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 01:05:29.020396 kubelet[2147]: E1213 01:05:29.020305 2147 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:05:29.021883 kubelet[2147]: I1213 01:05:29.021838 2147 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:05:29.027889 kubelet[2147]: E1213 01:05:29.027853 2147 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 01:05:29.027889 kubelet[2147]: I1213 01:05:29.027884 2147 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 01:05:29.034284 kubelet[2147]: I1213 01:05:29.034252 2147 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:05:29.035210 kubelet[2147]: I1213 01:05:29.035186 2147 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 01:05:29.035384 kubelet[2147]: I1213 01:05:29.035344 2147 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:05:29.035621 kubelet[2147]: I1213 01:05:29.035377 2147 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 01:05:29.035716 kubelet[2147]: I1213 01:05:29.035635 2147 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:05:29.035716 kubelet[2147]: I1213 01:05:29.035645 2147 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 01:05:29.035791 kubelet[2147]: I1213 01:05:29.035765 2147 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:05:29.037143 kubelet[2147]: I1213 01:05:29.037118 2147 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 01:05:29.037143 kubelet[2147]: I1213 01:05:29.037142 2147 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:05:29.037221 kubelet[2147]: I1213 01:05:29.037191 2147 kubelet.go:314] "Adding apiserver pod source"
Dec 13 01:05:29.037221 kubelet[2147]: I1213 01:05:29.037214 2147 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:05:29.044964 kubelet[2147]: W1213 01:05:29.044914 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Dec 13 01:05:29.045331 kubelet[2147]: E1213 01:05:29.045063 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:05:29.045331 kubelet[2147]: W1213 01:05:29.044924 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Dec 13 01:05:29.045331 kubelet[2147]: E1213 01:05:29.045099 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:05:29.045331 kubelet[2147]: I1213 01:05:29.045179 2147 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:05:29.047781 kubelet[2147]: I1213 01:05:29.046888 2147 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:05:29.047781 kubelet[2147]: W1213 01:05:29.046988 2147 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:05:29.048219 kubelet[2147]: I1213 01:05:29.048182 2147 server.go:1269] "Started kubelet"
Dec 13 01:05:29.048627 kubelet[2147]: I1213 01:05:29.048594 2147 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:05:29.048787 kubelet[2147]: I1213 01:05:29.048732 2147 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:05:29.049110 kubelet[2147]: I1213 01:05:29.049093 2147 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:05:29.051235 kubelet[2147]: I1213 01:05:29.051206 2147 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 01:05:29.054384 kubelet[2147]: I1213 01:05:29.054139 2147 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:05:29.054384 kubelet[2147]: I1213 01:05:29.054215 2147 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 01:05:29.055220 kubelet[2147]: I1213 01:05:29.055061 2147 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 01:05:29.055850 kubelet[2147]: E1213 01:05:29.055678 2147 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:05:29.056117 kubelet[2147]: E1213 01:05:29.054166 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810970ac4fef573 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:05:29.048151411 +0000 UTC m=+0.275830642,LastTimestamp:2024-12-13 01:05:29.048151411 +0000 UTC m=+0.275830642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:05:29.056309 kubelet[2147]: I1213 01:05:29.056203 2147 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 01:05:29.056309 kubelet[2147]: E1213 01:05:29.056267 2147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:05:29.056468 kubelet[2147]: W1213 01:05:29.056425 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Dec 13 01:05:29.056510 kubelet[2147]: E1213 01:05:29.056478 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:05:29.057075 kubelet[2147]: E1213 01:05:29.056680 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="200ms"
Dec 13 01:05:29.057075 kubelet[2147]: I1213 01:05:29.056767 2147 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 01:05:29.057075 kubelet[2147]: I1213 01:05:29.056799 2147 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:05:29.057880 kubelet[2147]: I1213 01:05:29.057862 2147 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:05:29.057880 kubelet[2147]: I1213 01:05:29.057879 2147 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:05:29.073658 kubelet[2147]: I1213 01:05:29.073611 2147 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:05:29.074604 kubelet[2147]: I1213 01:05:29.074240 2147 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:05:29.074604 kubelet[2147]: I1213 01:05:29.074257 2147 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:05:29.074604 kubelet[2147]: I1213 01:05:29.074276 2147 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:05:29.075047 kubelet[2147]: I1213 01:05:29.075018 2147 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:05:29.075076 kubelet[2147]: I1213 01:05:29.075061 2147 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:05:29.075101 kubelet[2147]: I1213 01:05:29.075080 2147 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 01:05:29.075134 kubelet[2147]: E1213 01:05:29.075123 2147 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:05:29.080737 kubelet[2147]: W1213 01:05:29.080634 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Dec 13 01:05:29.080737 kubelet[2147]: E1213 01:05:29.080701 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:05:29.157368 kubelet[2147]: E1213 01:05:29.157337 2147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:05:29.175639 kubelet[2147]: E1213 01:05:29.175606 2147 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 01:05:29.257284 kubelet[2147]: E1213 01:05:29.257251 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="400ms"
Dec 13 01:05:29.258375 kubelet[2147]: E1213 01:05:29.258321 2147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:05:29.275800 kubelet[2147]: I1213 01:05:29.275707 2147 policy_none.go:49] "None policy: Start"
Dec 13 01:05:29.276802 kubelet[2147]: I1213 01:05:29.276755 2147 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:05:29.276856 kubelet[2147]: I1213 01:05:29.276821 2147 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:05:29.284222 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:05:29.297339 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:05:29.300211 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:05:29.313964 kubelet[2147]: I1213 01:05:29.313913 2147 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:05:29.314251 kubelet[2147]: I1213 01:05:29.314225 2147 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 01:05:29.314302 kubelet[2147]: I1213 01:05:29.314248 2147 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 01:05:29.314558 kubelet[2147]: I1213 01:05:29.314496 2147 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:05:29.315781 kubelet[2147]: E1213 01:05:29.315743 2147 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 13 01:05:29.384819 systemd[1]: Created slice kubepods-burstable-podf6b9c666f8b74d171b62beeac4abc43b.slice - libcontainer container kubepods-burstable-podf6b9c666f8b74d171b62beeac4abc43b.slice.
Dec 13 01:05:29.407597 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice.
Dec 13 01:05:29.411599 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice.
Dec 13 01:05:29.415986 kubelet[2147]: I1213 01:05:29.415930 2147 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:05:29.416491 kubelet[2147]: E1213 01:05:29.416448 2147 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Dec 13 01:05:29.460068 kubelet[2147]: I1213 01:05:29.459999 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6b9c666f8b74d171b62beeac4abc43b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f6b9c666f8b74d171b62beeac4abc43b\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:05:29.511194 kubelet[2147]: I1213 01:05:29.460067 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:05:29.511453 kubelet[2147]: I1213 01:05:29.511230 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:05:29.511453 kubelet[2147]: I1213 01:05:29.511254 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:05:29.511453 kubelet[2147]: I1213 01:05:29.511271 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:05:29.511453 kubelet[2147]: I1213 01:05:29.511286 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:05:29.511453 kubelet[2147]: I1213 01:05:29.511301 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:05:29.511569 kubelet[2147]: I1213 01:05:29.511315 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6b9c666f8b74d171b62beeac4abc43b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6b9c666f8b74d171b62beeac4abc43b\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:05:29.511569 kubelet[2147]: I1213 01:05:29.511327 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6b9c666f8b74d171b62beeac4abc43b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6b9c666f8b74d171b62beeac4abc43b\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:05:29.618513 kubelet[2147]: I1213 01:05:29.618433 2147 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:05:29.618909 kubelet[2147]: E1213 01:05:29.618869 2147 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Dec 13 01:05:29.658501 kubelet[2147]: E1213 01:05:29.658462 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="800ms"
Dec 13 01:05:29.706723 kubelet[2147]: E1213 01:05:29.706659 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:05:29.707309 containerd[1462]: time="2024-12-13T01:05:29.707266777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f6b9c666f8b74d171b62beeac4abc43b,Namespace:kube-system,Attempt:0,}"
Dec 13 01:05:29.710693 kubelet[2147]: E1213 01:05:29.710655 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:05:29.711295 containerd[1462]: time="2024-12-13T01:05:29.711248845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}"
Dec 13 01:05:29.714651 kubelet[2147]: E1213 01:05:29.714621 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:05:29.715246 containerd[1462]: time="2024-12-13T01:05:29.715039343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}"
Dec 13 01:05:29.913924 kubelet[2147]: W1213 01:05:29.913806 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Dec 13 01:05:29.913924 kubelet[2147]: E1213 01:05:29.913874 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:05:30.021019 kubelet[2147]: I1213 01:05:30.020987 2147 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Dec 13 01:05:30.021357 kubelet[2147]: E1213 01:05:30.021312 2147 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost"
Dec 13 01:05:30.229322 kubelet[2147]: W1213 01:05:30.229186 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Dec 13 01:05:30.229322 kubelet[2147]: E1213 01:05:30.229243 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:05:30.357489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480584699.mount: Deactivated successfully.
Dec 13 01:05:30.364716 kubelet[2147]: W1213 01:05:30.364659 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Dec 13 01:05:30.364864 kubelet[2147]: E1213 01:05:30.364724 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:05:30.367305 containerd[1462]: time="2024-12-13T01:05:30.367254920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:05:30.369048 containerd[1462]: time="2024-12-13T01:05:30.369002307Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:05:30.369861 containerd[1462]: time="2024-12-13T01:05:30.369835831Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:05:30.370700 containerd[1462]: time="2024-12-13T01:05:30.370671357Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:05:30.371565 containerd[1462]: time="2024-12-13T01:05:30.371530118Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Dec 13 01:05:30.372437 containerd[1462]: time="2024-12-13T01:05:30.372401773Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:05:30.373303 containerd[1462]: time="2024-12-13T01:05:30.373272085Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:05:30.375771 containerd[1462]: time="2024-12-13T01:05:30.375728081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:05:30.377295 containerd[1462]: time="2024-12-13T01:05:30.377260685Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 665.928283ms"
Dec 13 01:05:30.378115 containerd[1462]: time="2024-12-13T01:05:30.378077086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 670.737241ms"
Dec 13 01:05:30.379591 containerd[1462]: time="2024-12-13T01:05:30.379549978Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 664.453388ms"
Dec 13 01:05:30.488193 kubelet[2147]: E1213 01:05:30.488040 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="1.6s"
Dec 13 01:05:30.501049 kubelet[2147]: W1213 01:05:30.500944 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused
Dec 13 01:05:30.501049 kubelet[2147]: E1213 01:05:30.501019 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError"
Dec 13 01:05:30.598394 containerd[1462]: time="2024-12-13T01:05:30.598026389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:05:30.598394 containerd[1462]: time="2024-12-13T01:05:30.598082495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:05:30.598394 containerd[1462]: time="2024-12-13T01:05:30.598093465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:30.598394 containerd[1462]: time="2024-12-13T01:05:30.598176501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:30.603655 containerd[1462]: time="2024-12-13T01:05:30.602904437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:30.603655 containerd[1462]: time="2024-12-13T01:05:30.602982534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:30.603655 containerd[1462]: time="2024-12-13T01:05:30.603003814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:30.603655 containerd[1462]: time="2024-12-13T01:05:30.603131313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:30.603655 containerd[1462]: time="2024-12-13T01:05:30.602010441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:30.603655 containerd[1462]: time="2024-12-13T01:05:30.602063951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:30.603655 containerd[1462]: time="2024-12-13T01:05:30.602074842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:30.603655 containerd[1462]: time="2024-12-13T01:05:30.602145444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:30.683832 systemd[1]: Started cri-containerd-2f8998934d4a8bd265306e4322ebaf95ee3993c29ae89378db2e0e21a88d7eba.scope - libcontainer container 2f8998934d4a8bd265306e4322ebaf95ee3993c29ae89378db2e0e21a88d7eba. Dec 13 01:05:30.685851 systemd[1]: Started cri-containerd-a0e7bba66eccbe8c2401f7f31daf375a78fbce90b45092113230cd57aa1ffc4c.scope - libcontainer container a0e7bba66eccbe8c2401f7f31daf375a78fbce90b45092113230cd57aa1ffc4c. Dec 13 01:05:30.689681 systemd[1]: Started cri-containerd-c97664d492ff44ec306e8514d89d4d6850d301a0325d3326f1a28f0d6300aadf.scope - libcontainer container c97664d492ff44ec306e8514d89d4d6850d301a0325d3326f1a28f0d6300aadf. Dec 13 01:05:30.772171 containerd[1462]: time="2024-12-13T01:05:30.760265205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0e7bba66eccbe8c2401f7f31daf375a78fbce90b45092113230cd57aa1ffc4c\"" Dec 13 01:05:30.772171 containerd[1462]: time="2024-12-13T01:05:30.763392069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"c97664d492ff44ec306e8514d89d4d6850d301a0325d3326f1a28f0d6300aadf\"" Dec 13 01:05:30.772171 containerd[1462]: time="2024-12-13T01:05:30.768896362Z" level=info msg="CreateContainer within sandbox \"a0e7bba66eccbe8c2401f7f31daf375a78fbce90b45092113230cd57aa1ffc4c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:05:30.772171 containerd[1462]: time="2024-12-13T01:05:30.771832929Z" level=info msg="CreateContainer within sandbox \"c97664d492ff44ec306e8514d89d4d6850d301a0325d3326f1a28f0d6300aadf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:05:30.772801 kubelet[2147]: E1213 01:05:30.765441 2147 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:30.772801 kubelet[2147]: E1213 01:05:30.766746 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:30.804358 containerd[1462]: time="2024-12-13T01:05:30.804312529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f6b9c666f8b74d171b62beeac4abc43b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f8998934d4a8bd265306e4322ebaf95ee3993c29ae89378db2e0e21a88d7eba\"" Dec 13 01:05:30.805226 kubelet[2147]: E1213 01:05:30.805200 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:30.806694 containerd[1462]: time="2024-12-13T01:05:30.806666554Z" level=info msg="CreateContainer within sandbox \"2f8998934d4a8bd265306e4322ebaf95ee3993c29ae89378db2e0e21a88d7eba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:05:30.822783 kubelet[2147]: I1213 01:05:30.822731 2147 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:05:30.823098 kubelet[2147]: E1213 01:05:30.823066 2147 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Dec 13 01:05:30.984162 containerd[1462]: time="2024-12-13T01:05:30.984084115Z" level=info msg="CreateContainer within sandbox \"c97664d492ff44ec306e8514d89d4d6850d301a0325d3326f1a28f0d6300aadf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2976a91812b7237c8b7acb37cd2e5e4b66f7a4ef9a4c24925df64b6ec939f067\"" Dec 13 01:05:30.985048 containerd[1462]: 
time="2024-12-13T01:05:30.984995505Z" level=info msg="StartContainer for \"2976a91812b7237c8b7acb37cd2e5e4b66f7a4ef9a4c24925df64b6ec939f067\"" Dec 13 01:05:30.989446 containerd[1462]: time="2024-12-13T01:05:30.989390096Z" level=info msg="CreateContainer within sandbox \"a0e7bba66eccbe8c2401f7f31daf375a78fbce90b45092113230cd57aa1ffc4c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4fb62275c9b63a4ac50dca23c04fa5c120a3f8bd189537b5631871c82f674a29\"" Dec 13 01:05:30.990996 containerd[1462]: time="2024-12-13T01:05:30.989980513Z" level=info msg="StartContainer for \"4fb62275c9b63a4ac50dca23c04fa5c120a3f8bd189537b5631871c82f674a29\"" Dec 13 01:05:30.991465 containerd[1462]: time="2024-12-13T01:05:30.991419852Z" level=info msg="CreateContainer within sandbox \"2f8998934d4a8bd265306e4322ebaf95ee3993c29ae89378db2e0e21a88d7eba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26774053f1b90936f66455feb5f221e33260929cbcd69cd8b4bdddb74161bd9a\"" Dec 13 01:05:30.992003 containerd[1462]: time="2024-12-13T01:05:30.991930240Z" level=info msg="StartContainer for \"26774053f1b90936f66455feb5f221e33260929cbcd69cd8b4bdddb74161bd9a\"" Dec 13 01:05:31.017772 systemd[1]: Started cri-containerd-2976a91812b7237c8b7acb37cd2e5e4b66f7a4ef9a4c24925df64b6ec939f067.scope - libcontainer container 2976a91812b7237c8b7acb37cd2e5e4b66f7a4ef9a4c24925df64b6ec939f067. Dec 13 01:05:31.022489 systemd[1]: Started cri-containerd-26774053f1b90936f66455feb5f221e33260929cbcd69cd8b4bdddb74161bd9a.scope - libcontainer container 26774053f1b90936f66455feb5f221e33260929cbcd69cd8b4bdddb74161bd9a. Dec 13 01:05:31.023947 systemd[1]: Started cri-containerd-4fb62275c9b63a4ac50dca23c04fa5c120a3f8bd189537b5631871c82f674a29.scope - libcontainer container 4fb62275c9b63a4ac50dca23c04fa5c120a3f8bd189537b5631871c82f674a29. 
Dec 13 01:05:31.052341 kubelet[2147]: E1213 01:05:31.052284 2147 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:05:31.076549 containerd[1462]: time="2024-12-13T01:05:31.076496542Z" level=info msg="StartContainer for \"2976a91812b7237c8b7acb37cd2e5e4b66f7a4ef9a4c24925df64b6ec939f067\" returns successfully" Dec 13 01:05:31.076850 containerd[1462]: time="2024-12-13T01:05:31.076662894Z" level=info msg="StartContainer for \"26774053f1b90936f66455feb5f221e33260929cbcd69cd8b4bdddb74161bd9a\" returns successfully" Dec 13 01:05:31.087202 kubelet[2147]: E1213 01:05:31.087167 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:31.089509 containerd[1462]: time="2024-12-13T01:05:31.089180939Z" level=info msg="StartContainer for \"4fb62275c9b63a4ac50dca23c04fa5c120a3f8bd189537b5631871c82f674a29\" returns successfully" Dec 13 01:05:31.092629 kubelet[2147]: E1213 01:05:31.092418 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:31.095775 kubelet[2147]: E1213 01:05:31.095700 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:32.098547 kubelet[2147]: E1213 01:05:32.098515 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Dec 13 01:05:32.098958 kubelet[2147]: E1213 01:05:32.098684 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:32.392112 kubelet[2147]: E1213 01:05:32.391937 2147 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:05:32.424850 kubelet[2147]: I1213 01:05:32.424806 2147 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:05:32.544764 kubelet[2147]: I1213 01:05:32.544495 2147 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 01:05:32.544764 kubelet[2147]: E1213 01:05:32.544553 2147 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 13 01:05:32.551630 kubelet[2147]: E1213 01:05:32.551588 2147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:05:33.038195 kubelet[2147]: I1213 01:05:33.038149 2147 apiserver.go:52] "Watching apiserver" Dec 13 01:05:33.057111 kubelet[2147]: I1213 01:05:33.057084 2147 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:05:34.205917 systemd[1]: Reloading requested from client PID 2434 ('systemctl') (unit session-9.scope)... Dec 13 01:05:34.205933 systemd[1]: Reloading... Dec 13 01:05:34.298613 zram_generator::config[2473]: No configuration found. Dec 13 01:05:34.408500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:05:34.502404 systemd[1]: Reloading finished in 296 ms. 
Dec 13 01:05:34.550419 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:05:34.569271 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:05:34.569535 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:05:34.579833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:05:34.734919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:05:34.740236 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:05:34.784402 kubelet[2518]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:05:34.784402 kubelet[2518]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:05:34.784402 kubelet[2518]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:05:34.784402 kubelet[2518]: I1213 01:05:34.784104 2518 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:05:34.790407 kubelet[2518]: I1213 01:05:34.790375 2518 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:05:34.790407 kubelet[2518]: I1213 01:05:34.790397 2518 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:05:34.790602 kubelet[2518]: I1213 01:05:34.790571 2518 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:05:34.791870 kubelet[2518]: I1213 01:05:34.791846 2518 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:05:34.793675 kubelet[2518]: I1213 01:05:34.793632 2518 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:05:34.799451 kubelet[2518]: E1213 01:05:34.799399 2518 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:05:34.799451 kubelet[2518]: I1213 01:05:34.799447 2518 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:05:34.805621 kubelet[2518]: I1213 01:05:34.805590 2518 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:05:34.805810 kubelet[2518]: I1213 01:05:34.805788 2518 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:05:34.805989 kubelet[2518]: I1213 01:05:34.805945 2518 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:05:34.806167 kubelet[2518]: I1213 01:05:34.805984 2518 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Dec 13 01:05:34.806245 kubelet[2518]: I1213 01:05:34.806171 2518 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:05:34.806245 kubelet[2518]: I1213 01:05:34.806180 2518 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:05:34.806245 kubelet[2518]: I1213 01:05:34.806221 2518 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:05:34.806384 kubelet[2518]: I1213 01:05:34.806368 2518 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:05:34.806407 kubelet[2518]: I1213 01:05:34.806389 2518 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:05:34.806452 kubelet[2518]: I1213 01:05:34.806438 2518 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:05:34.806474 kubelet[2518]: I1213 01:05:34.806463 2518 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:05:34.807428 kubelet[2518]: I1213 01:05:34.807373 2518 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:05:34.807854 kubelet[2518]: I1213 01:05:34.807838 2518 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:05:34.810230 kubelet[2518]: I1213 01:05:34.808339 2518 server.go:1269] "Started kubelet" Dec 13 01:05:34.810619 kubelet[2518]: I1213 01:05:34.810342 2518 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:05:34.811105 kubelet[2518]: I1213 01:05:34.811071 2518 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:05:34.812897 kubelet[2518]: I1213 01:05:34.812855 2518 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:05:34.814014 kubelet[2518]: I1213 01:05:34.813977 2518 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:05:34.815126 
kubelet[2518]: I1213 01:05:34.815094 2518 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:05:34.819102 kubelet[2518]: I1213 01:05:34.819084 2518 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:05:34.821794 kubelet[2518]: I1213 01:05:34.821762 2518 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:05:34.821963 kubelet[2518]: I1213 01:05:34.821929 2518 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:05:34.825830 kubelet[2518]: I1213 01:05:34.825808 2518 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:05:34.826012 kubelet[2518]: I1213 01:05:34.825999 2518 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:05:34.826391 kubelet[2518]: E1213 01:05:34.826186 2518 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:05:34.826791 kubelet[2518]: I1213 01:05:34.826739 2518 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:05:34.827608 kubelet[2518]: I1213 01:05:34.827553 2518 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:05:34.832383 kubelet[2518]: E1213 01:05:34.832349 2518 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:05:34.843554 kubelet[2518]: I1213 01:05:34.843507 2518 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:05:34.845994 kubelet[2518]: I1213 01:05:34.845971 2518 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:05:34.846385 kubelet[2518]: I1213 01:05:34.846371 2518 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:05:34.846453 kubelet[2518]: I1213 01:05:34.846443 2518 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:05:34.846570 kubelet[2518]: E1213 01:05:34.846539 2518 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:05:34.874598 kubelet[2518]: I1213 01:05:34.874528 2518 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:05:34.874598 kubelet[2518]: I1213 01:05:34.874556 2518 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:05:34.874730 kubelet[2518]: I1213 01:05:34.874625 2518 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:05:34.874820 kubelet[2518]: I1213 01:05:34.874794 2518 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:05:34.874862 kubelet[2518]: I1213 01:05:34.874815 2518 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:05:34.874862 kubelet[2518]: I1213 01:05:34.874843 2518 policy_none.go:49] "None policy: Start" Dec 13 01:05:34.875544 kubelet[2518]: I1213 01:05:34.875516 2518 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:05:34.875615 kubelet[2518]: I1213 01:05:34.875547 2518 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:05:34.875726 kubelet[2518]: I1213 01:05:34.875702 2518 state_mem.go:75] "Updated machine memory state" Dec 13 01:05:34.886130 kubelet[2518]: I1213 01:05:34.886102 2518 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:05:34.886623 kubelet[2518]: I1213 01:05:34.886291 2518 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:05:34.886623 kubelet[2518]: I1213 01:05:34.886311 2518 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:05:34.886623 kubelet[2518]: I1213 01:05:34.886623 2518 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:05:34.992844 kubelet[2518]: I1213 01:05:34.992796 2518 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 01:05:35.002167 kubelet[2518]: I1213 01:05:35.002130 2518 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Dec 13 01:05:35.002254 kubelet[2518]: I1213 01:05:35.002239 2518 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 01:05:35.027464 kubelet[2518]: I1213 01:05:35.027412 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:05:35.027464 kubelet[2518]: I1213 01:05:35.027452 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6b9c666f8b74d171b62beeac4abc43b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6b9c666f8b74d171b62beeac4abc43b\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:05:35.027464 kubelet[2518]: I1213 01:05:35.027473 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:05:35.027732 kubelet[2518]: I1213 01:05:35.027488 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:05:35.027732 kubelet[2518]: I1213 01:05:35.027504 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:05:35.027732 kubelet[2518]: I1213 01:05:35.027629 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6b9c666f8b74d171b62beeac4abc43b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6b9c666f8b74d171b62beeac4abc43b\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:05:35.027732 kubelet[2518]: I1213 01:05:35.027681 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6b9c666f8b74d171b62beeac4abc43b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f6b9c666f8b74d171b62beeac4abc43b\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:05:35.027732 kubelet[2518]: I1213 01:05:35.027713 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:05:35.027853 kubelet[2518]: I1213 01:05:35.027740 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:05:35.214855 sudo[2554]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:05:35.215338 sudo[2554]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:05:35.256518 kubelet[2518]: E1213 01:05:35.256455 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:35.257939 kubelet[2518]: E1213 01:05:35.257608 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:35.257939 kubelet[2518]: E1213 01:05:35.257813 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:35.747312 sudo[2554]: pam_unix(sudo:session): session closed for user root Dec 13 01:05:35.807331 kubelet[2518]: I1213 01:05:35.807247 2518 apiserver.go:52] "Watching apiserver" Dec 13 01:05:35.827243 kubelet[2518]: I1213 01:05:35.827189 2518 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:05:35.859226 kubelet[2518]: E1213 01:05:35.859181 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:35.859763 kubelet[2518]: E1213 01:05:35.859726 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Dec 13 01:05:35.880875 kubelet[2518]: E1213 01:05:35.880826 2518 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:05:35.881192 kubelet[2518]: E1213 01:05:35.881117 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:35.961658 kubelet[2518]: I1213 01:05:35.961588 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.961546767 podStartE2EDuration="1.961546767s" podCreationTimestamp="2024-12-13 01:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:05:35.948873726 +0000 UTC m=+1.204843970" watchObservedRunningTime="2024-12-13 01:05:35.961546767 +0000 UTC m=+1.217517011" Dec 13 01:05:35.984597 kubelet[2518]: I1213 01:05:35.983057 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.983034389 podStartE2EDuration="1.983034389s" podCreationTimestamp="2024-12-13 01:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:05:35.963902549 +0000 UTC m=+1.219872803" watchObservedRunningTime="2024-12-13 01:05:35.983034389 +0000 UTC m=+1.239004634" Dec 13 01:05:36.859854 kubelet[2518]: E1213 01:05:36.859806 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:36.859854 kubelet[2518]: E1213 01:05:36.859840 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:37.580912 sudo[1656]: pam_unix(sudo:session): session closed for user root Dec 13 01:05:37.583717 sshd[1653]: pam_unix(sshd:session): session closed for user core Dec 13 01:05:37.589140 systemd[1]: sshd@8-10.0.0.22:22-10.0.0.1:40284.service: Deactivated successfully. Dec 13 01:05:37.592127 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:05:37.592408 systemd[1]: session-9.scope: Consumed 4.736s CPU time, 157.5M memory peak, 0B memory swap peak. Dec 13 01:05:37.593285 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:05:37.594386 systemd-logind[1445]: Removed session 9. Dec 13 01:05:37.757737 kubelet[2518]: E1213 01:05:37.757682 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:37.861633 kubelet[2518]: E1213 01:05:37.861556 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:38.862806 kubelet[2518]: E1213 01:05:38.862753 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:39.514595 kubelet[2518]: I1213 01:05:39.514549 2518 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:05:39.514910 containerd[1462]: time="2024-12-13T01:05:39.514871067Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:05:39.515290 kubelet[2518]: I1213 01:05:39.515053 2518 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:05:40.036829 kubelet[2518]: I1213 01:05:40.036737 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.036717025 podStartE2EDuration="6.036717025s" podCreationTimestamp="2024-12-13 01:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:05:35.987511862 +0000 UTC m=+1.243482106" watchObservedRunningTime="2024-12-13 01:05:40.036717025 +0000 UTC m=+5.292687269" Dec 13 01:05:40.051245 systemd[1]: Created slice kubepods-besteffort-pod1da8e76f_8fda_417b_bea9_e8a916350d52.slice - libcontainer container kubepods-besteffort-pod1da8e76f_8fda_417b_bea9_e8a916350d52.slice. Dec 13 01:05:40.066486 systemd[1]: Created slice kubepods-burstable-podbe21cd31_2c6b_4e29_8fd5_8ae01c860506.slice - libcontainer container kubepods-burstable-podbe21cd31_2c6b_4e29_8fd5_8ae01c860506.slice. 
Dec 13 01:05:40.159029 kubelet[2518]: I1213 01:05:40.158978 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-etc-cni-netd\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159029 kubelet[2518]: I1213 01:05:40.159023 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-host-proc-sys-net\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159208 kubelet[2518]: I1213 01:05:40.159076 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1da8e76f-8fda-417b-bea9-e8a916350d52-kube-proxy\") pod \"kube-proxy-l7zh2\" (UID: \"1da8e76f-8fda-417b-bea9-e8a916350d52\") " pod="kube-system/kube-proxy-l7zh2" Dec 13 01:05:40.159208 kubelet[2518]: I1213 01:05:40.159094 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-bpf-maps\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159208 kubelet[2518]: I1213 01:05:40.159112 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-cgroup\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159208 kubelet[2518]: I1213 01:05:40.159128 2518 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-lib-modules\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159208 kubelet[2518]: I1213 01:05:40.159143 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1da8e76f-8fda-417b-bea9-e8a916350d52-xtables-lock\") pod \"kube-proxy-l7zh2\" (UID: \"1da8e76f-8fda-417b-bea9-e8a916350d52\") " pod="kube-system/kube-proxy-l7zh2" Dec 13 01:05:40.159208 kubelet[2518]: I1213 01:05:40.159158 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1da8e76f-8fda-417b-bea9-e8a916350d52-lib-modules\") pod \"kube-proxy-l7zh2\" (UID: \"1da8e76f-8fda-417b-bea9-e8a916350d52\") " pod="kube-system/kube-proxy-l7zh2" Dec 13 01:05:40.159404 kubelet[2518]: I1213 01:05:40.159179 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rz87\" (UniqueName: \"kubernetes.io/projected/1da8e76f-8fda-417b-bea9-e8a916350d52-kube-api-access-4rz87\") pod \"kube-proxy-l7zh2\" (UID: \"1da8e76f-8fda-417b-bea9-e8a916350d52\") " pod="kube-system/kube-proxy-l7zh2" Dec 13 01:05:40.159404 kubelet[2518]: I1213 01:05:40.159196 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be21cd31-2c6b-4e29-8fd5-8ae01c860506-clustermesh-secrets\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159404 kubelet[2518]: I1213 01:05:40.159247 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/be21cd31-2c6b-4e29-8fd5-8ae01c860506-hubble-tls\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159404 kubelet[2518]: I1213 01:05:40.159317 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhhwd\" (UniqueName: \"kubernetes.io/projected/be21cd31-2c6b-4e29-8fd5-8ae01c860506-kube-api-access-lhhwd\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159404 kubelet[2518]: I1213 01:05:40.159335 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-hostproc\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159404 kubelet[2518]: I1213 01:05:40.159350 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-run\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159647 kubelet[2518]: I1213 01:05:40.159370 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cni-path\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159647 kubelet[2518]: I1213 01:05:40.159399 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-config-path\") pod \"cilium-9smlw\" (UID: 
\"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159647 kubelet[2518]: I1213 01:05:40.159421 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-host-proc-sys-kernel\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.159647 kubelet[2518]: I1213 01:05:40.159439 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-xtables-lock\") pod \"cilium-9smlw\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") " pod="kube-system/cilium-9smlw" Dec 13 01:05:40.363770 kubelet[2518]: E1213 01:05:40.363729 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:40.364413 containerd[1462]: time="2024-12-13T01:05:40.364378297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7zh2,Uid:1da8e76f-8fda-417b-bea9-e8a916350d52,Namespace:kube-system,Attempt:0,}" Dec 13 01:05:40.368861 kubelet[2518]: E1213 01:05:40.368839 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:40.369347 containerd[1462]: time="2024-12-13T01:05:40.369293876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9smlw,Uid:be21cd31-2c6b-4e29-8fd5-8ae01c860506,Namespace:kube-system,Attempt:0,}" Dec 13 01:05:40.493216 systemd[1]: Created slice kubepods-besteffort-pod59efbc1a_670f_4b59_a713_9a29c54a33d1.slice - libcontainer container kubepods-besteffort-pod59efbc1a_670f_4b59_a713_9a29c54a33d1.slice. 
Dec 13 01:05:40.538036 containerd[1462]: time="2024-12-13T01:05:40.537073244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:40.538036 containerd[1462]: time="2024-12-13T01:05:40.537160007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:40.538036 containerd[1462]: time="2024-12-13T01:05:40.537178481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:40.538036 containerd[1462]: time="2024-12-13T01:05:40.537312954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:40.538634 containerd[1462]: time="2024-12-13T01:05:40.537991780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:40.538944 containerd[1462]: time="2024-12-13T01:05:40.538913663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:40.539087 containerd[1462]: time="2024-12-13T01:05:40.539064276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:40.543647 containerd[1462]: time="2024-12-13T01:05:40.541616864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:40.569878 kubelet[2518]: I1213 01:05:40.569740 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59efbc1a-670f-4b59-a713-9a29c54a33d1-cilium-config-path\") pod \"cilium-operator-5d85765b45-2n4k8\" (UID: \"59efbc1a-670f-4b59-a713-9a29c54a33d1\") " pod="kube-system/cilium-operator-5d85765b45-2n4k8" Dec 13 01:05:40.569878 kubelet[2518]: I1213 01:05:40.569792 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz486\" (UniqueName: \"kubernetes.io/projected/59efbc1a-670f-4b59-a713-9a29c54a33d1-kube-api-access-kz486\") pod \"cilium-operator-5d85765b45-2n4k8\" (UID: \"59efbc1a-670f-4b59-a713-9a29c54a33d1\") " pod="kube-system/cilium-operator-5d85765b45-2n4k8" Dec 13 01:05:40.575753 systemd[1]: Started cri-containerd-1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662.scope - libcontainer container 1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662. Dec 13 01:05:40.578313 systemd[1]: Started cri-containerd-74e42729a6c27dcbf7abf678900b8313552f7bf89d23166f3958b4f10517f01b.scope - libcontainer container 74e42729a6c27dcbf7abf678900b8313552f7bf89d23166f3958b4f10517f01b. 
Dec 13 01:05:40.606206 containerd[1462]: time="2024-12-13T01:05:40.606152235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9smlw,Uid:be21cd31-2c6b-4e29-8fd5-8ae01c860506,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\"" Dec 13 01:05:40.607268 kubelet[2518]: E1213 01:05:40.607240 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:40.609660 containerd[1462]: time="2024-12-13T01:05:40.609450575Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:05:40.611395 containerd[1462]: time="2024-12-13T01:05:40.611297145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7zh2,Uid:1da8e76f-8fda-417b-bea9-e8a916350d52,Namespace:kube-system,Attempt:0,} returns sandbox id \"74e42729a6c27dcbf7abf678900b8313552f7bf89d23166f3958b4f10517f01b\"" Dec 13 01:05:40.612512 kubelet[2518]: E1213 01:05:40.612488 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:40.614960 containerd[1462]: time="2024-12-13T01:05:40.614798496Z" level=info msg="CreateContainer within sandbox \"74e42729a6c27dcbf7abf678900b8313552f7bf89d23166f3958b4f10517f01b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:05:40.636457 containerd[1462]: time="2024-12-13T01:05:40.636408905Z" level=info msg="CreateContainer within sandbox \"74e42729a6c27dcbf7abf678900b8313552f7bf89d23166f3958b4f10517f01b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb097a6e12dcf7fc5e06d39f1370ae5b91614c84c7ee75499f7c3859c641cf98\"" Dec 13 01:05:40.637156 containerd[1462]: time="2024-12-13T01:05:40.637036545Z" 
level=info msg="StartContainer for \"eb097a6e12dcf7fc5e06d39f1370ae5b91614c84c7ee75499f7c3859c641cf98\"" Dec 13 01:05:40.664728 systemd[1]: Started cri-containerd-eb097a6e12dcf7fc5e06d39f1370ae5b91614c84c7ee75499f7c3859c641cf98.scope - libcontainer container eb097a6e12dcf7fc5e06d39f1370ae5b91614c84c7ee75499f7c3859c641cf98. Dec 13 01:05:40.698076 containerd[1462]: time="2024-12-13T01:05:40.698027113Z" level=info msg="StartContainer for \"eb097a6e12dcf7fc5e06d39f1370ae5b91614c84c7ee75499f7c3859c641cf98\" returns successfully" Dec 13 01:05:40.805835 kubelet[2518]: E1213 01:05:40.805791 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:40.807029 containerd[1462]: time="2024-12-13T01:05:40.806940005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2n4k8,Uid:59efbc1a-670f-4b59-a713-9a29c54a33d1,Namespace:kube-system,Attempt:0,}" Dec 13 01:05:40.836724 containerd[1462]: time="2024-12-13T01:05:40.836542967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:40.837039 containerd[1462]: time="2024-12-13T01:05:40.836862938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:40.837039 containerd[1462]: time="2024-12-13T01:05:40.836876203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:40.837039 containerd[1462]: time="2024-12-13T01:05:40.836975038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:40.862809 systemd[1]: Started cri-containerd-2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243.scope - libcontainer container 2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243. Dec 13 01:05:40.868346 kubelet[2518]: E1213 01:05:40.868035 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:40.877337 kubelet[2518]: I1213 01:05:40.877120 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l7zh2" podStartSLOduration=0.877104216 podStartE2EDuration="877.104216ms" podCreationTimestamp="2024-12-13 01:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:05:40.876503055 +0000 UTC m=+6.132473289" watchObservedRunningTime="2024-12-13 01:05:40.877104216 +0000 UTC m=+6.133074460" Dec 13 01:05:40.906021 containerd[1462]: time="2024-12-13T01:05:40.905968148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2n4k8,Uid:59efbc1a-670f-4b59-a713-9a29c54a33d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243\"" Dec 13 01:05:40.906992 kubelet[2518]: E1213 01:05:40.906958 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:45.882434 update_engine[1451]: I20241213 01:05:45.882342 1451 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:05:46.053618 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2897) Dec 13 01:05:46.109617 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2901) Dec 13 01:05:46.140024 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2901) Dec 13 01:05:46.434692 kubelet[2518]: E1213 01:05:46.434639 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:47.769966 kubelet[2518]: E1213 01:05:47.769908 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:48.159975 kubelet[2518]: E1213 01:05:48.159926 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:48.883306 kubelet[2518]: E1213 01:05:48.883271 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:48.936393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128389237.mount: Deactivated successfully. 
Dec 13 01:05:52.696073 containerd[1462]: time="2024-12-13T01:05:52.696005456Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:52.696786 containerd[1462]: time="2024-12-13T01:05:52.696738812Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735307" Dec 13 01:05:52.698417 containerd[1462]: time="2024-12-13T01:05:52.698384461Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:52.699926 containerd[1462]: time="2024-12-13T01:05:52.699892553Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.090402764s" Dec 13 01:05:52.699964 containerd[1462]: time="2024-12-13T01:05:52.699923972Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:05:52.705712 containerd[1462]: time="2024-12-13T01:05:52.705683956Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:05:52.718554 containerd[1462]: time="2024-12-13T01:05:52.718508241Z" level=info msg="CreateContainer within sandbox \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:05:52.731941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1464760833.mount: Deactivated successfully. Dec 13 01:05:52.733210 containerd[1462]: time="2024-12-13T01:05:52.733163973Z" level=info msg="CreateContainer within sandbox \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a\"" Dec 13 01:05:52.735367 containerd[1462]: time="2024-12-13T01:05:52.735308039Z" level=info msg="StartContainer for \"61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a\"" Dec 13 01:05:52.768746 systemd[1]: Started cri-containerd-61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a.scope - libcontainer container 61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a. Dec 13 01:05:52.799281 containerd[1462]: time="2024-12-13T01:05:52.799225612Z" level=info msg="StartContainer for \"61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a\" returns successfully" Dec 13 01:05:52.809967 systemd[1]: cri-containerd-61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a.scope: Deactivated successfully. 
Dec 13 01:05:53.130980 kubelet[2518]: E1213 01:05:53.130925 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:53.318499 containerd[1462]: time="2024-12-13T01:05:53.315976866Z" level=info msg="shim disconnected" id=61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a namespace=k8s.io Dec 13 01:05:53.318499 containerd[1462]: time="2024-12-13T01:05:53.318480306Z" level=warning msg="cleaning up after shim disconnected" id=61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a namespace=k8s.io Dec 13 01:05:53.318499 containerd[1462]: time="2024-12-13T01:05:53.318489654Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:05:53.729197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a-rootfs.mount: Deactivated successfully. Dec 13 01:05:54.134072 kubelet[2518]: E1213 01:05:54.134031 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:54.137141 containerd[1462]: time="2024-12-13T01:05:54.137091571Z" level=info msg="CreateContainer within sandbox \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:05:54.152377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount805486853.mount: Deactivated successfully. 
Dec 13 01:05:54.156228 containerd[1462]: time="2024-12-13T01:05:54.156180633Z" level=info msg="CreateContainer within sandbox \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516\"" Dec 13 01:05:54.156894 containerd[1462]: time="2024-12-13T01:05:54.156851241Z" level=info msg="StartContainer for \"0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516\"" Dec 13 01:05:54.196760 systemd[1]: Started cri-containerd-0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516.scope - libcontainer container 0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516. Dec 13 01:05:54.223020 containerd[1462]: time="2024-12-13T01:05:54.222957267Z" level=info msg="StartContainer for \"0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516\" returns successfully" Dec 13 01:05:54.234455 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:05:54.234715 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:05:54.234799 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:05:54.239858 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:05:54.240046 systemd[1]: cri-containerd-0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516.scope: Deactivated successfully. Dec 13 01:05:54.263520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 01:05:54.267984 containerd[1462]: time="2024-12-13T01:05:54.267921739Z" level=info msg="shim disconnected" id=0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516 namespace=k8s.io Dec 13 01:05:54.267984 containerd[1462]: time="2024-12-13T01:05:54.267982683Z" level=warning msg="cleaning up after shim disconnected" id=0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516 namespace=k8s.io Dec 13 01:05:54.268108 containerd[1462]: time="2024-12-13T01:05:54.267993754Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:05:54.729273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516-rootfs.mount: Deactivated successfully. Dec 13 01:05:55.136903 kubelet[2518]: E1213 01:05:55.136865 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:05:55.138338 containerd[1462]: time="2024-12-13T01:05:55.138264563Z" level=info msg="CreateContainer within sandbox \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:05:55.159420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794068028.mount: Deactivated successfully. 
Dec 13 01:05:55.167951 containerd[1462]: time="2024-12-13T01:05:55.167897113Z" level=info msg="CreateContainer within sandbox \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182\"" Dec 13 01:05:55.168567 containerd[1462]: time="2024-12-13T01:05:55.168416958Z" level=info msg="StartContainer for \"3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182\"" Dec 13 01:05:55.198761 systemd[1]: Started cri-containerd-3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182.scope - libcontainer container 3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182. Dec 13 01:05:55.228995 containerd[1462]: time="2024-12-13T01:05:55.228959595Z" level=info msg="StartContainer for \"3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182\" returns successfully" Dec 13 01:05:55.229770 systemd[1]: cri-containerd-3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182.scope: Deactivated successfully. Dec 13 01:05:55.300261 containerd[1462]: time="2024-12-13T01:05:55.300195552Z" level=info msg="shim disconnected" id=3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182 namespace=k8s.io Dec 13 01:05:55.300261 containerd[1462]: time="2024-12-13T01:05:55.300252519Z" level=warning msg="cleaning up after shim disconnected" id=3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182 namespace=k8s.io Dec 13 01:05:55.300261 containerd[1462]: time="2024-12-13T01:05:55.300261847Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:05:55.729364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182-rootfs.mount: Deactivated successfully. 
Dec 13 01:05:56.140119 kubelet[2518]: E1213 01:05:56.140087 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:05:56.143020 containerd[1462]: time="2024-12-13T01:05:56.142825439Z" level=info msg="CreateContainer within sandbox \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:05:56.184990 containerd[1462]: time="2024-12-13T01:05:56.184937198Z" level=info msg="CreateContainer within sandbox \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a\""
Dec 13 01:05:56.185525 containerd[1462]: time="2024-12-13T01:05:56.185471471Z" level=info msg="StartContainer for \"246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a\""
Dec 13 01:05:56.219813 systemd[1]: Started cri-containerd-246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a.scope - libcontainer container 246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a.
Dec 13 01:05:56.245246 systemd[1]: cri-containerd-246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a.scope: Deactivated successfully.
Dec 13 01:05:56.247829 containerd[1462]: time="2024-12-13T01:05:56.247777831Z" level=info msg="StartContainer for \"246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a\" returns successfully"
Dec 13 01:05:56.427255 containerd[1462]: time="2024-12-13T01:05:56.427091399Z" level=info msg="shim disconnected" id=246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a namespace=k8s.io
Dec 13 01:05:56.427255 containerd[1462]: time="2024-12-13T01:05:56.427143997Z" level=warning msg="cleaning up after shim disconnected" id=246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a namespace=k8s.io
Dec 13 01:05:56.427255 containerd[1462]: time="2024-12-13T01:05:56.427152203Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:05:56.729621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a-rootfs.mount: Deactivated successfully.
Dec 13 01:05:57.143743 kubelet[2518]: E1213 01:05:57.143693 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:05:57.145194 containerd[1462]: time="2024-12-13T01:05:57.145154715Z" level=info msg="CreateContainer within sandbox \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:05:57.168446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268546903.mount: Deactivated successfully.
Dec 13 01:05:57.170449 containerd[1462]: time="2024-12-13T01:05:57.170406537Z" level=info msg="CreateContainer within sandbox \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb\""
Dec 13 01:05:57.171062 containerd[1462]: time="2024-12-13T01:05:57.170999280Z" level=info msg="StartContainer for \"6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb\""
Dec 13 01:05:57.205748 systemd[1]: Started cri-containerd-6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb.scope - libcontainer container 6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb.
Dec 13 01:05:57.236965 containerd[1462]: time="2024-12-13T01:05:57.236893658Z" level=info msg="StartContainer for \"6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb\" returns successfully"
Dec 13 01:05:57.363991 kubelet[2518]: I1213 01:05:57.363952 2518 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 01:05:57.527748 systemd[1]: Created slice kubepods-burstable-pod8955ac86_f795_4276_9132_4787d5c8c838.slice - libcontainer container kubepods-burstable-pod8955ac86_f795_4276_9132_4787d5c8c838.slice.
Dec 13 01:05:57.537376 systemd[1]: Created slice kubepods-burstable-pod3dcb8677_a586_4771_9f72_9a18c54801d2.slice - libcontainer container kubepods-burstable-pod3dcb8677_a586_4771_9f72_9a18c54801d2.slice.
Dec 13 01:05:57.587998 kubelet[2518]: I1213 01:05:57.587942 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w58l\" (UniqueName: \"kubernetes.io/projected/8955ac86-f795-4276-9132-4787d5c8c838-kube-api-access-4w58l\") pod \"coredns-6f6b679f8f-6jth2\" (UID: \"8955ac86-f795-4276-9132-4787d5c8c838\") " pod="kube-system/coredns-6f6b679f8f-6jth2"
Dec 13 01:05:57.587998 kubelet[2518]: I1213 01:05:57.587992 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dcb8677-a586-4771-9f72-9a18c54801d2-config-volume\") pod \"coredns-6f6b679f8f-tdqth\" (UID: \"3dcb8677-a586-4771-9f72-9a18c54801d2\") " pod="kube-system/coredns-6f6b679f8f-tdqth"
Dec 13 01:05:57.588164 kubelet[2518]: I1213 01:05:57.588018 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8955ac86-f795-4276-9132-4787d5c8c838-config-volume\") pod \"coredns-6f6b679f8f-6jth2\" (UID: \"8955ac86-f795-4276-9132-4787d5c8c838\") " pod="kube-system/coredns-6f6b679f8f-6jth2"
Dec 13 01:05:57.588164 kubelet[2518]: I1213 01:05:57.588040 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gksrc\" (UniqueName: \"kubernetes.io/projected/3dcb8677-a586-4771-9f72-9a18c54801d2-kube-api-access-gksrc\") pod \"coredns-6f6b679f8f-tdqth\" (UID: \"3dcb8677-a586-4771-9f72-9a18c54801d2\") " pod="kube-system/coredns-6f6b679f8f-tdqth"
Dec 13 01:05:57.731858 systemd[1]: run-containerd-runc-k8s.io-6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb-runc.iLWNZp.mount: Deactivated successfully.
Dec 13 01:05:57.833773 kubelet[2518]: E1213 01:05:57.833615 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:05:57.834731 containerd[1462]: time="2024-12-13T01:05:57.834644020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6jth2,Uid:8955ac86-f795-4276-9132-4787d5c8c838,Namespace:kube-system,Attempt:0,}"
Dec 13 01:05:57.839882 kubelet[2518]: E1213 01:05:57.839846 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:05:57.840321 containerd[1462]: time="2024-12-13T01:05:57.840275779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tdqth,Uid:3dcb8677-a586-4771-9f72-9a18c54801d2,Namespace:kube-system,Attempt:0,}"
Dec 13 01:05:58.168562 kubelet[2518]: E1213 01:05:58.168507 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:05:59.168602 kubelet[2518]: E1213 01:05:59.168560 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:00.170110 kubelet[2518]: E1213 01:06:00.170077 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:00.343414 containerd[1462]: time="2024-12-13T01:06:00.343347982Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:06:00.344082 containerd[1462]: time="2024-12-13T01:06:00.344015555Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907201"
Dec 13 01:06:00.345298 containerd[1462]: time="2024-12-13T01:06:00.345265179Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:06:00.346591 containerd[1462]: time="2024-12-13T01:06:00.346541073Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.640822232s"
Dec 13 01:06:00.346627 containerd[1462]: time="2024-12-13T01:06:00.346594684Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 01:06:00.352342 containerd[1462]: time="2024-12-13T01:06:00.352307224Z" level=info msg="CreateContainer within sandbox \"2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 01:06:00.368496 containerd[1462]: time="2024-12-13T01:06:00.368443795Z" level=info msg="CreateContainer within sandbox \"2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08\""
Dec 13 01:06:00.369139 containerd[1462]: time="2024-12-13T01:06:00.369095458Z" level=info msg="StartContainer for \"6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08\""
Dec 13 01:06:00.424707 systemd[1]: Started cri-containerd-6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08.scope - libcontainer container 6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08.
Dec 13 01:06:00.465139 containerd[1462]: time="2024-12-13T01:06:00.465092418Z" level=info msg="StartContainer for \"6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08\" returns successfully"
Dec 13 01:06:01.190182 kubelet[2518]: E1213 01:06:01.190140 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:01.190182 kubelet[2518]: E1213 01:06:01.190192 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:01.232673 kubelet[2518]: I1213 01:06:01.232406 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9smlw" podStartSLOduration=9.135676429 podStartE2EDuration="21.232389462s" podCreationTimestamp="2024-12-13 01:05:40 +0000 UTC" firstStartedPulling="2024-12-13 01:05:40.608810471 +0000 UTC m=+5.864780715" lastFinishedPulling="2024-12-13 01:05:52.705523504 +0000 UTC m=+17.961493748" observedRunningTime="2024-12-13 01:05:58.190139347 +0000 UTC m=+23.446109611" watchObservedRunningTime="2024-12-13 01:06:01.232389462 +0000 UTC m=+26.488359706"
Dec 13 01:06:01.232673 kubelet[2518]: I1213 01:06:01.232549 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2n4k8" podStartSLOduration=1.792899755 podStartE2EDuration="21.232545765s" podCreationTimestamp="2024-12-13 01:05:40 +0000 UTC" firstStartedPulling="2024-12-13 01:05:40.907748304 +0000 UTC m=+6.163718548" lastFinishedPulling="2024-12-13 01:06:00.347394314 +0000 UTC m=+25.603364558" observedRunningTime="2024-12-13 01:06:01.232347964 +0000 UTC m=+26.488318208" watchObservedRunningTime="2024-12-13 01:06:01.232545765 +0000 UTC m=+26.488516009"
Dec 13 01:06:02.175174 kubelet[2518]: E1213 01:06:02.175139 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:03.270170 systemd-networkd[1399]: cilium_host: Link UP
Dec 13 01:06:03.270395 systemd-networkd[1399]: cilium_net: Link UP
Dec 13 01:06:03.272463 systemd-networkd[1399]: cilium_net: Gained carrier
Dec 13 01:06:03.272775 systemd-networkd[1399]: cilium_host: Gained carrier
Dec 13 01:06:03.303989 systemd[1]: Started sshd@9-10.0.0.22:22-10.0.0.1:40314.service - OpenSSH per-connection server daemon (10.0.0.1:40314).
Dec 13 01:06:03.344077 sshd[3397]: Accepted publickey for core from 10.0.0.1 port 40314 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:03.345807 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:03.350099 systemd-logind[1445]: New session 10 of user core.
Dec 13 01:06:03.355701 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:06:03.382541 systemd-networkd[1399]: cilium_vxlan: Link UP
Dec 13 01:06:03.382549 systemd-networkd[1399]: cilium_vxlan: Gained carrier
Dec 13 01:06:03.481780 sshd[3397]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:03.486516 systemd[1]: sshd@9-10.0.0.22:22-10.0.0.1:40314.service: Deactivated successfully.
Dec 13 01:06:03.489200 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:06:03.489914 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:06:03.491084 systemd-logind[1445]: Removed session 10.
Dec 13 01:06:03.603600 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:06:03.689748 systemd-networkd[1399]: cilium_host: Gained IPv6LL
Dec 13 01:06:03.826718 systemd-networkd[1399]: cilium_net: Gained IPv6LL
Dec 13 01:06:04.261538 systemd-networkd[1399]: lxc_health: Link UP
Dec 13 01:06:04.275748 systemd-networkd[1399]: lxc_health: Gained carrier
Dec 13 01:06:04.371195 kubelet[2518]: E1213 01:06:04.371142 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:04.429845 systemd-networkd[1399]: lxc4ef9e5c3859d: Link UP
Dec 13 01:06:04.434363 systemd-networkd[1399]: lxc94d73b1ec728: Link UP
Dec 13 01:06:04.445620 kernel: eth0: renamed from tmp1a46b
Dec 13 01:06:04.454603 kernel: eth0: renamed from tmpe79df
Dec 13 01:06:04.460692 systemd-networkd[1399]: lxc94d73b1ec728: Gained carrier
Dec 13 01:06:04.460924 systemd-networkd[1399]: lxc4ef9e5c3859d: Gained carrier
Dec 13 01:06:04.595278 systemd-networkd[1399]: cilium_vxlan: Gained IPv6LL
Dec 13 01:06:05.180186 kubelet[2518]: E1213 01:06:05.180145 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:05.745881 systemd-networkd[1399]: lxc94d73b1ec728: Gained IPv6LL
Dec 13 01:06:05.873753 systemd-networkd[1399]: lxc4ef9e5c3859d: Gained IPv6LL
Dec 13 01:06:06.257771 systemd-networkd[1399]: lxc_health: Gained IPv6LL
Dec 13 01:06:07.897384 containerd[1462]: time="2024-12-13T01:06:07.897288668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:06:07.897867 containerd[1462]: time="2024-12-13T01:06:07.897358799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:06:07.898433 containerd[1462]: time="2024-12-13T01:06:07.898315373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:06:07.898433 containerd[1462]: time="2024-12-13T01:06:07.898396425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:06:07.913242 containerd[1462]: time="2024-12-13T01:06:07.912945641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:06:07.913242 containerd[1462]: time="2024-12-13T01:06:07.913005594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:06:07.913242 containerd[1462]: time="2024-12-13T01:06:07.913028306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:06:07.913242 containerd[1462]: time="2024-12-13T01:06:07.913114468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:06:07.928741 systemd[1]: Started cri-containerd-e79df3092187c6c50004c72376c2083c64f3bb06543349d1de1905d967e65b40.scope - libcontainer container e79df3092187c6c50004c72376c2083c64f3bb06543349d1de1905d967e65b40.
Dec 13 01:06:07.933855 systemd[1]: Started cri-containerd-1a46b20dfbe9fc3cdd3483194e1ce7b4b9656c3ea375cc6d063f28cda98813c3.scope - libcontainer container 1a46b20dfbe9fc3cdd3483194e1ce7b4b9656c3ea375cc6d063f28cda98813c3.
Dec 13 01:06:07.942162 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:06:07.949860 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:06:07.970861 containerd[1462]: time="2024-12-13T01:06:07.970746732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6jth2,Uid:8955ac86-f795-4276-9132-4787d5c8c838,Namespace:kube-system,Attempt:0,} returns sandbox id \"e79df3092187c6c50004c72376c2083c64f3bb06543349d1de1905d967e65b40\""
Dec 13 01:06:07.971616 kubelet[2518]: E1213 01:06:07.971594 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:07.975183 containerd[1462]: time="2024-12-13T01:06:07.975133662Z" level=info msg="CreateContainer within sandbox \"e79df3092187c6c50004c72376c2083c64f3bb06543349d1de1905d967e65b40\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:06:07.981470 containerd[1462]: time="2024-12-13T01:06:07.981424525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tdqth,Uid:3dcb8677-a586-4771-9f72-9a18c54801d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a46b20dfbe9fc3cdd3483194e1ce7b4b9656c3ea375cc6d063f28cda98813c3\""
Dec 13 01:06:07.982251 kubelet[2518]: E1213 01:06:07.982215 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:07.984295 containerd[1462]: time="2024-12-13T01:06:07.984246098Z" level=info msg="CreateContainer within sandbox \"1a46b20dfbe9fc3cdd3483194e1ce7b4b9656c3ea375cc6d063f28cda98813c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:06:07.997456 containerd[1462]: time="2024-12-13T01:06:07.997403332Z" level=info msg="CreateContainer within sandbox \"e79df3092187c6c50004c72376c2083c64f3bb06543349d1de1905d967e65b40\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"124ff4ede60f5f8c419667acb90bdf8537904eb19da6971461c0beb8099b0d1a\""
Dec 13 01:06:07.997872 containerd[1462]: time="2024-12-13T01:06:07.997816407Z" level=info msg="StartContainer for \"124ff4ede60f5f8c419667acb90bdf8537904eb19da6971461c0beb8099b0d1a\""
Dec 13 01:06:08.004485 containerd[1462]: time="2024-12-13T01:06:08.004433822Z" level=info msg="CreateContainer within sandbox \"1a46b20dfbe9fc3cdd3483194e1ce7b4b9656c3ea375cc6d063f28cda98813c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1867f5aea386c2b094128ea8e47ddd0c71313036ca3ec89a8c6c06415ee9cc5d\""
Dec 13 01:06:08.005152 containerd[1462]: time="2024-12-13T01:06:08.005113587Z" level=info msg="StartContainer for \"1867f5aea386c2b094128ea8e47ddd0c71313036ca3ec89a8c6c06415ee9cc5d\""
Dec 13 01:06:08.025862 systemd[1]: Started cri-containerd-124ff4ede60f5f8c419667acb90bdf8537904eb19da6971461c0beb8099b0d1a.scope - libcontainer container 124ff4ede60f5f8c419667acb90bdf8537904eb19da6971461c0beb8099b0d1a.
Dec 13 01:06:08.047785 systemd[1]: Started cri-containerd-1867f5aea386c2b094128ea8e47ddd0c71313036ca3ec89a8c6c06415ee9cc5d.scope - libcontainer container 1867f5aea386c2b094128ea8e47ddd0c71313036ca3ec89a8c6c06415ee9cc5d.
Dec 13 01:06:08.072299 containerd[1462]: time="2024-12-13T01:06:08.072249728Z" level=info msg="StartContainer for \"124ff4ede60f5f8c419667acb90bdf8537904eb19da6971461c0beb8099b0d1a\" returns successfully"
Dec 13 01:06:08.083010 containerd[1462]: time="2024-12-13T01:06:08.082953389Z" level=info msg="StartContainer for \"1867f5aea386c2b094128ea8e47ddd0c71313036ca3ec89a8c6c06415ee9cc5d\" returns successfully"
Dec 13 01:06:08.189280 kubelet[2518]: E1213 01:06:08.189143 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:08.192383 kubelet[2518]: E1213 01:06:08.192332 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:08.204366 kubelet[2518]: I1213 01:06:08.204296 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-tdqth" podStartSLOduration=28.20427783 podStartE2EDuration="28.20427783s" podCreationTimestamp="2024-12-13 01:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:06:08.202718765 +0000 UTC m=+33.458689009" watchObservedRunningTime="2024-12-13 01:06:08.20427783 +0000 UTC m=+33.460248074"
Dec 13 01:06:08.496713 systemd[1]: Started sshd@10-10.0.0.22:22-10.0.0.1:39870.service - OpenSSH per-connection server daemon (10.0.0.1:39870).
Dec 13 01:06:08.539122 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 39870 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:08.540721 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:08.544570 systemd-logind[1445]: New session 11 of user core.
Dec 13 01:06:08.559716 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:06:08.671098 sshd[3934]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:08.674778 systemd[1]: sshd@10-10.0.0.22:22-10.0.0.1:39870.service: Deactivated successfully.
Dec 13 01:06:08.676931 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:06:08.677514 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:06:08.678276 systemd-logind[1445]: Removed session 11.
Dec 13 01:06:09.194700 kubelet[2518]: E1213 01:06:09.194509 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:09.194700 kubelet[2518]: E1213 01:06:09.194617 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:09.207829 kubelet[2518]: I1213 01:06:09.207765 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6jth2" podStartSLOduration=29.207746772 podStartE2EDuration="29.207746772s" podCreationTimestamp="2024-12-13 01:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:06:08.225116154 +0000 UTC m=+33.481086398" watchObservedRunningTime="2024-12-13 01:06:09.207746772 +0000 UTC m=+34.463717016"
Dec 13 01:06:10.195920 kubelet[2518]: E1213 01:06:10.195884 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:10.196389 kubelet[2518]: E1213 01:06:10.196065 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:13.683291 systemd[1]: Started sshd@11-10.0.0.22:22-10.0.0.1:39878.service - OpenSSH per-connection server daemon (10.0.0.1:39878).
Dec 13 01:06:13.723252 sshd[3956]: Accepted publickey for core from 10.0.0.1 port 39878 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:13.725010 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:13.729236 systemd-logind[1445]: New session 12 of user core.
Dec 13 01:06:13.738752 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:06:13.874938 sshd[3956]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:13.878674 systemd[1]: sshd@11-10.0.0.22:22-10.0.0.1:39878.service: Deactivated successfully.
Dec 13 01:06:13.880511 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:06:13.881110 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:06:13.882113 systemd-logind[1445]: Removed session 12.
Dec 13 01:06:18.888037 systemd[1]: Started sshd@12-10.0.0.22:22-10.0.0.1:45604.service - OpenSSH per-connection server daemon (10.0.0.1:45604).
Dec 13 01:06:18.928201 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 45604 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:18.930441 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:18.935945 systemd-logind[1445]: New session 13 of user core.
Dec 13 01:06:18.942860 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:06:19.047962 sshd[3973]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:19.052766 systemd[1]: sshd@12-10.0.0.22:22-10.0.0.1:45604.service: Deactivated successfully.
Dec 13 01:06:19.055260 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:06:19.056023 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:06:19.057187 systemd-logind[1445]: Removed session 13.
Dec 13 01:06:24.061750 systemd[1]: Started sshd@13-10.0.0.22:22-10.0.0.1:45610.service - OpenSSH per-connection server daemon (10.0.0.1:45610).
Dec 13 01:06:24.102074 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 45610 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:24.104245 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:24.108730 systemd-logind[1445]: New session 14 of user core.
Dec 13 01:06:24.123838 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:06:24.232182 sshd[3988]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:24.239632 systemd[1]: sshd@13-10.0.0.22:22-10.0.0.1:45610.service: Deactivated successfully.
Dec 13 01:06:24.241702 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:06:24.243454 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:06:24.262050 systemd[1]: Started sshd@14-10.0.0.22:22-10.0.0.1:45626.service - OpenSSH per-connection server daemon (10.0.0.1:45626).
Dec 13 01:06:24.263069 systemd-logind[1445]: Removed session 14.
Dec 13 01:06:24.299047 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 45626 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:24.300779 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:24.304823 systemd-logind[1445]: New session 15 of user core.
Dec 13 01:06:24.314731 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:06:24.469182 sshd[4003]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:24.479114 systemd[1]: sshd@14-10.0.0.22:22-10.0.0.1:45626.service: Deactivated successfully.
Dec 13 01:06:24.482341 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:06:24.483714 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:06:24.497013 systemd[1]: Started sshd@15-10.0.0.22:22-10.0.0.1:45638.service - OpenSSH per-connection server daemon (10.0.0.1:45638).
Dec 13 01:06:24.498011 systemd-logind[1445]: Removed session 15.
Dec 13 01:06:24.534563 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 45638 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:24.536133 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:24.540134 systemd-logind[1445]: New session 16 of user core.
Dec 13 01:06:24.553751 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:06:24.673699 sshd[4016]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:24.678078 systemd[1]: sshd@15-10.0.0.22:22-10.0.0.1:45638.service: Deactivated successfully.
Dec 13 01:06:24.680476 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:06:24.681368 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:06:24.682433 systemd-logind[1445]: Removed session 16.
Dec 13 01:06:29.685795 systemd[1]: Started sshd@16-10.0.0.22:22-10.0.0.1:44242.service - OpenSSH per-connection server daemon (10.0.0.1:44242).
Dec 13 01:06:29.724785 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 44242 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:29.726486 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:29.730941 systemd-logind[1445]: New session 17 of user core.
Dec 13 01:06:29.744782 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:06:29.856390 sshd[4032]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:29.859973 systemd[1]: sshd@16-10.0.0.22:22-10.0.0.1:44242.service: Deactivated successfully.
Dec 13 01:06:29.861987 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:06:29.863804 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:06:29.864894 systemd-logind[1445]: Removed session 17.
Dec 13 01:06:34.872760 systemd[1]: Started sshd@17-10.0.0.22:22-10.0.0.1:44252.service - OpenSSH per-connection server daemon (10.0.0.1:44252).
Dec 13 01:06:34.917193 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 44252 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:34.918890 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:34.922529 systemd-logind[1445]: New session 18 of user core.
Dec 13 01:06:34.932693 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:06:35.039420 sshd[4048]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:35.049346 systemd[1]: sshd@17-10.0.0.22:22-10.0.0.1:44252.service: Deactivated successfully.
Dec 13 01:06:35.051234 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:06:35.052685 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:06:35.060812 systemd[1]: Started sshd@18-10.0.0.22:22-10.0.0.1:44260.service - OpenSSH per-connection server daemon (10.0.0.1:44260).
Dec 13 01:06:35.061620 systemd-logind[1445]: Removed session 18.
Dec 13 01:06:35.095737 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 44260 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:35.097414 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:35.101431 systemd-logind[1445]: New session 19 of user core.
Dec 13 01:06:35.108736 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:06:35.337632 sshd[4063]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:35.349653 systemd[1]: sshd@18-10.0.0.22:22-10.0.0.1:44260.service: Deactivated successfully.
Dec 13 01:06:35.351949 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:06:35.353799 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:06:35.361865 systemd[1]: Started sshd@19-10.0.0.22:22-10.0.0.1:44270.service - OpenSSH per-connection server daemon (10.0.0.1:44270).
Dec 13 01:06:35.362986 systemd-logind[1445]: Removed session 19.
Dec 13 01:06:35.400189 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 44270 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:35.402009 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:35.406250 systemd-logind[1445]: New session 20 of user core.
Dec 13 01:06:35.414719 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:06:36.998751 sshd[4075]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:37.011024 systemd[1]: sshd@19-10.0.0.22:22-10.0.0.1:44270.service: Deactivated successfully.
Dec 13 01:06:37.013629 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:06:37.016011 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:06:37.026080 systemd[1]: Started sshd@20-10.0.0.22:22-10.0.0.1:44274.service - OpenSSH per-connection server daemon (10.0.0.1:44274).
Dec 13 01:06:37.027681 systemd-logind[1445]: Removed session 20.
Dec 13 01:06:37.062660 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 44274 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:37.064516 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:37.068814 systemd-logind[1445]: New session 21 of user core.
Dec 13 01:06:37.079806 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:06:37.295864 sshd[4098]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:37.304277 systemd[1]: sshd@20-10.0.0.22:22-10.0.0.1:44274.service: Deactivated successfully.
Dec 13 01:06:37.307290 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:06:37.309223 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:06:37.321002 systemd[1]: Started sshd@21-10.0.0.22:22-10.0.0.1:44278.service - OpenSSH per-connection server daemon (10.0.0.1:44278).
Dec 13 01:06:37.322347 systemd-logind[1445]: Removed session 21.
Dec 13 01:06:37.356274 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 44278 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:37.357966 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:37.362164 systemd-logind[1445]: New session 22 of user core.
Dec 13 01:06:37.380727 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:06:37.488353 sshd[4110]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:37.492793 systemd[1]: sshd@21-10.0.0.22:22-10.0.0.1:44278.service: Deactivated successfully.
Dec 13 01:06:37.495016 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:06:37.495610 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:06:37.496588 systemd-logind[1445]: Removed session 22.
Dec 13 01:06:42.499361 systemd[1]: Started sshd@22-10.0.0.22:22-10.0.0.1:44944.service - OpenSSH per-connection server daemon (10.0.0.1:44944).
Dec 13 01:06:42.538524 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 44944 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:42.540201 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:42.544349 systemd-logind[1445]: New session 23 of user core.
Dec 13 01:06:42.552727 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:06:42.659370 sshd[4126]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:42.663650 systemd[1]: sshd@22-10.0.0.22:22-10.0.0.1:44944.service: Deactivated successfully.
Dec 13 01:06:42.665552 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:06:42.666371 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:06:42.667557 systemd-logind[1445]: Removed session 23.
Dec 13 01:06:47.675448 systemd[1]: Started sshd@23-10.0.0.22:22-10.0.0.1:44946.service - OpenSSH per-connection server daemon (10.0.0.1:44946).
Dec 13 01:06:47.717021 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 44946 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:47.719014 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:47.724495 systemd-logind[1445]: New session 24 of user core.
Dec 13 01:06:47.735794 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:06:47.852156 sshd[4143]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:47.857101 systemd[1]: sshd@23-10.0.0.22:22-10.0.0.1:44946.service: Deactivated successfully.
Dec 13 01:06:47.860065 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:06:47.860887 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:06:47.862097 systemd-logind[1445]: Removed session 24.
Dec 13 01:06:52.864750 systemd[1]: Started sshd@24-10.0.0.22:22-10.0.0.1:36896.service - OpenSSH per-connection server daemon (10.0.0.1:36896).
Dec 13 01:06:52.904489 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 36896 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:52.906176 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:52.910063 systemd-logind[1445]: New session 25 of user core.
Dec 13 01:06:52.919698 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:06:53.035313 sshd[4158]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:53.039315 systemd[1]: sshd@24-10.0.0.22:22-10.0.0.1:36896.service: Deactivated successfully.
Dec 13 01:06:53.041455 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:06:53.042146 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:06:53.043088 systemd-logind[1445]: Removed session 25.
Dec 13 01:06:58.051903 systemd[1]: Started sshd@25-10.0.0.22:22-10.0.0.1:42694.service - OpenSSH per-connection server daemon (10.0.0.1:42694).
Dec 13 01:06:58.094892 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 42694 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:58.097181 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:58.101886 systemd-logind[1445]: New session 26 of user core.
Dec 13 01:06:58.113892 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:06:58.227843 sshd[4173]: pam_unix(sshd:session): session closed for user core
Dec 13 01:06:58.236383 systemd[1]: sshd@25-10.0.0.22:22-10.0.0.1:42694.service: Deactivated successfully.
Dec 13 01:06:58.239128 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:06:58.241461 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:06:58.250099 systemd[1]: Started sshd@26-10.0.0.22:22-10.0.0.1:42704.service - OpenSSH per-connection server daemon (10.0.0.1:42704).
Dec 13 01:06:58.251182 systemd-logind[1445]: Removed session 26.
Dec 13 01:06:58.287796 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 42704 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:06:58.289752 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:06:58.294296 systemd-logind[1445]: New session 27 of user core.
Dec 13 01:06:58.303736 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 01:06:58.847683 kubelet[2518]: E1213 01:06:58.847640 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:06:59.839091 containerd[1462]: time="2024-12-13T01:06:59.839036142Z" level=info msg="StopContainer for \"6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08\" with timeout 30 (s)"
Dec 13 01:06:59.839782 containerd[1462]: time="2024-12-13T01:06:59.839729437Z" level=info msg="Stop container \"6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08\" with signal terminated"
Dec 13 01:06:59.854138 systemd[1]: cri-containerd-6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08.scope: Deactivated successfully.
Dec 13 01:06:59.874427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08-rootfs.mount: Deactivated successfully.
Dec 13 01:06:59.874794 containerd[1462]: time="2024-12-13T01:06:59.874604609Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:06:59.905408 kubelet[2518]: E1213 01:06:59.905350 2518 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:06:59.908613 containerd[1462]: time="2024-12-13T01:06:59.908549270Z" level=info msg="StopContainer for \"6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb\" with timeout 2 (s)"
Dec 13 01:06:59.908968 containerd[1462]: time="2024-12-13T01:06:59.908939744Z" level=info msg="Stop container \"6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb\" with signal terminated"
Dec 13 01:06:59.916910 systemd-networkd[1399]: lxc_health: Link DOWN
Dec 13 01:06:59.916919 systemd-networkd[1399]: lxc_health: Lost carrier
Dec 13 01:06:59.955163 systemd[1]: cri-containerd-6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb.scope: Deactivated successfully.
Dec 13 01:06:59.955625 systemd[1]: cri-containerd-6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb.scope: Consumed 6.901s CPU time.
Dec 13 01:06:59.977018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb-rootfs.mount: Deactivated successfully.
Dec 13 01:07:00.066626 containerd[1462]: time="2024-12-13T01:07:00.066529251Z" level=info msg="shim disconnected" id=6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08 namespace=k8s.io
Dec 13 01:07:00.066626 containerd[1462]: time="2024-12-13T01:07:00.066613089Z" level=warning msg="cleaning up after shim disconnected" id=6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08 namespace=k8s.io
Dec 13 01:07:00.066626 containerd[1462]: time="2024-12-13T01:07:00.066627977Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:07:00.175524 containerd[1462]: time="2024-12-13T01:07:00.175442619Z" level=info msg="shim disconnected" id=6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb namespace=k8s.io
Dec 13 01:07:00.175524 containerd[1462]: time="2024-12-13T01:07:00.175503153Z" level=warning msg="cleaning up after shim disconnected" id=6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb namespace=k8s.io
Dec 13 01:07:00.175524 containerd[1462]: time="2024-12-13T01:07:00.175513502Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:07:00.251875 containerd[1462]: time="2024-12-13T01:07:00.251810896Z" level=info msg="StopContainer for \"6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08\" returns successfully"
Dec 13 01:07:00.256096 containerd[1462]: time="2024-12-13T01:07:00.256056235Z" level=info msg="StopPodSandbox for \"2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243\""
Dec 13 01:07:00.256154 containerd[1462]: time="2024-12-13T01:07:00.256099066Z" level=info msg="Container to stop \"6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:07:00.259081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243-shm.mount: Deactivated successfully.
Dec 13 01:07:00.263397 systemd[1]: cri-containerd-2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243.scope: Deactivated successfully.
Dec 13 01:07:00.272819 containerd[1462]: time="2024-12-13T01:07:00.272765042Z" level=info msg="StopContainer for \"6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb\" returns successfully"
Dec 13 01:07:00.273475 containerd[1462]: time="2024-12-13T01:07:00.273341718Z" level=info msg="StopPodSandbox for \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\""
Dec 13 01:07:00.273475 containerd[1462]: time="2024-12-13T01:07:00.273412821Z" level=info msg="Container to stop \"61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:07:00.273475 containerd[1462]: time="2024-12-13T01:07:00.273434391Z" level=info msg="Container to stop \"0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:07:00.273475 containerd[1462]: time="2024-12-13T01:07:00.273448378Z" level=info msg="Container to stop \"3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:07:00.273475 containerd[1462]: time="2024-12-13T01:07:00.273462275Z" level=info msg="Container to stop \"246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:07:00.273475 containerd[1462]: time="2024-12-13T01:07:00.273476351Z" level=info msg="Container to stop \"6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:07:00.276855 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662-shm.mount: Deactivated successfully.
Dec 13 01:07:00.281009 systemd[1]: cri-containerd-1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662.scope: Deactivated successfully.
Dec 13 01:07:00.457755 containerd[1462]: time="2024-12-13T01:07:00.456807394Z" level=info msg="shim disconnected" id=1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662 namespace=k8s.io
Dec 13 01:07:00.457755 containerd[1462]: time="2024-12-13T01:07:00.457465142Z" level=warning msg="cleaning up after shim disconnected" id=1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662 namespace=k8s.io
Dec 13 01:07:00.457755 containerd[1462]: time="2024-12-13T01:07:00.457482405Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:07:00.457755 containerd[1462]: time="2024-12-13T01:07:00.456924044Z" level=info msg="shim disconnected" id=2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243 namespace=k8s.io
Dec 13 01:07:00.457755 containerd[1462]: time="2024-12-13T01:07:00.457555291Z" level=warning msg="cleaning up after shim disconnected" id=2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243 namespace=k8s.io
Dec 13 01:07:00.457755 containerd[1462]: time="2024-12-13T01:07:00.457563777Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:07:00.480639 containerd[1462]: time="2024-12-13T01:07:00.480565267Z" level=info msg="TearDown network for sandbox \"2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243\" successfully"
Dec 13 01:07:00.480639 containerd[1462]: time="2024-12-13T01:07:00.480619228Z" level=info msg="StopPodSandbox for \"2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243\" returns successfully"
Dec 13 01:07:00.481924 containerd[1462]: time="2024-12-13T01:07:00.481890121Z" level=info msg="TearDown network for sandbox \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" successfully"
Dec 13 01:07:00.481924 containerd[1462]: time="2024-12-13T01:07:00.481914867Z" level=info msg="StopPodSandbox for \"1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662\" returns successfully"
Dec 13 01:07:00.555776 kubelet[2518]: I1213 01:07:00.555709 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-xtables-lock\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.555776 kubelet[2518]: I1213 01:07:00.555772 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-host-proc-sys-net\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.555994 kubelet[2518]: I1213 01:07:00.555799 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-etc-cni-netd\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.555994 kubelet[2518]: I1213 01:07:00.555827 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-host-proc-sys-kernel\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.555994 kubelet[2518]: I1213 01:07:00.555831 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:07:00.555994 kubelet[2518]: I1213 01:07:00.555865 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:07:00.555994 kubelet[2518]: I1213 01:07:00.555856 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz486\" (UniqueName: \"kubernetes.io/projected/59efbc1a-670f-4b59-a713-9a29c54a33d1-kube-api-access-kz486\") pod \"59efbc1a-670f-4b59-a713-9a29c54a33d1\" (UID: \"59efbc1a-670f-4b59-a713-9a29c54a33d1\") "
Dec 13 01:07:00.556117 kubelet[2518]: I1213 01:07:00.555900 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:07:00.556117 kubelet[2518]: I1213 01:07:00.555912 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:07:00.556117 kubelet[2518]: I1213 01:07:00.555949 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-run\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.556117 kubelet[2518]: I1213 01:07:00.555991 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhhwd\" (UniqueName: \"kubernetes.io/projected/be21cd31-2c6b-4e29-8fd5-8ae01c860506-kube-api-access-lhhwd\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.556117 kubelet[2518]: I1213 01:07:00.556022 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be21cd31-2c6b-4e29-8fd5-8ae01c860506-clustermesh-secrets\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.556232 kubelet[2518]: I1213 01:07:00.556048 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59efbc1a-670f-4b59-a713-9a29c54a33d1-cilium-config-path\") pod \"59efbc1a-670f-4b59-a713-9a29c54a33d1\" (UID: \"59efbc1a-670f-4b59-a713-9a29c54a33d1\") "
Dec 13 01:07:00.556232 kubelet[2518]: I1213 01:07:00.556074 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-lib-modules\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.556232 kubelet[2518]: I1213 01:07:00.556096 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-hostproc\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.556232 kubelet[2518]: I1213 01:07:00.556118 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cni-path\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.556232 kubelet[2518]: I1213 01:07:00.556141 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-bpf-maps\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.556232 kubelet[2518]: I1213 01:07:00.556167 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-config-path\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.556360 kubelet[2518]: I1213 01:07:00.556189 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-cgroup\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.556360 kubelet[2518]: I1213 01:07:00.556215 2518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be21cd31-2c6b-4e29-8fd5-8ae01c860506-hubble-tls\") pod \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\" (UID: \"be21cd31-2c6b-4e29-8fd5-8ae01c860506\") "
Dec 13 01:07:00.556360 kubelet[2518]: I1213 01:07:00.556273 2518 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.556360 kubelet[2518]: I1213 01:07:00.556288 2518 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.556360 kubelet[2518]: I1213 01:07:00.556302 2518 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.556360 kubelet[2518]: I1213 01:07:00.556315 2518 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.556489 kubelet[2518]: I1213 01:07:00.555997 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:07:00.556489 kubelet[2518]: I1213 01:07:00.556366 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-hostproc" (OuterVolumeSpecName: "hostproc") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:07:00.559923 kubelet[2518]: I1213 01:07:00.559632 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:07:00.559923 kubelet[2518]: I1213 01:07:00.559667 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cni-path" (OuterVolumeSpecName: "cni-path") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:07:00.559923 kubelet[2518]: I1213 01:07:00.559740 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be21cd31-2c6b-4e29-8fd5-8ae01c860506-kube-api-access-lhhwd" (OuterVolumeSpecName: "kube-api-access-lhhwd") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "kube-api-access-lhhwd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:07:00.559923 kubelet[2518]: I1213 01:07:00.559771 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:07:00.559923 kubelet[2518]: I1213 01:07:00.559915 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:07:00.560160 kubelet[2518]: I1213 01:07:00.559984 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be21cd31-2c6b-4e29-8fd5-8ae01c860506-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:07:00.560160 kubelet[2518]: I1213 01:07:00.560070 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be21cd31-2c6b-4e29-8fd5-8ae01c860506-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:07:00.560753 kubelet[2518]: I1213 01:07:00.560712 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59efbc1a-670f-4b59-a713-9a29c54a33d1-kube-api-access-kz486" (OuterVolumeSpecName: "kube-api-access-kz486") pod "59efbc1a-670f-4b59-a713-9a29c54a33d1" (UID: "59efbc1a-670f-4b59-a713-9a29c54a33d1"). InnerVolumeSpecName "kube-api-access-kz486". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:07:00.564058 kubelet[2518]: I1213 01:07:00.564032 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "be21cd31-2c6b-4e29-8fd5-8ae01c860506" (UID: "be21cd31-2c6b-4e29-8fd5-8ae01c860506"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:07:00.564752 kubelet[2518]: I1213 01:07:00.564713 2518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59efbc1a-670f-4b59-a713-9a29c54a33d1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "59efbc1a-670f-4b59-a713-9a29c54a33d1" (UID: "59efbc1a-670f-4b59-a713-9a29c54a33d1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:07:00.657321 kubelet[2518]: I1213 01:07:00.657259 2518 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be21cd31-2c6b-4e29-8fd5-8ae01c860506-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.657321 kubelet[2518]: I1213 01:07:00.657296 2518 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59efbc1a-670f-4b59-a713-9a29c54a33d1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.657321 kubelet[2518]: I1213 01:07:00.657307 2518 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.657321 kubelet[2518]: I1213 01:07:00.657317 2518 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.657321 kubelet[2518]: I1213 01:07:00.657339 2518 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.657799 kubelet[2518]: I1213 01:07:00.657348 2518 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.657799 kubelet[2518]: I1213 01:07:00.657358 2518 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.657799 kubelet[2518]: I1213 01:07:00.657366 2518 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.657799 kubelet[2518]: I1213 01:07:00.657373 2518 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be21cd31-2c6b-4e29-8fd5-8ae01c860506-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.657799 kubelet[2518]: I1213 01:07:00.657382 2518 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kz486\" (UniqueName: \"kubernetes.io/projected/59efbc1a-670f-4b59-a713-9a29c54a33d1-kube-api-access-kz486\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.657799 kubelet[2518]: I1213 01:07:00.657390 2518 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be21cd31-2c6b-4e29-8fd5-8ae01c860506-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.657799 kubelet[2518]: I1213 01:07:00.657397 2518 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lhhwd\" (UniqueName: \"kubernetes.io/projected/be21cd31-2c6b-4e29-8fd5-8ae01c860506-kube-api-access-lhhwd\") on node \"localhost\" DevicePath \"\""
Dec 13 01:07:00.848744 kubelet[2518]: E1213 01:07:00.847926 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:07:00.848981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2663c4590c03a4729717e2d5156986cc2d7ca1376eac0fda63008e8e1465a243-rootfs.mount: Deactivated successfully.
Dec 13 01:07:00.849133 systemd[1]: var-lib-kubelet-pods-59efbc1a\x2d670f\x2d4b59\x2da713\x2d9a29c54a33d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkz486.mount: Deactivated successfully.
Dec 13 01:07:00.849248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ba9a1d7330b523708bfbc769e52f502ccc64ddfd44442d8c5e959204d762662-rootfs.mount: Deactivated successfully.
Dec 13 01:07:00.849400 systemd[1]: var-lib-kubelet-pods-be21cd31\x2d2c6b\x2d4e29\x2d8fd5\x2d8ae01c860506-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlhhwd.mount: Deactivated successfully.
Dec 13 01:07:00.849647 systemd[1]: var-lib-kubelet-pods-be21cd31\x2d2c6b\x2d4e29\x2d8fd5\x2d8ae01c860506-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:07:00.849853 systemd[1]: var-lib-kubelet-pods-be21cd31\x2d2c6b\x2d4e29\x2d8fd5\x2d8ae01c860506-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:07:00.856490 systemd[1]: Removed slice kubepods-besteffort-pod59efbc1a_670f_4b59_a713_9a29c54a33d1.slice - libcontainer container kubepods-besteffort-pod59efbc1a_670f_4b59_a713_9a29c54a33d1.slice.
Dec 13 01:07:00.857928 systemd[1]: Removed slice kubepods-burstable-podbe21cd31_2c6b_4e29_8fd5_8ae01c860506.slice - libcontainer container kubepods-burstable-podbe21cd31_2c6b_4e29_8fd5_8ae01c860506.slice. Dec 13 01:07:00.858052 systemd[1]: kubepods-burstable-podbe21cd31_2c6b_4e29_8fd5_8ae01c860506.slice: Consumed 7.005s CPU time. Dec 13 01:07:01.302087 kubelet[2518]: I1213 01:07:01.301958 2518 scope.go:117] "RemoveContainer" containerID="6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb" Dec 13 01:07:01.305165 containerd[1462]: time="2024-12-13T01:07:01.305130431Z" level=info msg="RemoveContainer for \"6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb\"" Dec 13 01:07:01.347882 containerd[1462]: time="2024-12-13T01:07:01.347823023Z" level=info msg="RemoveContainer for \"6bff01a33ae0c498fad6ae58149d7ba12c446d7521dc8a9588d09908712b9ebb\" returns successfully" Dec 13 01:07:01.348175 kubelet[2518]: I1213 01:07:01.348143 2518 scope.go:117] "RemoveContainer" containerID="246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a" Dec 13 01:07:01.349392 containerd[1462]: time="2024-12-13T01:07:01.349102090Z" level=info msg="RemoveContainer for \"246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a\"" Dec 13 01:07:01.354706 containerd[1462]: time="2024-12-13T01:07:01.354649379Z" level=info msg="RemoveContainer for \"246f7b80867755b41e992d3a1310143fcbd244a40fb2239db2472e4f0975e64a\" returns successfully" Dec 13 01:07:01.354986 kubelet[2518]: I1213 01:07:01.354954 2518 scope.go:117] "RemoveContainer" containerID="3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182" Dec 13 01:07:01.357246 containerd[1462]: time="2024-12-13T01:07:01.357196573Z" level=info msg="RemoveContainer for \"3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182\"" Dec 13 01:07:01.363965 containerd[1462]: time="2024-12-13T01:07:01.363898925Z" level=info msg="RemoveContainer for 
\"3bea864803070870f5032f57b2a44607a74202de60785cc129461fef188c8182\" returns successfully" Dec 13 01:07:01.364193 kubelet[2518]: I1213 01:07:01.364110 2518 scope.go:117] "RemoveContainer" containerID="0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516" Dec 13 01:07:01.365595 containerd[1462]: time="2024-12-13T01:07:01.365220392Z" level=info msg="RemoveContainer for \"0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516\"" Dec 13 01:07:01.369666 containerd[1462]: time="2024-12-13T01:07:01.369635812Z" level=info msg="RemoveContainer for \"0ddd25a880d270f7c798c1ec212fec78cd8bbdc9162e5e2d0ac06904506c3516\" returns successfully" Dec 13 01:07:01.369817 kubelet[2518]: I1213 01:07:01.369792 2518 scope.go:117] "RemoveContainer" containerID="61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a" Dec 13 01:07:01.370716 containerd[1462]: time="2024-12-13T01:07:01.370693552Z" level=info msg="RemoveContainer for \"61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a\"" Dec 13 01:07:01.387680 containerd[1462]: time="2024-12-13T01:07:01.387640363Z" level=info msg="RemoveContainer for \"61a9ea9840dcc28e9fe7b53fb11a5be5f44b9b2d50cb5df2a3aad8bdc9d0fd4a\" returns successfully" Dec 13 01:07:01.387860 kubelet[2518]: I1213 01:07:01.387834 2518 scope.go:117] "RemoveContainer" containerID="6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08" Dec 13 01:07:01.389009 containerd[1462]: time="2024-12-13T01:07:01.388951050Z" level=info msg="RemoveContainer for \"6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08\"" Dec 13 01:07:01.392719 containerd[1462]: time="2024-12-13T01:07:01.392688584Z" level=info msg="RemoveContainer for \"6a18c9585f513ca095923000e3b12f6a0cbe151235a0dccb6b03f904bdb0ea08\" returns successfully" Dec 13 01:07:01.765851 sshd[4187]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:01.776004 systemd[1]: sshd@26-10.0.0.22:22-10.0.0.1:42704.service: Deactivated successfully. 
Dec 13 01:07:01.778206 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:07:01.780549 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:07:01.790008 systemd[1]: Started sshd@27-10.0.0.22:22-10.0.0.1:42716.service - OpenSSH per-connection server daemon (10.0.0.1:42716). Dec 13 01:07:01.791032 systemd-logind[1445]: Removed session 27. Dec 13 01:07:01.831707 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 42716 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:01.833629 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:01.838500 systemd-logind[1445]: New session 28 of user core. Dec 13 01:07:01.847704 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 01:07:02.362226 sshd[4351]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:02.376911 systemd[1]: sshd@27-10.0.0.22:22-10.0.0.1:42716.service: Deactivated successfully. 
Dec 13 01:07:02.377323 kubelet[2518]: E1213 01:07:02.377293 2518 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be21cd31-2c6b-4e29-8fd5-8ae01c860506" containerName="mount-bpf-fs" Dec 13 01:07:02.377323 kubelet[2518]: E1213 01:07:02.377323 2518 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be21cd31-2c6b-4e29-8fd5-8ae01c860506" containerName="clean-cilium-state" Dec 13 01:07:02.377727 kubelet[2518]: E1213 01:07:02.377334 2518 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be21cd31-2c6b-4e29-8fd5-8ae01c860506" containerName="apply-sysctl-overwrites" Dec 13 01:07:02.377727 kubelet[2518]: E1213 01:07:02.377343 2518 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be21cd31-2c6b-4e29-8fd5-8ae01c860506" containerName="cilium-agent" Dec 13 01:07:02.377727 kubelet[2518]: E1213 01:07:02.377353 2518 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59efbc1a-670f-4b59-a713-9a29c54a33d1" containerName="cilium-operator" Dec 13 01:07:02.377727 kubelet[2518]: E1213 01:07:02.377363 2518 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be21cd31-2c6b-4e29-8fd5-8ae01c860506" containerName="mount-cgroup" Dec 13 01:07:02.377727 kubelet[2518]: I1213 01:07:02.377394 2518 memory_manager.go:354] "RemoveStaleState removing state" podUID="be21cd31-2c6b-4e29-8fd5-8ae01c860506" containerName="cilium-agent" Dec 13 01:07:02.377727 kubelet[2518]: I1213 01:07:02.377403 2518 memory_manager.go:354] "RemoveStaleState removing state" podUID="59efbc1a-670f-4b59-a713-9a29c54a33d1" containerName="cilium-operator" Dec 13 01:07:02.379852 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 01:07:02.382505 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit. Dec 13 01:07:02.393044 systemd[1]: Started sshd@28-10.0.0.22:22-10.0.0.1:42718.service - OpenSSH per-connection server daemon (10.0.0.1:42718). Dec 13 01:07:02.401365 systemd-logind[1445]: Removed session 28. 
Dec 13 01:07:02.408326 systemd[1]: Created slice kubepods-burstable-podfaaf04b2_d2c0_49ed_9636_93768ee2639d.slice - libcontainer container kubepods-burstable-podfaaf04b2_d2c0_49ed_9636_93768ee2639d.slice. Dec 13 01:07:02.432862 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 42718 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:02.434910 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:02.439525 systemd-logind[1445]: New session 29 of user core. Dec 13 01:07:02.448722 systemd[1]: Started session-29.scope - Session 29 of User core. Dec 13 01:07:02.465999 kubelet[2518]: I1213 01:07:02.465941 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/faaf04b2-d2c0-49ed-9636-93768ee2639d-lib-modules\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466116 kubelet[2518]: I1213 01:07:02.466002 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/faaf04b2-d2c0-49ed-9636-93768ee2639d-bpf-maps\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466116 kubelet[2518]: I1213 01:07:02.466033 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/faaf04b2-d2c0-49ed-9636-93768ee2639d-clustermesh-secrets\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466116 kubelet[2518]: I1213 01:07:02.466062 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/faaf04b2-d2c0-49ed-9636-93768ee2639d-cilium-cgroup\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466116 kubelet[2518]: I1213 01:07:02.466085 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/faaf04b2-d2c0-49ed-9636-93768ee2639d-cilium-config-path\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466116 kubelet[2518]: I1213 01:07:02.466107 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/faaf04b2-d2c0-49ed-9636-93768ee2639d-cilium-ipsec-secrets\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466231 kubelet[2518]: I1213 01:07:02.466133 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2h59\" (UniqueName: \"kubernetes.io/projected/faaf04b2-d2c0-49ed-9636-93768ee2639d-kube-api-access-r2h59\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466231 kubelet[2518]: I1213 01:07:02.466179 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/faaf04b2-d2c0-49ed-9636-93768ee2639d-cni-path\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466231 kubelet[2518]: I1213 01:07:02.466208 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/faaf04b2-d2c0-49ed-9636-93768ee2639d-etc-cni-netd\") pod \"cilium-qmlbv\" (UID: 
\"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466301 kubelet[2518]: I1213 01:07:02.466231 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/faaf04b2-d2c0-49ed-9636-93768ee2639d-host-proc-sys-net\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466301 kubelet[2518]: I1213 01:07:02.466251 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/faaf04b2-d2c0-49ed-9636-93768ee2639d-xtables-lock\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466301 kubelet[2518]: I1213 01:07:02.466284 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/faaf04b2-d2c0-49ed-9636-93768ee2639d-host-proc-sys-kernel\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466376 kubelet[2518]: I1213 01:07:02.466315 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/faaf04b2-d2c0-49ed-9636-93768ee2639d-hostproc\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466376 kubelet[2518]: I1213 01:07:02.466336 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/faaf04b2-d2c0-49ed-9636-93768ee2639d-hubble-tls\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.466420 kubelet[2518]: I1213 
01:07:02.466396 2518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/faaf04b2-d2c0-49ed-9636-93768ee2639d-cilium-run\") pod \"cilium-qmlbv\" (UID: \"faaf04b2-d2c0-49ed-9636-93768ee2639d\") " pod="kube-system/cilium-qmlbv" Dec 13 01:07:02.500164 sshd[4365]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:02.507413 systemd[1]: sshd@28-10.0.0.22:22-10.0.0.1:42718.service: Deactivated successfully. Dec 13 01:07:02.509254 systemd[1]: session-29.scope: Deactivated successfully. Dec 13 01:07:02.510727 systemd-logind[1445]: Session 29 logged out. Waiting for processes to exit. Dec 13 01:07:02.512192 systemd[1]: Started sshd@29-10.0.0.22:22-10.0.0.1:42732.service - OpenSSH per-connection server daemon (10.0.0.1:42732). Dec 13 01:07:02.513008 systemd-logind[1445]: Removed session 29. Dec 13 01:07:02.551915 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 42732 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:07:02.553531 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:07:02.557754 systemd-logind[1445]: New session 30 of user core. Dec 13 01:07:02.565686 systemd[1]: Started session-30.scope - Session 30 of User core. Dec 13 01:07:02.715474 kubelet[2518]: E1213 01:07:02.715295 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:02.716516 containerd[1462]: time="2024-12-13T01:07:02.716359805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qmlbv,Uid:faaf04b2-d2c0-49ed-9636-93768ee2639d,Namespace:kube-system,Attempt:0,}" Dec 13 01:07:02.743769 containerd[1462]: time="2024-12-13T01:07:02.743492870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:07:02.743769 containerd[1462]: time="2024-12-13T01:07:02.743622084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:07:02.743769 containerd[1462]: time="2024-12-13T01:07:02.743640348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:02.743769 containerd[1462]: time="2024-12-13T01:07:02.743740527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:07:02.769949 systemd[1]: Started cri-containerd-cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2.scope - libcontainer container cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2. Dec 13 01:07:02.795714 containerd[1462]: time="2024-12-13T01:07:02.795664352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qmlbv,Uid:faaf04b2-d2c0-49ed-9636-93768ee2639d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2\"" Dec 13 01:07:02.796371 kubelet[2518]: E1213 01:07:02.796329 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:02.799630 containerd[1462]: time="2024-12-13T01:07:02.799485552Z" level=info msg="CreateContainer within sandbox \"cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:07:02.814418 containerd[1462]: time="2024-12-13T01:07:02.814352667Z" level=info msg="CreateContainer within sandbox \"cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"b6136ab418ccf2164a397cfd03e8c4bafa880189d64e4a2163ad8339672f16d4\"" Dec 13 01:07:02.815025 containerd[1462]: time="2024-12-13T01:07:02.814992322Z" level=info msg="StartContainer for \"b6136ab418ccf2164a397cfd03e8c4bafa880189d64e4a2163ad8339672f16d4\"" Dec 13 01:07:02.843901 systemd[1]: Started cri-containerd-b6136ab418ccf2164a397cfd03e8c4bafa880189d64e4a2163ad8339672f16d4.scope - libcontainer container b6136ab418ccf2164a397cfd03e8c4bafa880189d64e4a2163ad8339672f16d4. Dec 13 01:07:02.850481 kubelet[2518]: I1213 01:07:02.850426 2518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59efbc1a-670f-4b59-a713-9a29c54a33d1" path="/var/lib/kubelet/pods/59efbc1a-670f-4b59-a713-9a29c54a33d1/volumes" Dec 13 01:07:02.851981 kubelet[2518]: I1213 01:07:02.851961 2518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be21cd31-2c6b-4e29-8fd5-8ae01c860506" path="/var/lib/kubelet/pods/be21cd31-2c6b-4e29-8fd5-8ae01c860506/volumes" Dec 13 01:07:02.875046 containerd[1462]: time="2024-12-13T01:07:02.874984749Z" level=info msg="StartContainer for \"b6136ab418ccf2164a397cfd03e8c4bafa880189d64e4a2163ad8339672f16d4\" returns successfully" Dec 13 01:07:02.888034 systemd[1]: cri-containerd-b6136ab418ccf2164a397cfd03e8c4bafa880189d64e4a2163ad8339672f16d4.scope: Deactivated successfully. 
Dec 13 01:07:02.920596 containerd[1462]: time="2024-12-13T01:07:02.920484155Z" level=info msg="shim disconnected" id=b6136ab418ccf2164a397cfd03e8c4bafa880189d64e4a2163ad8339672f16d4 namespace=k8s.io Dec 13 01:07:02.920596 containerd[1462]: time="2024-12-13T01:07:02.920571018Z" level=warning msg="cleaning up after shim disconnected" id=b6136ab418ccf2164a397cfd03e8c4bafa880189d64e4a2163ad8339672f16d4 namespace=k8s.io Dec 13 01:07:02.920596 containerd[1462]: time="2024-12-13T01:07:02.920600755Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:07:03.312610 kubelet[2518]: E1213 01:07:03.312544 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:03.314943 containerd[1462]: time="2024-12-13T01:07:03.314253092Z" level=info msg="CreateContainer within sandbox \"cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:07:03.329773 containerd[1462]: time="2024-12-13T01:07:03.329716036Z" level=info msg="CreateContainer within sandbox \"cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7d8a4c61d5ccd8a71bd74a893d7640efe2cacd2d87a7e13a3b72fc140c4d3c93\"" Dec 13 01:07:03.330240 containerd[1462]: time="2024-12-13T01:07:03.330189678Z" level=info msg="StartContainer for \"7d8a4c61d5ccd8a71bd74a893d7640efe2cacd2d87a7e13a3b72fc140c4d3c93\"" Dec 13 01:07:03.383765 systemd[1]: Started cri-containerd-7d8a4c61d5ccd8a71bd74a893d7640efe2cacd2d87a7e13a3b72fc140c4d3c93.scope - libcontainer container 7d8a4c61d5ccd8a71bd74a893d7640efe2cacd2d87a7e13a3b72fc140c4d3c93. 
Dec 13 01:07:03.410353 containerd[1462]: time="2024-12-13T01:07:03.410303194Z" level=info msg="StartContainer for \"7d8a4c61d5ccd8a71bd74a893d7640efe2cacd2d87a7e13a3b72fc140c4d3c93\" returns successfully" Dec 13 01:07:03.419262 systemd[1]: cri-containerd-7d8a4c61d5ccd8a71bd74a893d7640efe2cacd2d87a7e13a3b72fc140c4d3c93.scope: Deactivated successfully. Dec 13 01:07:03.443174 containerd[1462]: time="2024-12-13T01:07:03.443100354Z" level=info msg="shim disconnected" id=7d8a4c61d5ccd8a71bd74a893d7640efe2cacd2d87a7e13a3b72fc140c4d3c93 namespace=k8s.io Dec 13 01:07:03.443174 containerd[1462]: time="2024-12-13T01:07:03.443164945Z" level=warning msg="cleaning up after shim disconnected" id=7d8a4c61d5ccd8a71bd74a893d7640efe2cacd2d87a7e13a3b72fc140c4d3c93 namespace=k8s.io Dec 13 01:07:03.443174 containerd[1462]: time="2024-12-13T01:07:03.443176627Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:07:04.316202 kubelet[2518]: E1213 01:07:04.316169 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:04.319030 containerd[1462]: time="2024-12-13T01:07:04.318955527Z" level=info msg="CreateContainer within sandbox \"cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:07:04.332564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657328242.mount: Deactivated successfully. 
Dec 13 01:07:04.335144 containerd[1462]: time="2024-12-13T01:07:04.335092306Z" level=info msg="CreateContainer within sandbox \"cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d23c5af4279dc3b576f396a0d553c997c99adef3ed3004e5640f3ce0acbe2e1a\"" Dec 13 01:07:04.335906 containerd[1462]: time="2024-12-13T01:07:04.335866202Z" level=info msg="StartContainer for \"d23c5af4279dc3b576f396a0d553c997c99adef3ed3004e5640f3ce0acbe2e1a\"" Dec 13 01:07:04.370736 systemd[1]: Started cri-containerd-d23c5af4279dc3b576f396a0d553c997c99adef3ed3004e5640f3ce0acbe2e1a.scope - libcontainer container d23c5af4279dc3b576f396a0d553c997c99adef3ed3004e5640f3ce0acbe2e1a. Dec 13 01:07:04.406933 containerd[1462]: time="2024-12-13T01:07:04.406877703Z" level=info msg="StartContainer for \"d23c5af4279dc3b576f396a0d553c997c99adef3ed3004e5640f3ce0acbe2e1a\" returns successfully" Dec 13 01:07:04.409498 systemd[1]: cri-containerd-d23c5af4279dc3b576f396a0d553c997c99adef3ed3004e5640f3ce0acbe2e1a.scope: Deactivated successfully. Dec 13 01:07:04.435773 containerd[1462]: time="2024-12-13T01:07:04.435693768Z" level=info msg="shim disconnected" id=d23c5af4279dc3b576f396a0d553c997c99adef3ed3004e5640f3ce0acbe2e1a namespace=k8s.io Dec 13 01:07:04.435773 containerd[1462]: time="2024-12-13T01:07:04.435754201Z" level=warning msg="cleaning up after shim disconnected" id=d23c5af4279dc3b576f396a0d553c997c99adef3ed3004e5640f3ce0acbe2e1a namespace=k8s.io Dec 13 01:07:04.435773 containerd[1462]: time="2024-12-13T01:07:04.435763208Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:07:04.573266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d23c5af4279dc3b576f396a0d553c997c99adef3ed3004e5640f3ce0acbe2e1a-rootfs.mount: Deactivated successfully. 
Dec 13 01:07:04.906835 kubelet[2518]: E1213 01:07:04.906791 2518 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:07:05.319905 kubelet[2518]: E1213 01:07:05.319561 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:05.321732 containerd[1462]: time="2024-12-13T01:07:05.321675963Z" level=info msg="CreateContainer within sandbox \"cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:07:05.501089 containerd[1462]: time="2024-12-13T01:07:05.501026817Z" level=info msg="CreateContainer within sandbox \"cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f1f2569a63468be4e9d10b1bd5764d131801320d69a4b092ebc86a550ee23111\"" Dec 13 01:07:05.501793 containerd[1462]: time="2024-12-13T01:07:05.501551023Z" level=info msg="StartContainer for \"f1f2569a63468be4e9d10b1bd5764d131801320d69a4b092ebc86a550ee23111\"" Dec 13 01:07:05.535714 systemd[1]: Started cri-containerd-f1f2569a63468be4e9d10b1bd5764d131801320d69a4b092ebc86a550ee23111.scope - libcontainer container f1f2569a63468be4e9d10b1bd5764d131801320d69a4b092ebc86a550ee23111. Dec 13 01:07:05.559981 systemd[1]: cri-containerd-f1f2569a63468be4e9d10b1bd5764d131801320d69a4b092ebc86a550ee23111.scope: Deactivated successfully. 
Dec 13 01:07:05.602049 containerd[1462]: time="2024-12-13T01:07:05.602004602Z" level=info msg="StartContainer for \"f1f2569a63468be4e9d10b1bd5764d131801320d69a4b092ebc86a550ee23111\" returns successfully" Dec 13 01:07:05.620224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1f2569a63468be4e9d10b1bd5764d131801320d69a4b092ebc86a550ee23111-rootfs.mount: Deactivated successfully. Dec 13 01:07:05.699194 containerd[1462]: time="2024-12-13T01:07:05.699123346Z" level=info msg="shim disconnected" id=f1f2569a63468be4e9d10b1bd5764d131801320d69a4b092ebc86a550ee23111 namespace=k8s.io Dec 13 01:07:05.699194 containerd[1462]: time="2024-12-13T01:07:05.699185092Z" level=warning msg="cleaning up after shim disconnected" id=f1f2569a63468be4e9d10b1bd5764d131801320d69a4b092ebc86a550ee23111 namespace=k8s.io Dec 13 01:07:05.699194 containerd[1462]: time="2024-12-13T01:07:05.699198387Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:07:06.324295 kubelet[2518]: E1213 01:07:06.324261 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:06.326166 containerd[1462]: time="2024-12-13T01:07:06.326115319Z" level=info msg="CreateContainer within sandbox \"cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:07:06.585250 containerd[1462]: time="2024-12-13T01:07:06.585069830Z" level=info msg="CreateContainer within sandbox \"cb2d15616993deb952ade9701cee9b027ca0dbd5a45b66405186b07bfa7a80d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"814568dfc34d607a5f19d65c3b846a43bd1c85fd28024ccd8cae05893550f393\"" Dec 13 01:07:06.585817 containerd[1462]: time="2024-12-13T01:07:06.585786949Z" level=info msg="StartContainer for \"814568dfc34d607a5f19d65c3b846a43bd1c85fd28024ccd8cae05893550f393\"" Dec 13 01:07:06.620891 
systemd[1]: Started cri-containerd-814568dfc34d607a5f19d65c3b846a43bd1c85fd28024ccd8cae05893550f393.scope - libcontainer container 814568dfc34d607a5f19d65c3b846a43bd1c85fd28024ccd8cae05893550f393. Dec 13 01:07:06.653152 containerd[1462]: time="2024-12-13T01:07:06.653102357Z" level=info msg="StartContainer for \"814568dfc34d607a5f19d65c3b846a43bd1c85fd28024ccd8cae05893550f393\" returns successfully" Dec 13 01:07:06.832394 kubelet[2518]: I1213 01:07:06.832325 2518 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:07:06Z","lastTransitionTime":"2024-12-13T01:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 01:07:07.122628 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 01:07:07.328291 kubelet[2518]: E1213 01:07:07.328256 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:08.716695 kubelet[2518]: E1213 01:07:08.716654 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:08.848061 kubelet[2518]: E1213 01:07:08.847972 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:10.192368 systemd-networkd[1399]: lxc_health: Link UP Dec 13 01:07:10.201961 systemd-networkd[1399]: lxc_health: Gained carrier Dec 13 01:07:10.717398 kubelet[2518]: E1213 01:07:10.717324 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:10.933342 kubelet[2518]: I1213 01:07:10.930556 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qmlbv" podStartSLOduration=8.930538016 podStartE2EDuration="8.930538016s" podCreationTimestamp="2024-12-13 01:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:07:07.341237407 +0000 UTC m=+92.597207651" watchObservedRunningTime="2024-12-13 01:07:10.930538016 +0000 UTC m=+96.186508260" Dec 13 01:07:11.249859 systemd[1]: run-containerd-runc-k8s.io-814568dfc34d607a5f19d65c3b846a43bd1c85fd28024ccd8cae05893550f393-runc.ubpcYb.mount: Deactivated successfully. Dec 13 01:07:11.335005 kubelet[2518]: E1213 01:07:11.334960 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:11.848169 kubelet[2518]: E1213 01:07:11.848108 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:11.921829 systemd-networkd[1399]: lxc_health: Gained IPv6LL Dec 13 01:07:12.336270 kubelet[2518]: E1213 01:07:12.336126 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:15.557399 systemd[1]: run-containerd-runc-k8s.io-814568dfc34d607a5f19d65c3b846a43bd1c85fd28024ccd8cae05893550f393-runc.JtjOIv.mount: Deactivated successfully. Dec 13 01:07:15.603954 sshd[4375]: pam_unix(sshd:session): session closed for user core Dec 13 01:07:15.608901 systemd[1]: sshd@29-10.0.0.22:22-10.0.0.1:42732.service: Deactivated successfully. Dec 13 01:07:15.611014 systemd[1]: session-30.scope: Deactivated successfully. 
Dec 13 01:07:15.611656 systemd-logind[1445]: Session 30 logged out. Waiting for processes to exit. Dec 13 01:07:15.612533 systemd-logind[1445]: Removed session 30. Dec 13 01:07:16.847985 kubelet[2518]: E1213 01:07:16.847937 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:07:16.847985 kubelet[2518]: E1213 01:07:16.848011 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"