Dec 13 01:18:10.887255 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:18:10.887284 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:18:10.887299 kernel: BIOS-provided physical RAM map:
Dec 13 01:18:10.887307 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:18:10.887315 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:18:10.887324 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:18:10.887334 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 01:18:10.887342 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 01:18:10.887351 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:18:10.887362 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:18:10.887371 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:18:10.887379 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:18:10.887388 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 01:18:10.887396 kernel: NX (Execute Disable) protection: active
Dec 13 01:18:10.887407 kernel: APIC: Static calls initialized
Dec 13 01:18:10.887419 kernel: SMBIOS 2.8 present.
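The two "usable" e820 ranges above determine how much RAM the guest actually has. A minimal Python sketch summing them (ranges copied from the log; both bounds are inclusive):

```python
import re

# "usable" BIOS-e820 ranges from the log above; both bounds are inclusive.
E820 = """
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""

usable = sum(int(hi, 16) - int(lo, 16) + 1
             for lo, hi in re.findall(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", E820))
# Prints ~2511 MiB, within a few KiB of the 2571752K total the kernel reports later.
print(f"{usable} bytes = {usable / 2**20:.0f} MiB")
```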
Dec 13 01:18:10.887428 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 01:18:10.887437 kernel: Hypervisor detected: KVM
Dec 13 01:18:10.887447 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:18:10.887456 kernel: kvm-clock: using sched offset of 2194312299 cycles
Dec 13 01:18:10.887465 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:18:10.887475 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:18:10.887485 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:18:10.887495 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:18:10.887504 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 01:18:10.887518 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:18:10.887528 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:18:10.887537 kernel: Using GB pages for direct mapping
Dec 13 01:18:10.887547 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:18:10.887566 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 01:18:10.887576 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:18:10.887586 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:18:10.887595 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:18:10.887608 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 01:18:10.887618 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:18:10.887628 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:18:10.887637 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:18:10.887646 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:18:10.887656 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 01:18:10.887666 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 01:18:10.887680 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 01:18:10.887693 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 01:18:10.887703 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 01:18:10.887713 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 01:18:10.887723 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 01:18:10.887732 kernel: No NUMA configuration found
Dec 13 01:18:10.887742 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 01:18:10.887752 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 01:18:10.887765 kernel: Zone ranges:
Dec 13 01:18:10.887775 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:18:10.887786 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 01:18:10.887796 kernel: Normal empty
Dec 13 01:18:10.887805 kernel: Movable zone start for each node
Dec 13 01:18:10.887815 kernel: Early memory node ranges
Dec 13 01:18:10.887825 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:18:10.887835 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 01:18:10.887845 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 01:18:10.887859 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:18:10.887869 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:18:10.887878 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:18:10.887888 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:18:10.887898 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:18:10.887908 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:18:10.887918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:18:10.887928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:18:10.887938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:18:10.887951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:18:10.887961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:18:10.887971 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:18:10.887981 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:18:10.887991 kernel: TSC deadline timer available
Dec 13 01:18:10.888001 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:18:10.888048 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:18:10.888059 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:18:10.888069 kernel: kvm-guest: setup PV sched yield
Dec 13 01:18:10.888079 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:18:10.888093 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:18:10.888103 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:18:10.888113 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:18:10.888123 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:18:10.888133 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:18:10.888143 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:18:10.888152 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:18:10.888163 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:18:10.888174 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:18:10.888188 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:18:10.888198 kernel: random: crng init done
Dec 13 01:18:10.888208 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:18:10.888218 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:18:10.888228 kernel: Fallback order for Node 0: 0
Dec 13 01:18:10.888238 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 01:18:10.888247 kernel: Policy zone: DMA32
Dec 13 01:18:10.888257 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:18:10.888271 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Dec 13 01:18:10.888281 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:18:10.888291 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:18:10.888300 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:18:10.888310 kernel: Dynamic Preempt: voluntary
Dec 13 01:18:10.888320 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:18:10.888331 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:18:10.888341 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:18:10.888351 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:18:10.888365 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:18:10.888375 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:18:10.888384 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:18:10.888394 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:18:10.888404 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:18:10.888414 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:18:10.888423 kernel: Console: colour VGA+ 80x25
Dec 13 01:18:10.888433 kernel: printk: console [ttyS0] enabled
Dec 13 01:18:10.888443 kernel: ACPI: Core revision 20230628
Dec 13 01:18:10.888456 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:18:10.888466 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:18:10.888476 kernel: x2apic enabled
Dec 13 01:18:10.888486 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:18:10.888496 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:18:10.888506 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:18:10.888516 kernel: kvm-guest: setup PV IPIs
Dec 13 01:18:10.888541 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:18:10.888551 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:18:10.888571 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:18:10.888582 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:18:10.888593 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:18:10.888607 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:18:10.888618 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:18:10.888628 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:18:10.888639 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:18:10.888649 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:18:10.888664 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:18:10.888674 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:18:10.888685 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:18:10.888697 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:18:10.888709 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:18:10.888722 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:18:10.888733 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:18:10.888743 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:18:10.888757 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:18:10.888767 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:18:10.888778 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:18:10.888788 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:18:10.888799 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:18:10.888809 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:18:10.888820 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:18:10.888830 kernel: landlock: Up and running.
Dec 13 01:18:10.888840 kernel: SELinux: Initializing.
Dec 13 01:18:10.888853 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:18:10.888864 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:18:10.888874 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:18:10.888885 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:18:10.888896 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:18:10.888907 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:18:10.888917 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:18:10.888927 kernel: ... version: 0
Dec 13 01:18:10.888938 kernel: ... bit width: 48
Dec 13 01:18:10.888951 kernel: ... generic registers: 6
Dec 13 01:18:10.888961 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:18:10.888972 kernel: ... max period: 00007fffffffffff
Dec 13 01:18:10.888982 kernel: ... fixed-purpose events: 0
Dec 13 01:18:10.888992 kernel: ... event mask: 000000000000003f
Dec 13 01:18:10.889003 kernel: signal: max sigframe size: 1776
Dec 13 01:18:10.889037 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:18:10.889048 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:18:10.889058 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:18:10.889073 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:18:10.889083 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:18:10.889094 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:18:10.889104 kernel: smpboot: Max logical packages: 1
Dec 13 01:18:10.889114 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:18:10.889125 kernel: devtmpfs: initialized
Dec 13 01:18:10.889135 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:18:10.889145 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:18:10.889156 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:18:10.889169 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:18:10.889179 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:18:10.889189 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:18:10.889199 kernel: audit: type=2000 audit(1734052689.397:1): state=initialized audit_enabled=0 res=1
Dec 13 01:18:10.889209 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:18:10.889219 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:18:10.889230 kernel: cpuidle: using governor menu
Dec 13 01:18:10.889240 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:18:10.889250 kernel: dca service started, version 1.12.1
Dec 13 01:18:10.889264 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:18:10.889274 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:18:10.889284 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:18:10.889295 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
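The "Calibrating delay loop (skipped)" and smpboot figures above are internally consistent: with the usual CONFIG_HZ=1000 (an assumption, but it matches), BogoMIPS is lpj * HZ / 500000, and the 4-CPU total is four times the per-CPU value. A worked check in Python:

```python
lpj = 2794748                 # loops_per_jiffy from the calibration line above
HZ = 1000                     # assumed tick rate (CONFIG_HZ=1000)
per_cpu = lpj * HZ / 500000
print(per_cpu)                # 5589.496; the kernel truncates this to "5589.49 BogoMIPS"
print(round(4 * per_cpu, 2))  # 22357.98 -> "Total of 4 processors activated (22357.98 BogoMIPS)"
```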
Dec 13 01:18:10.889305 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:18:10.889315 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:18:10.889325 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:18:10.889336 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:18:10.889346 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:18:10.889359 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:18:10.889370 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:18:10.889380 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:18:10.889390 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:18:10.889400 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:18:10.889410 kernel: ACPI: Interpreter enabled
Dec 13 01:18:10.889420 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:18:10.889431 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:18:10.889441 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:18:10.889455 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:18:10.889465 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:18:10.889475 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:18:10.889724 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:18:10.889890 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:18:10.890094 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:18:10.890111 kernel: PCI host bridge to bus 0000:00
Dec 13 01:18:10.890279 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:18:10.890429 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:18:10.890591 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:18:10.890745 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:18:10.890891 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:18:10.891060 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:18:10.891191 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:18:10.891362 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:18:10.891495 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:18:10.891629 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 01:18:10.891753 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 01:18:10.891874 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 01:18:10.891995 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:18:10.892145 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:18:10.892282 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 01:18:10.892403 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 01:18:10.892524 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 01:18:10.892670 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:18:10.892792 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 01:18:10.892912 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 01:18:10.893055 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 01:18:10.893194 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:18:10.893328 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 01:18:10.893449 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 01:18:10.893578 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 01:18:10.893701 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 01:18:10.893833 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:18:10.893958 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:18:10.894157 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:18:10.894291 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 01:18:10.894409 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 01:18:10.894535 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:18:10.894669 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:18:10.894680 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:18:10.894692 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:18:10.894700 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:18:10.894708 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:18:10.894715 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:18:10.894723 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:18:10.894730 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:18:10.894738 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:18:10.894745 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:18:10.894753 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:18:10.894763 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:18:10.894770 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:18:10.894778 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:18:10.894786 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:18:10.894793 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:18:10.894801 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:18:10.894808 kernel: iommu: Default domain type: Translated
Dec 13 01:18:10.894816 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:18:10.894823 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:18:10.894833 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:18:10.894841 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:18:10.894848 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 01:18:10.894970 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:18:10.895109 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:18:10.895241 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:18:10.895253 kernel: vgaarb: loaded
Dec 13 01:18:10.895261 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:18:10.895272 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:18:10.895280 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:18:10.895288 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:18:10.895296 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:18:10.895303 kernel: pnp: PnP ACPI init
Dec 13 01:18:10.895432 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:18:10.895443 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:18:10.895451 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:18:10.895461 kernel: NET: Registered PF_INET protocol family
Dec 13 01:18:10.895469 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:18:10.895477 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:18:10.895485 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:18:10.895493 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:18:10.895501 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:18:10.895508 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:18:10.895516 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:18:10.895524 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:18:10.895534 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:18:10.895541 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:18:10.895663 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:18:10.895779 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:18:10.895889 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:18:10.895999 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:18:10.896168 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:18:10.896284 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:18:10.896298 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:18:10.896306 kernel: Initialise system trusted keyrings
Dec 13 01:18:10.896314 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:18:10.896322 kernel: Key type asymmetric registered
Dec 13 01:18:10.896329 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:18:10.896337 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:18:10.896344 kernel: io scheduler mq-deadline registered
Dec 13 01:18:10.896352 kernel: io scheduler kyber registered
Dec 13 01:18:10.896360 kernel: io scheduler bfq registered
Dec 13 01:18:10.896368 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:18:10.896379 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:18:10.896387 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:18:10.896394 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:18:10.896402 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:18:10.896410 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:18:10.896417 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:18:10.896425 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:18:10.896432 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:18:10.896569 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:18:10.896687 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:18:10.896697 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:18:10.896808 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:18:10 UTC (1734052690)
Dec 13 01:18:10.896918 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:18:10.896928 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:18:10.896936 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:18:10.896943 kernel: Segment Routing with IPv6
Dec 13 01:18:10.896954 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:18:10.896962 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:18:10.896970 kernel: Key type dns_resolver registered
Dec 13 01:18:10.896978 kernel: IPI shorthand broadcast: enabled
Dec 13 01:18:10.896986 kernel: sched_clock: Marking stable (838002010, 105996233)->(993199975, -49201732)
Dec 13 01:18:10.896993 kernel: registered taskstats version 1
Dec 13 01:18:10.897001 kernel: Loading compiled-in X.509 certificates
Dec 13 01:18:10.897025 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:18:10.897033 kernel: Key type .fscrypt registered
Dec 13 01:18:10.897040 kernel: Key type fscrypt-provisioning registered
Dec 13 01:18:10.897050 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:18:10.897058 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:18:10.897066 kernel: ima: No architecture policies found
Dec 13 01:18:10.897073 kernel: clk: Disabling unused clocks
Dec 13 01:18:10.897081 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:18:10.897089 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:18:10.897096 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:18:10.897104 kernel: Run /init as init process
Dec 13 01:18:10.897113 kernel: with arguments:
Dec 13 01:18:10.897121 kernel: /init
Dec 13 01:18:10.897128 kernel: with environment:
Dec 13 01:18:10.897135 kernel: HOME=/
Dec 13 01:18:10.897143 kernel: TERM=linux
Dec 13 01:18:10.897151 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:18:10.897162 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:18:10.897175 systemd[1]: Detected virtualization kvm.
Dec 13 01:18:10.897189 systemd[1]: Detected architecture x86-64.
Dec 13 01:18:10.897197 systemd[1]: Running in initrd.
Dec 13 01:18:10.897205 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:18:10.897213 systemd[1]: Hostname set to .
Dec 13 01:18:10.897221 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:18:10.897229 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:18:10.897237 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:18:10.897245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:18:10.897257 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:18:10.897276 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
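The rtc_cmos line above pairs the broken-out date with its Unix epoch, and the two agree. A quick sanity check:

```python
from datetime import datetime, timezone

# The RTC line logs "2024-12-13T01:18:10 UTC (1734052690)"; the two should match.
print(datetime.fromtimestamp(1734052690, tz=timezone.utc).isoformat())
# -> 2024-12-13T01:18:10+00:00
```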
Dec 13 01:18:10.897287 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:18:10.897295 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:18:10.897305 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:18:10.897316 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:18:10.897324 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:18:10.897333 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:18:10.897341 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:18:10.897349 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:18:10.897358 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:18:10.897366 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:18:10.897374 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:18:10.897385 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:18:10.897393 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:18:10.897401 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:18:10.897409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:18:10.897418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:18:10.897426 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:18:10.897434 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:18:10.897443 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:18:10.897451 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:18:10.897462 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:18:10.897470 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:18:10.897478 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:18:10.897487 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:18:10.897495 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:18:10.897503 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:18:10.897512 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:18:10.897520 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:18:10.897531 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:18:10.897571 systemd-journald[193]: Collecting audit messages is disabled.
Dec 13 01:18:10.897595 systemd-journald[193]: Journal started
Dec 13 01:18:10.897616 systemd-journald[193]: Runtime Journal (/run/log/journal/3f92efba5658491f8f1e7b98768077bb) is 6.0M, max 48.4M, 42.3M free.
Dec 13 01:18:10.885107 systemd-modules-load[194]: Inserted module 'overlay'
Dec 13 01:18:10.918785 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:18:10.918815 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:18:10.918827 kernel: Bridge firewalling registered
Dec 13 01:18:10.912306 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 13 01:18:10.919099 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:18:10.919614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:18:10.935318 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:18:10.938482 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:18:10.942203 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:18:10.942577 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:18:10.945368 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:18:10.952990 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:18:10.956567 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:18:10.957841 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:18:10.960623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:18:10.966279 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:18:10.973501 dracut-cmdline[224]: dracut-dracut-053
Dec 13 01:18:10.977481 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:18:10.975192 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:18:11.009737 systemd-resolved[232]: Positive Trust Anchors:
Dec 13 01:18:11.009754 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:18:11.009786 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:18:11.012293 systemd-resolved[232]: Defaulting to hostname 'linux'.
Dec 13 01:18:11.013363 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:18:11.019904 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:18:11.081046 kernel: SCSI subsystem initialized
Dec 13 01:18:11.090042 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:18:11.101048 kernel: iscsi: registered transport (tcp)
Dec 13 01:18:11.122265 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:18:11.122293 kernel: QLogic iSCSI HBA Driver
Dec 13 01:18:11.174403 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:18:11.187162 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:18:11.211465 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:18:11.211491 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:18:11.212513 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:18:11.254031 kernel: raid6: avx2x4 gen() 28961 MB/s
Dec 13 01:18:11.271031 kernel: raid6: avx2x2 gen() 31160 MB/s
Dec 13 01:18:11.288170 kernel: raid6: avx2x1 gen() 25727 MB/s
Dec 13 01:18:11.288185 kernel: raid6: using algorithm avx2x2 gen() 31160 MB/s
Dec 13 01:18:11.306132 kernel: raid6: .... xor() 19888 MB/s, rmw enabled
Dec 13 01:18:11.306145 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:18:11.326037 kernel: xor: automatically using best checksumming function avx
Dec 13 01:18:11.481056 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:18:11.495594 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:18:11.511158 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:18:11.522424 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Dec 13 01:18:11.527055 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:18:11.543171 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:18:11.559467 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Dec 13 01:18:11.594637 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:18:11.606138 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:18:11.667625 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:18:11.676342 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:18:11.688743 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:18:11.691475 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:18:11.695553 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 01:18:11.718500 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:18:11.718661 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:18:11.718673 kernel: GPT:9289727 != 19775487
Dec 13 01:18:11.718684 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:18:11.718694 kernel: GPT:9289727 != 19775487
Dec 13 01:18:11.718704 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:18:11.718713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:18:11.718723 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:18:11.696073 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:18:11.697380 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:18:11.710155 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:18:11.731048 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:18:11.741731 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:18:11.741790 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:18:11.741732 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:18:11.743394 kernel: libata version 3.00 loaded.
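The GPT complaints above are the usual signature of a disk image that was grown after partitioning: the backup header still sits where the original, smaller image ended instead of at the last LBA of the enlarged disk. The logged numbers bear this out (a worked check; 512-byte sectors as reported for vda):

```python
disk_sectors = 19775488                  # "[vda] 19775488 512-byte logical blocks" above
found_backup_lba = 9289727               # "GPT:9289727 != 19775487"
expected_backup_lba = disk_sectors - 1   # backup GPT header belongs on the last LBA

print(expected_backup_lba)               # 19775487, as the kernel expected
# ~4.43 GiB: apparently the size of the image when it was partitioned
print((found_backup_lba + 1) * 512 / 2**30)
```

This is presumably the condition the disk-uuid step further down repairs ("Primary Header is updated. ... Secondary Header is updated."), since later rescans of vda raise no GPT warnings.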
Dec 13 01:18:11.742094 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:18:11.745107 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:18:11.751773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:18:11.754119 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (461) Dec 13 01:18:11.751898 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:18:11.754129 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:18:11.759072 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (475) Dec 13 01:18:11.761393 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:18:11.781121 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:18:11.781148 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:18:11.781316 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:18:11.781468 kernel: scsi host0: ahci Dec 13 01:18:11.781641 kernel: scsi host1: ahci Dec 13 01:18:11.781801 kernel: scsi host2: ahci Dec 13 01:18:11.781955 kernel: scsi host3: ahci Dec 13 01:18:11.782139 kernel: scsi host4: ahci Dec 13 01:18:11.782290 kernel: scsi host5: ahci Dec 13 01:18:11.782446 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 01:18:11.782459 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 01:18:11.782471 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 01:18:11.782482 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 01:18:11.782494 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 01:18:11.782505 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 01:18:11.763023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:18:11.781131 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 01:18:11.823534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:18:11.829215 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:18:11.834194 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:18:11.834274 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:18:11.842158 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:18:11.853225 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:18:11.855119 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:18:11.873280 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:18:11.930192 disk-uuid[554]: Primary Header is updated. Dec 13 01:18:11.930192 disk-uuid[554]: Secondary Entries is updated. Dec 13 01:18:11.930192 disk-uuid[554]: Secondary Header is updated. 
Dec 13 01:18:11.934059 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:18:11.938058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:18:12.092905 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:18:12.093000 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:18:12.093029 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:18:12.094041 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:18:12.095032 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:18:12.096035 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:18:12.096056 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:18:12.097124 kernel: ata3.00: applying bridge limits
Dec 13 01:18:12.098041 kernel: ata3.00: configured for UDMA/100
Dec 13 01:18:12.098061 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:18:12.146048 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:18:12.159708 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:18:12.159723 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:18:12.940061 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:18:12.940211 disk-uuid[564]: The operation has completed successfully.
Dec 13 01:18:12.970400 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:18:12.970530 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:18:12.990217 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:18:12.995807 sh[592]: Success
Dec 13 01:18:13.009036 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:18:13.045898 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:18:13.063611 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:18:13.066782 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:18:13.077366 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:18:13.077392 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:18:13.077403 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:18:13.078383 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:18:13.080030 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:18:13.084544 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:18:13.084794 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:18:13.095174 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:18:13.097002 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:18:13.106122 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:18:13.106149 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:18:13.106160 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:18:13.109044 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:18:13.118642 systemd[1]: mnt-oem.mount: Deactivated successfully.
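verity-setup above maps /dev/mapper/usr using the dm-verity root hash passed on the kernel command line (verity.usrhash=2fdbba50...). A minimal sketch of pulling that value out of /proc/cmdline, roughly what the initrd tooling does conceptually:

```python
params = {}
for tok in open("/proc/cmdline").read().split():
    if "=" in tok:
        key, _, val = tok.partition("=")
        params[key] = val   # later duplicates (e.g. the repeated rootflags=rw) win

print(params.get("verity.usrhash"))  # 2fdbba50b59d8c8a... on the boot captured above
```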
Dec 13 01:18:13.120640 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:18:13.129141 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:18:13.135158 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:18:13.224025 ignition[680]: Ignition 2.19.0
Dec 13 01:18:13.224037 ignition[680]: Stage: fetch-offline
Dec 13 01:18:13.224079 ignition[680]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:18:13.224089 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:18:13.224184 ignition[680]: parsed url from cmdline: ""
Dec 13 01:18:13.224189 ignition[680]: no config URL provided
Dec 13 01:18:13.224194 ignition[680]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:18:13.224204 ignition[680]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:18:13.224231 ignition[680]: op(1): [started] loading QEMU firmware config module
Dec 13 01:18:13.224237 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:18:13.233342 ignition[680]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:18:13.241089 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:18:13.274351 ignition[680]: parsing config with SHA512: 71fc1d43a2d08dde8dae11a79b0d6a011191a0b972b2d0a7bb7c4988485b2637ebb9e158b1230d2e44b3ab6ac60099ce42026c46f4a0be653e9a7aa3b6dfaaed
Dec 13 01:18:13.278620 unknown[680]: fetched base config from "system"
Dec 13 01:18:13.279394 unknown[680]: fetched user config from "qemu"
Dec 13 01:18:13.280080 ignition[680]: fetch-offline: fetch-offline passed
Dec 13 01:18:13.280210 ignition[680]: Ignition finished successfully
Dec 13 01:18:13.292159 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:18:13.292497 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:18:13.314273 systemd-networkd[781]: lo: Link UP
Dec 13 01:18:13.314283 systemd-networkd[781]: lo: Gained carrier
Dec 13 01:18:13.315844 systemd-networkd[781]: Enumeration completed
Dec 13 01:18:13.315944 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:18:13.316275 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:18:13.316279 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:18:13.318068 systemd[1]: Reached target network.target - Network.
Dec 13 01:18:13.318127 systemd-networkd[781]: eth0: Link UP
Dec 13 01:18:13.318132 systemd-networkd[781]: eth0: Gained carrier
Dec 13 01:18:13.318142 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:18:13.320055 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:18:13.332077 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:18:13.332131 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
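Ignition above logs the SHA512 of the config it parsed (71fc1d43...). The same digest can be recomputed over a local copy of the user config that was handed in via QEMU's fw_cfg; the filename here is hypothetical:

```python
import hashlib

with open("config.ign", "rb") as f:   # hypothetical local copy of the fw_cfg user config
    print(hashlib.sha512(f.read()).hexdigest())
# compare against the "parsing config with SHA512: 71fc1d43..." line above
```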
Dec 13 01:18:13.349371 ignition[784]: Ignition 2.19.0
Dec 13 01:18:13.349384 ignition[784]: Stage: kargs
Dec 13 01:18:13.349567 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:18:13.349579 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:18:13.350493 ignition[784]: kargs: kargs passed
Dec 13 01:18:13.350537 ignition[784]: Ignition finished successfully
Dec 13 01:18:13.355772 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:18:13.365225 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:18:13.383209 ignition[793]: Ignition 2.19.0
Dec 13 01:18:13.383219 ignition[793]: Stage: disks
Dec 13 01:18:13.383394 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:18:13.383406 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:18:13.387136 ignition[793]: disks: disks passed
Dec 13 01:18:13.387181 ignition[793]: Ignition finished successfully
Dec 13 01:18:13.390809 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:18:13.392063 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:18:13.393965 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:18:13.394347 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:18:13.394691 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:18:13.395032 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:18:13.416121 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:18:13.427853 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:18:13.434096 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:18:13.444093 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:18:13.531042 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:18:13.531694 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:18:13.533129 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:18:13.545078 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:18:13.545875 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:18:13.546189 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:18:13.546224 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:18:13.558837 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
Dec 13 01:18:13.558853 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:18:13.558864 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:18:13.558875 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:18:13.546244 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:18:13.562058 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:18:13.553649 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:18:13.559668 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
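The fsck summary above ("clean, 14/553520 files, 52654/553472 blocks") implies a nearly empty ROOT filesystem on this first boot, as a quick ratio check shows:

```python
files_used, files_total = 14, 553520
blocks_used, blocks_total = 52654, 553472
print(f"inodes in use: {files_used / files_total:.4%}")   # ~0.0025%
print(f"blocks in use: {blocks_used / blocks_total:.1%}") # ~9.5%
```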
Dec 13 01:18:13.563849 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:18:13.607252 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:18:13.612642 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:18:13.616422 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:18:13.620465 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:18:13.703426 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:18:13.715112 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:18:13.716713 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:18:13.724032 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:18:13.741195 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:18:13.751534 ignition[927]: INFO : Ignition 2.19.0
Dec 13 01:18:13.751534 ignition[927]: INFO : Stage: mount
Dec 13 01:18:13.753172 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:18:13.753172 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:18:13.755995 ignition[927]: INFO : mount: mount passed
Dec 13 01:18:13.756795 ignition[927]: INFO : Ignition finished successfully
Dec 13 01:18:13.759613 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:18:13.770085 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:18:14.076929 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:18:14.086261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:18:14.093904 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939)
Dec 13 01:18:14.093932 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:18:14.093950 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:18:14.095380 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:18:14.098031 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:18:14.099268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:18:14.121886 ignition[956]: INFO : Ignition 2.19.0 Dec 13 01:18:14.121886 ignition[956]: INFO : Stage: files Dec 13 01:18:14.123532 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:18:14.123532 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:18:14.126281 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:18:14.127898 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:18:14.127898 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:18:14.130892 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:18:14.132406 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:18:14.132406 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:18:14.131681 unknown[956]: wrote ssh authorized keys file for user: core Dec 13 01:18:14.136130 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:18:14.136130 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:18:14.136130 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:18:14.136130 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:18:14.191105 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:18:14.305174 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:18:14.307244 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:18:14.307244 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 01:18:14.669521 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Dec 13 01:18:14.767443 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:18:14.767443 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:18:14.771139 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:18:14.771139 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:18:14.774550 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:18:14.774550 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:18:14.778058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:18:14.778058 ignition[956]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:18:14.778058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:18:14.778058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:18:14.778058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:18:14.778058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:18:14.778058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:18:14.778058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:18:14.778058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:18:14.796192 systemd-networkd[781]: eth0: Gained IPv6LL Dec 13 01:18:15.051913 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Dec 13 01:18:15.402141 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:18:15.402141 ignition[956]: INFO : files: op(d): [started] processing unit "containerd.service" Dec 13 01:18:15.406035 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:18:15.406035 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:18:15.406035 ignition[956]: INFO : files: op(d): [finished] processing unit "containerd.service" Dec 13 01:18:15.406035 ignition[956]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Dec 13 01:18:15.406035 ignition[956]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:18:15.406035 ignition[956]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:18:15.406035 ignition[956]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Dec 13 01:18:15.406035 ignition[956]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Dec 13 01:18:15.406035 ignition[956]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:18:15.406035 ignition[956]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:18:15.406035 ignition[956]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Dec 13 01:18:15.406035 ignition[956]: 
INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:18:15.443545 ignition[956]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:18:15.449797 ignition[956]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:18:15.451554 ignition[956]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:18:15.451554 ignition[956]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:18:15.454434 ignition[956]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:18:15.455887 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:18:15.457701 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:18:15.459392 ignition[956]: INFO : files: files passed Dec 13 01:18:15.460142 ignition[956]: INFO : Ignition finished successfully Dec 13 01:18:15.463394 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:18:15.479135 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:18:15.482145 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:18:15.483199 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:18:15.483317 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:18:15.511469 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:18:15.515189 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:18:15.515189 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:18:15.518337 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:18:15.522094 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:18:15.524719 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:18:15.531145 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:18:15.564636 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:18:15.564764 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:18:15.565888 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:18:15.568131 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:18:15.570147 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:18:15.570857 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:18:15.589432 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:18:15.599119 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:18:15.608868 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
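The files stage that just finished is driven by an Ignition config the journal does not reproduce; only its effects (ops op(1) through op(16)) are logged. A hypothetical config of roughly the shape that would produce them -- the spec version, key material, data-URL contents, and unit bodies are all assumptions, with elided parts marked "...":

    cat > example.ign <<'EOF'
    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [ { "name": "core",
                     "sshAuthorizedKeys": [ "ssh-ed25519 AAAA... user@host" ] } ]
      },
      "storage": {
        "files": [
          { "path": "/etc/flatcar/update.conf",
            "contents": { "source": "data:,..." } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "containerd.service",
            "dropins": [ { "name": "10-use-cgroupfs.conf",
                           "contents": "[Service]\n..." } ] },
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." },
          { "name": "coreos-metadata.service", "enabled": false }
        ]
      }
    }
    EOF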
Dec 13 01:18:15.610151 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:18:15.612518 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:18:15.614541 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:18:15.614647 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:18:15.617046 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:18:15.618625 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:18:15.620711 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:18:15.622772 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:18:15.624852 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:18:15.627063 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:18:15.629203 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:18:15.631513 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:18:15.633554 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:18:15.635757 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:18:15.637586 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:18:15.637694 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:18:15.640200 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:18:15.641678 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:18:15.643806 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:18:15.643933 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:18:15.646108 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:18:15.646213 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:18:15.648489 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:18:15.648596 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:18:15.650639 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:18:15.652411 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:18:15.656118 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:18:15.658278 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:18:15.660259 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:18:15.662085 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:18:15.662184 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:18:15.664156 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:18:15.664245 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:18:15.666645 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:18:15.666760 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:18:15.668779 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:18:15.668883 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:18:15.682152 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 13 01:18:15.684683 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:18:15.684802 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:18:15.684914 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:18:15.685396 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:18:15.685500 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:18:15.689339 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:18:15.689449 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:18:15.699459 ignition[1011]: INFO : Ignition 2.19.0 Dec 13 01:18:15.699459 ignition[1011]: INFO : Stage: umount Dec 13 01:18:15.701268 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:18:15.701268 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:18:15.701268 ignition[1011]: INFO : umount: umount passed Dec 13 01:18:15.701268 ignition[1011]: INFO : Ignition finished successfully Dec 13 01:18:15.702821 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:18:15.702960 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:18:15.705212 systemd[1]: Stopped target network.target - Network. Dec 13 01:18:15.706363 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:18:15.706422 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:18:15.708312 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:18:15.708360 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:18:15.710312 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:18:15.710359 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:18:15.712575 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:18:15.712638 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:18:15.714645 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:18:15.716749 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:18:15.719650 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:18:15.724049 systemd-networkd[781]: eth0: DHCPv6 lease lost Dec 13 01:18:15.726825 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:18:15.726961 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:18:15.728673 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:18:15.728716 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:18:15.734349 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:18:15.736214 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:18:15.736266 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:18:15.738618 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:18:15.740940 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:18:15.741070 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:18:15.746760 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:18:15.746846 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 01:18:15.750922 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:18:15.750983 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:18:15.751174 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:18:15.751219 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:18:15.755122 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:18:15.755240 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:18:15.769778 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:18:15.769959 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:18:15.772398 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:18:15.772450 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:18:15.774754 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:18:15.774794 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:18:15.776975 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:18:15.777034 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:18:15.779331 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:18:15.779378 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:18:15.781519 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:18:15.781566 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:18:15.790168 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:18:15.791926 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:18:15.791983 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:18:15.794562 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:18:15.794614 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:18:15.797090 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:18:15.797139 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:18:15.799873 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:18:15.799921 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:18:15.802720 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:18:15.802828 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:18:15.915491 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:18:15.915624 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:18:15.917871 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:18:15.919773 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:18:15.919823 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:18:15.930160 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:18:15.936812 systemd[1]: Switching root. Dec 13 01:18:15.969165 systemd-journald[193]: Journal stopped Dec 13 01:18:17.149205 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Dec 13 01:18:17.149269 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:18:17.149289 kernel: SELinux: policy capability open_perms=1 Dec 13 01:18:17.149300 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:18:17.149311 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:18:17.149326 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:18:17.149338 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:18:17.149349 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:18:17.149370 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:18:17.149381 kernel: audit: type=1403 audit(1734052696.452:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:18:17.149396 systemd[1]: Successfully loaded SELinux policy in 38.652ms. Dec 13 01:18:17.149420 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.344ms. Dec 13 01:18:17.149433 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:18:17.149445 systemd[1]: Detected virtualization kvm. Dec 13 01:18:17.149465 systemd[1]: Detected architecture x86-64. Dec 13 01:18:17.149477 systemd[1]: Detected first boot. Dec 13 01:18:17.149490 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:18:17.149503 zram_generator::config[1073]: No configuration found. Dec 13 01:18:17.149516 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:18:17.149532 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:18:17.149554 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:18:17.149567 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:18:17.149579 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:18:17.149590 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:18:17.149602 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:18:17.149614 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:18:17.149627 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:18:17.149641 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:18:17.149653 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:18:17.149665 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:18:17.149677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:18:17.149689 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:18:17.149701 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:18:17.149713 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:18:17.149725 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:18:17.149737 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Dec 13 01:18:17.149751 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:18:17.149763 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:18:17.149775 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:18:17.149792 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:18:17.149804 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:18:17.149823 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:18:17.149835 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:18:17.149848 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:18:17.149863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:18:17.149875 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:18:17.149888 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:18:17.149900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:18:17.149912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:18:17.152069 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:18:17.152087 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:18:17.152099 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:18:17.152111 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:18:17.152127 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:18:17.152140 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:18:17.152152 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:18:17.152163 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:18:17.152175 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:18:17.152187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:18:17.152199 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:18:17.152211 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:18:17.152223 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:18:17.152238 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:18:17.152250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:18:17.152261 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:18:17.152273 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:18:17.152285 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:18:17.152297 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:18:17.152310 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Dec 13 01:18:17.152322 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:18:17.152336 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:18:17.152348 kernel: fuse: init (API version 7.39) Dec 13 01:18:17.152360 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:18:17.152371 kernel: loop: module loaded Dec 13 01:18:17.152383 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:18:17.152396 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:18:17.152426 systemd-journald[1158]: Collecting audit messages is disabled. Dec 13 01:18:17.152459 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:18:17.152471 systemd-journald[1158]: Journal started Dec 13 01:18:17.152493 systemd-journald[1158]: Runtime Journal (/run/log/journal/3f92efba5658491f8f1e7b98768077bb) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:18:17.156363 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:18:17.160666 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:18:17.161883 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:18:17.163178 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:18:17.164274 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:18:17.165480 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:18:17.166712 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:18:17.168149 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:18:17.170458 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:18:17.170680 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:18:17.172194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:18:17.172404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:18:17.173883 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:18:17.174116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:18:17.176445 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:18:17.176700 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:18:17.178285 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:18:17.178516 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:18:17.180045 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:18:17.181862 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:18:17.183744 kernel: ACPI: bus type drm_connector registered Dec 13 01:18:17.185376 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:18:17.187173 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:18:17.187395 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:18:17.188986 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
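With systemd-journald up and the runtime journal sized (6.0M used of a 48.4M cap), everything above becomes queryable after boot; for example:

    journalctl -b -u ignition-files.service   # replay the Ignition files stage from this boot
    journalctl --disk-usage                   # current journal footprint, cf. the sizes logged above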
Dec 13 01:18:17.204051 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:18:17.214125 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:18:17.216609 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:18:17.217790 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:18:17.221114 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:18:17.224741 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:18:17.227173 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:18:17.231129 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:18:17.233575 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:18:17.238166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:18:17.239560 systemd-journald[1158]: Time spent on flushing to /var/log/journal/3f92efba5658491f8f1e7b98768077bb is 12.650ms for 944 entries. Dec 13 01:18:17.239560 systemd-journald[1158]: System Journal (/var/log/journal/3f92efba5658491f8f1e7b98768077bb) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:18:17.324689 systemd-journald[1158]: Received client request to flush runtime journal. Dec 13 01:18:17.242377 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:18:17.246718 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:18:17.250257 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:18:17.253944 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:18:17.261134 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:18:17.309173 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:18:17.324641 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:18:17.326773 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:18:17.332272 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:18:17.341407 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Dec 13 01:18:17.341426 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Dec 13 01:18:17.342529 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:18:17.349575 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:18:17.356230 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:18:17.379547 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:18:17.393126 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:18:17.409706 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Dec 13 01:18:17.409726 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. 
Dec 13 01:18:17.415423 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:18:17.913690 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:18:17.923318 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:18:17.947880 systemd-udevd[1238]: Using default interface naming scheme 'v255'. Dec 13 01:18:17.963332 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:18:17.972148 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:18:17.984146 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:18:17.998282 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Dec 13 01:18:18.012219 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1241) Dec 13 01:18:18.014042 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1241) Dec 13 01:18:18.031035 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1250) Dec 13 01:18:18.034535 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:18:18.068118 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 01:18:18.082032 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:18:18.082357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:18:18.100881 systemd-networkd[1242]: lo: Link UP Dec 13 01:18:18.101042 systemd-networkd[1242]: lo: Gained carrier Dec 13 01:18:18.103511 systemd-networkd[1242]: Enumeration completed Dec 13 01:18:18.104201 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:18:18.104738 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:18:18.105230 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:18:18.108497 systemd-networkd[1242]: eth0: Link UP Dec 13 01:18:18.108550 systemd-networkd[1242]: eth0: Gained carrier Dec 13 01:18:18.108609 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:18:18.113748 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:18:18.117067 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:18:18.129118 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:18:18.129298 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:18:18.129441 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 01:18:18.129668 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:18:18.142116 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:18:18.179284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
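eth0 came up because it matched the stock catch-all unit named in the log, /usr/lib/systemd/network/zz-default.network, which has approximately this shape (a sketch, not the verbatim Flatcar file):

    [Match]
    Name=*

    [Network]
    DHCP=yes

hence the DHCPv4 lease of 10.0.0.160/16 acquired a few entries later; networkctl status eth0 shows the same lease once the system is up.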
Dec 13 01:18:18.208243 kernel: kvm_amd: TSC scaling supported Dec 13 01:18:18.208275 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:18:18.208289 kernel: kvm_amd: Nested Paging enabled Dec 13 01:18:18.209514 kernel: kvm_amd: LBR virtualization supported Dec 13 01:18:18.209548 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:18:18.210268 kernel: kvm_amd: Virtual GIF supported Dec 13 01:18:18.231044 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:18:18.264557 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:18:18.270772 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:18:18.285301 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:18:18.294702 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:18:18.325325 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:18:18.326897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:18:18.339132 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:18:18.345685 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:18:18.379603 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:18:18.381366 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:18:18.382884 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:18:18.382910 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:18:18.384110 systemd[1]: Reached target machines.target - Containers. Dec 13 01:18:18.386310 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:18:18.399178 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:18:18.401674 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:18:18.402867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:18:18.403819 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:18:18.406272 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:18:18.408762 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:18:18.410705 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:18:18.420718 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:18:18.426045 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:18:18.434155 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:18:18.434996 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Dec 13 01:18:18.445035 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:18:18.473048 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:18:18.501036 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 01:18:18.539033 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:18:18.550037 kernel: loop4: detected capacity change from 0 to 211296 Dec 13 01:18:18.556033 kernel: loop5: detected capacity change from 0 to 140768 Dec 13 01:18:18.563393 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:18:18.563989 (sd-merge)[1308]: Merged extensions into '/usr'. Dec 13 01:18:18.568100 systemd[1]: Reloading requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:18:18.568117 systemd[1]: Reloading... Dec 13 01:18:18.617072 zram_generator::config[1336]: No configuration found. Dec 13 01:18:18.644115 ldconfig[1293]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:18:18.747987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:18:18.811959 systemd[1]: Reloading finished in 243 ms. Dec 13 01:18:18.832187 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:18:18.833788 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:18:18.847204 systemd[1]: Starting ensure-sysext.service... Dec 13 01:18:18.849393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:18:18.852900 systemd[1]: Reloading requested from client PID 1380 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:18:18.852914 systemd[1]: Reloading... Dec 13 01:18:18.875255 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:18:18.875658 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:18:18.876682 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:18:18.876993 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Dec 13 01:18:18.877250 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Dec 13 01:18:18.880802 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:18:18.880817 systemd-tmpfiles[1381]: Skipping /boot Dec 13 01:18:18.895389 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:18:18.895408 systemd-tmpfiles[1381]: Skipping /boot Dec 13 01:18:18.900030 zram_generator::config[1409]: No configuration found. Dec 13 01:18:19.017733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:18:19.081069 systemd[1]: Reloading finished in 227 ms. Dec 13 01:18:19.099902 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:18:19.116769 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:18:19.119811 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
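The (sd-merge) entries record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is what triggers the service reload that follows. Once booted, the merge can be inspected and redone by hand:

    systemd-sysext status    # lists merged extension images and their hierarchies
    ls -l /etc/extensions    # the kubernetes.raw symlink written by Ignition earlier
    systemd-sysext refresh   # re-merge after adding or removing an image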
Dec 13 01:18:19.122286 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:18:19.126168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:18:19.132470 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:18:19.137507 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:18:19.137680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:18:19.139053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:18:19.141868 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:18:19.146875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:18:19.149223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:18:19.149339 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:18:19.150141 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:18:19.150364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:18:19.150752 systemd-networkd[1242]: eth0: Gained IPv6LL Dec 13 01:18:19.157871 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:18:19.159214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:18:19.161334 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:18:19.165661 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:18:19.167769 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:18:19.169752 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:18:19.170097 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:18:19.179628 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:18:19.179908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:18:19.186253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:18:19.187200 augenrules[1492]: No rules Dec 13 01:18:19.191207 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:18:19.199444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:18:19.202913 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:18:19.205205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:18:19.208247 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:18:19.209349 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:18:19.210672 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Dec 13 01:18:19.212676 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:18:19.214462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:18:19.214671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:18:19.216579 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:18:19.216793 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:18:19.218482 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:18:19.218685 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:18:19.220513 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:18:19.220746 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:18:19.222986 systemd-resolved[1459]: Positive Trust Anchors: Dec 13 01:18:19.223005 systemd-resolved[1459]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:18:19.223056 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:18:19.226213 systemd[1]: Finished ensure-sysext.service. Dec 13 01:18:19.227673 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:18:19.227852 systemd-resolved[1459]: Defaulting to hostname 'linux'. Dec 13 01:18:19.230544 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:18:19.236190 systemd[1]: Reached target network.target - Network. Dec 13 01:18:19.237244 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:18:19.238363 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:18:19.239651 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:18:19.239712 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:18:19.253158 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:18:19.254302 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:18:19.312976 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:18:19.313784 systemd-timesyncd[1518]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:18:19.313826 systemd-timesyncd[1518]: Initial clock synchronization to Fri 2024-12-13 01:18:19.088111 UTC. Dec 13 01:18:19.314601 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:18:19.315791 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
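Both of the services configured above can be checked interactively once logged in; for instance:

    resolvectl status              # current DNS servers and per-link resolver settings
    timedatectl timesync-status    # the 10.0.0.1 NTP peer and clock offset reported above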
Dec 13 01:18:19.317107 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:18:19.318432 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:18:19.319725 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:18:19.319750 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:18:19.320696 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:18:19.321929 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:18:19.323147 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:18:19.324430 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:18:19.326130 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:18:19.329035 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:18:19.331283 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:18:19.340229 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:18:19.341361 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:18:19.342346 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:18:19.343468 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:18:19.343504 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:18:19.343533 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:18:19.344811 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:18:19.346980 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:18:19.349098 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:18:19.352105 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:18:19.357229 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:18:19.358476 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:18:19.360658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:18:19.364058 jq[1526]: false Dec 13 01:18:19.366528 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:18:19.370147 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:18:19.375094 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:18:19.376386 dbus-daemon[1524]: [system] SELinux support is enabled Dec 13 01:18:19.378395 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 13 01:18:19.383055 extend-filesystems[1528]: Found loop3 Dec 13 01:18:19.383055 extend-filesystems[1528]: Found loop4 Dec 13 01:18:19.383055 extend-filesystems[1528]: Found loop5 Dec 13 01:18:19.383055 extend-filesystems[1528]: Found sr0 Dec 13 01:18:19.383055 extend-filesystems[1528]: Found vda Dec 13 01:18:19.383055 extend-filesystems[1528]: Found vda1 Dec 13 01:18:19.383055 extend-filesystems[1528]: Found vda2 Dec 13 01:18:19.383055 extend-filesystems[1528]: Found vda3 Dec 13 01:18:19.383055 extend-filesystems[1528]: Found usr Dec 13 01:18:19.383055 extend-filesystems[1528]: Found vda4 Dec 13 01:18:19.383055 extend-filesystems[1528]: Found vda6 Dec 13 01:18:19.383055 extend-filesystems[1528]: Found vda7 Dec 13 01:18:19.383055 extend-filesystems[1528]: Found vda9 Dec 13 01:18:19.383055 extend-filesystems[1528]: Checking size of /dev/vda9 Dec 13 01:18:19.401843 extend-filesystems[1528]: Resized partition /dev/vda9 Dec 13 01:18:19.384341 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:18:19.402173 extend-filesystems[1556]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:18:19.394840 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:18:19.402807 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:18:19.404724 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:18:19.406049 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:18:19.409396 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:18:19.410261 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:18:19.412023 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1245) Dec 13 01:18:19.417440 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:18:19.419910 jq[1559]: true Dec 13 01:18:19.420117 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:18:19.422915 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:18:19.423266 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:18:19.427066 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:18:19.437591 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:18:19.437905 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:18:19.442427 update_engine[1558]: I20241213 01:18:19.442344 1558 main.cc:92] Flatcar Update Engine starting Dec 13 01:18:19.445023 update_engine[1558]: I20241213 01:18:19.443653 1558 update_check_scheduler.cc:74] Next update check in 5m9s Dec 13 01:18:19.450868 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:18:19.462445 jq[1570]: true Dec 13 01:18:19.463242 (ntainerd)[1574]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:18:19.467742 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:18:19.468104 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Dec 13 01:18:19.481240 extend-filesystems[1556]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 01:18:19.481240 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:18:19.481240 extend-filesystems[1556]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 01:18:19.489906 extend-filesystems[1528]: Resized filesystem in /dev/vda9
Dec 13 01:18:19.485101 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:18:19.485444 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:18:19.502740 tar[1569]: linux-amd64/helm
Dec 13 01:18:19.510131 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:18:19.511521 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:18:19.511624 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:18:19.511645 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:18:19.512934 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:18:19.512948 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:18:19.514842 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:18:19.516137 systemd-logind[1549]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 01:18:19.516176 systemd-logind[1549]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:18:19.517037 systemd-logind[1549]: New seat seat0.
Dec 13 01:18:19.522193 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:18:19.523491 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:18:19.547664 bash[1608]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:18:19.550578 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:18:19.552787 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 01:18:19.574398 locksmithd[1605]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:18:19.628789 sshd_keygen[1568]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:18:19.653372 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:18:19.664330 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:18:19.672451 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:18:19.672802 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:18:19.683230 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:18:19.695585 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:18:19.708409 containerd[1574]: time="2024-12-13T01:18:19.708324596Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:18:19.710340 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:18:19.714546 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:18:19.717653 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:18:19.732536 containerd[1574]: time="2024-12-13T01:18:19.732499983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:18:19.734066 containerd[1574]: time="2024-12-13T01:18:19.734033950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:18:19.734066 containerd[1574]: time="2024-12-13T01:18:19.734064397Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:18:19.734118 containerd[1574]: time="2024-12-13T01:18:19.734081109Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:18:19.734364 containerd[1574]: time="2024-12-13T01:18:19.734272267Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:18:19.734364 containerd[1574]: time="2024-12-13T01:18:19.734292936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:18:19.734364 containerd[1574]: time="2024-12-13T01:18:19.734358970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:18:19.734449 containerd[1574]: time="2024-12-13T01:18:19.734373006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:18:19.734849 containerd[1574]: time="2024-12-13T01:18:19.734632703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:18:19.734849 containerd[1574]: time="2024-12-13T01:18:19.734652400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:18:19.734849 containerd[1574]: time="2024-12-13T01:18:19.734665925Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:18:19.734849 containerd[1574]: time="2024-12-13T01:18:19.734675293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:18:19.734849 containerd[1574]: time="2024-12-13T01:18:19.734766734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:18:19.735183 containerd[1574]: time="2024-12-13T01:18:19.734989652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:18:19.735225 containerd[1574]: time="2024-12-13T01:18:19.735181522Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:18:19.735225 containerd[1574]: time="2024-12-13T01:18:19.735196490Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:18:19.735314 containerd[1574]: time="2024-12-13T01:18:19.735293703Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:18:19.735373 containerd[1574]: time="2024-12-13T01:18:19.735355448Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:18:19.741344 containerd[1574]: time="2024-12-13T01:18:19.741318571Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:18:19.741384 containerd[1574]: time="2024-12-13T01:18:19.741358997Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:18:19.741384 containerd[1574]: time="2024-12-13T01:18:19.741374877Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:18:19.741441 containerd[1574]: time="2024-12-13T01:18:19.741391689Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:18:19.741441 containerd[1574]: time="2024-12-13T01:18:19.741414652Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:18:19.741861 containerd[1574]: time="2024-12-13T01:18:19.741535909Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:18:19.741861 containerd[1574]: time="2024-12-13T01:18:19.741812397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:18:19.741937 containerd[1574]: time="2024-12-13T01:18:19.741914439Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:18:19.741965 containerd[1574]: time="2024-12-13T01:18:19.741936600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:18:19.741965 containerd[1574]: time="2024-12-13T01:18:19.741950597Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:18:19.742022 containerd[1574]: time="2024-12-13T01:18:19.741981555Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:18:19.742022 containerd[1574]: time="2024-12-13T01:18:19.741996753Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:18:19.742065 containerd[1574]: time="2024-12-13T01:18:19.742029204Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:18:19.742065 containerd[1574]: time="2024-12-13T01:18:19.742045925Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:18:19.742065 containerd[1574]: time="2024-12-13T01:18:19.742060964Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:18:19.742123 containerd[1574]: time="2024-12-13T01:18:19.742074469Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:18:19.742123 containerd[1574]: time="2024-12-13T01:18:19.742088495Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:18:19.742123 containerd[1574]: time="2024-12-13T01:18:19.742099796Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:18:19.742123 containerd[1574]: time="2024-12-13T01:18:19.742119924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742190 containerd[1574]: time="2024-12-13T01:18:19.742135363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742190 containerd[1574]: time="2024-12-13T01:18:19.742148087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742190 containerd[1574]: time="2024-12-13T01:18:19.742161352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742190 containerd[1574]: time="2024-12-13T01:18:19.742174146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742190 containerd[1574]: time="2024-12-13T01:18:19.742187100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742284 containerd[1574]: time="2024-12-13T01:18:19.742200065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742284 containerd[1574]: time="2024-12-13T01:18:19.742213650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742284 containerd[1574]: time="2024-12-13T01:18:19.742227215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742284 containerd[1574]: time="2024-12-13T01:18:19.742249367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742284 containerd[1574]: time="2024-12-13T01:18:19.742261450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742284 containerd[1574]: time="2024-12-13T01:18:19.742273953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742410 containerd[1574]: time="2024-12-13T01:18:19.742287368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742410 containerd[1574]: time="2024-12-13T01:18:19.742303949Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:18:19.742410 containerd[1574]: time="2024-12-13T01:18:19.742324448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742410 containerd[1574]: time="2024-12-13T01:18:19.742336731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742410 containerd[1574]: time="2024-12-13T01:18:19.742348112Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:18:19.742410 containerd[1574]: time="2024-12-13T01:18:19.742410269Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:18:19.742529 containerd[1574]: time="2024-12-13T01:18:19.742427361Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:18:19.742529 containerd[1574]: time="2024-12-13T01:18:19.742438872Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:18:19.742529 containerd[1574]: time="2024-12-13T01:18:19.742451857Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:18:19.742529 containerd[1574]: time="2024-12-13T01:18:19.742462116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742529 containerd[1574]: time="2024-12-13T01:18:19.742473828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:18:19.742529 containerd[1574]: time="2024-12-13T01:18:19.742484157Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:18:19.742529 containerd[1574]: time="2024-12-13T01:18:19.742495118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:18:19.742791 containerd[1574]: time="2024-12-13T01:18:19.742736591Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:18:19.742791 containerd[1574]: time="2024-12-13T01:18:19.742788929Z" level=info msg="Connect containerd service"
Dec 13 01:18:19.742938 containerd[1574]: time="2024-12-13T01:18:19.742818835Z" level=info msg="using legacy CRI server"
Dec 13 01:18:19.742938 containerd[1574]: time="2024-12-13T01:18:19.742826008Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:18:19.742938 containerd[1574]: time="2024-12-13T01:18:19.742903514Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:18:19.743956 containerd[1574]: time="2024-12-13T01:18:19.743414482Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:18:19.743956 containerd[1574]: time="2024-12-13T01:18:19.743577007Z" level=info msg="Start subscribing containerd event"
Dec 13 01:18:19.743956 containerd[1574]: time="2024-12-13T01:18:19.743647168Z" level=info msg="Start recovering state"
Dec 13 01:18:19.743956 containerd[1574]: time="2024-12-13T01:18:19.743745192Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:18:19.743956 containerd[1574]: time="2024-12-13T01:18:19.743753237Z" level=info msg="Start event monitor"
Dec 13 01:18:19.743956 containerd[1574]: time="2024-12-13T01:18:19.743779176Z" level=info msg="Start snapshots syncer"
Dec 13 01:18:19.743956 containerd[1574]: time="2024-12-13T01:18:19.743789666Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:18:19.743956 containerd[1574]: time="2024-12-13T01:18:19.743799213Z" level=info msg="Start streaming server"
Dec 13 01:18:19.743956 containerd[1574]: time="2024-12-13T01:18:19.743800486Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:18:19.744001 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:18:19.745670 containerd[1574]: time="2024-12-13T01:18:19.745639124Z" level=info msg="containerd successfully booted in 0.039818s"
Dec 13 01:18:19.900865 tar[1569]: linux-amd64/LICENSE
Dec 13 01:18:19.900933 tar[1569]: linux-amd64/README.md
Dec 13 01:18:19.913779 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:18:20.116745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:18:20.118329 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:18:20.119621 systemd[1]: Startup finished in 6.729s (kernel) + 3.704s (userspace) = 10.434s.
Dec 13 01:18:20.147482 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:18:20.582376 kubelet[1657]: E1213 01:18:20.582270 1657 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:18:20.586894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:18:20.587198 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:18:28.165978 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:18:28.177213 systemd[1]: Started sshd@0-10.0.0.160:22-10.0.0.1:54978.service - OpenSSH per-connection server daemon (10.0.0.1:54978).
Dec 13 01:18:28.215533 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 54978 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:28.217436 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:28.225625 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:18:28.240201 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:18:28.241852 systemd-logind[1549]: New session 1 of user core.
Dec 13 01:18:28.253536 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:18:28.256148 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:18:28.264649 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:18:28.382377 systemd[1676]: Queued start job for default target default.target.
Dec 13 01:18:28.382790 systemd[1676]: Created slice app.slice - User Application Slice.
Dec 13 01:18:28.382809 systemd[1676]: Reached target paths.target - Paths.
Dec 13 01:18:28.382822 systemd[1676]: Reached target timers.target - Timers.
Dec 13 01:18:28.389159 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:18:28.397549 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:18:28.397614 systemd[1676]: Reached target sockets.target - Sockets.
Dec 13 01:18:28.397627 systemd[1676]: Reached target basic.target - Basic System.
Dec 13 01:18:28.397663 systemd[1676]: Reached target default.target - Main User Target.
Dec 13 01:18:28.397694 systemd[1676]: Startup finished in 126ms.
Dec 13 01:18:28.398440 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:18:28.399909 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:18:28.463332 systemd[1]: Started sshd@1-10.0.0.160:22-10.0.0.1:54980.service - OpenSSH per-connection server daemon (10.0.0.1:54980).
Dec 13 01:18:28.495729 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 54980 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:28.497343 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:28.501819 systemd-logind[1549]: New session 2 of user core.
Dec 13 01:18:28.511261 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:18:28.566392 sshd[1689]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:28.580214 systemd[1]: Started sshd@2-10.0.0.160:22-10.0.0.1:54982.service - OpenSSH per-connection server daemon (10.0.0.1:54982).
Dec 13 01:18:28.580658 systemd[1]: sshd@1-10.0.0.160:22-10.0.0.1:54980.service: Deactivated successfully.
Dec 13 01:18:28.582943 systemd-logind[1549]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:18:28.583833 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:18:28.584808 systemd-logind[1549]: Removed session 2.
Dec 13 01:18:28.608340 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 54982 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:28.609759 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:28.613355 systemd-logind[1549]: New session 3 of user core.
Dec 13 01:18:28.623244 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:18:28.671085 sshd[1694]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:28.679209 systemd[1]: Started sshd@3-10.0.0.160:22-10.0.0.1:54992.service - OpenSSH per-connection server daemon (10.0.0.1:54992).
Dec 13 01:18:28.679650 systemd[1]: sshd@2-10.0.0.160:22-10.0.0.1:54982.service: Deactivated successfully.
Dec 13 01:18:28.682112 systemd-logind[1549]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:18:28.683206 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:18:28.684187 systemd-logind[1549]: Removed session 3.
Dec 13 01:18:28.706867 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 54992 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:28.708236 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:28.711835 systemd-logind[1549]: New session 4 of user core.
Dec 13 01:18:28.721259 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:18:28.773501 sshd[1702]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:28.790225 systemd[1]: Started sshd@4-10.0.0.160:22-10.0.0.1:54996.service - OpenSSH per-connection server daemon (10.0.0.1:54996).
Dec 13 01:18:28.790690 systemd[1]: sshd@3-10.0.0.160:22-10.0.0.1:54992.service: Deactivated successfully.
Dec 13 01:18:28.793232 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:18:28.794376 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:18:28.794988 systemd-logind[1549]: Removed session 4.
Dec 13 01:18:28.817709 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 54996 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:28.819167 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:28.822652 systemd-logind[1549]: New session 5 of user core.
Dec 13 01:18:28.832246 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:18:28.889555 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:18:28.889901 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:18:28.903514 sudo[1717]: pam_unix(sudo:session): session closed for user root
Dec 13 01:18:28.905704 sshd[1710]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:28.916217 systemd[1]: Started sshd@5-10.0.0.160:22-10.0.0.1:55002.service - OpenSSH per-connection server daemon (10.0.0.1:55002).
Dec 13 01:18:28.916692 systemd[1]: sshd@4-10.0.0.160:22-10.0.0.1:54996.service: Deactivated successfully.
Dec 13 01:18:28.918996 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:18:28.920178 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:18:28.921288 systemd-logind[1549]: Removed session 5.
Dec 13 01:18:28.944028 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 55002 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:28.945334 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:28.949189 systemd-logind[1549]: New session 6 of user core.
Dec 13 01:18:28.959244 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:18:29.011855 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:18:29.012220 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:18:29.015904 sudo[1727]: pam_unix(sudo:session): session closed for user root
Dec 13 01:18:29.022030 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:18:29.022449 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:18:29.040217 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:18:29.041903 auditctl[1730]: No rules
Dec 13 01:18:29.043299 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:18:29.043648 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:18:29.045627 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:18:29.076090 augenrules[1749]: No rules
Dec 13 01:18:29.077853 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:18:29.079192 sudo[1726]: pam_unix(sudo:session): session closed for user root
Dec 13 01:18:29.080976 sshd[1719]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:29.091341 systemd[1]: Started sshd@6-10.0.0.160:22-10.0.0.1:55016.service - OpenSSH per-connection server daemon (10.0.0.1:55016).
Dec 13 01:18:29.091785 systemd[1]: sshd@5-10.0.0.160:22-10.0.0.1:55002.service: Deactivated successfully.
Dec 13 01:18:29.094266 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:18:29.095184 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:18:29.096153 systemd-logind[1549]: Removed session 6.
Dec 13 01:18:29.119117 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 55016 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:29.120762 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:29.124537 systemd-logind[1549]: New session 7 of user core.
Dec 13 01:18:29.138253 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:18:29.190972 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:18:29.191340 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:18:29.645218 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:18:29.645541 (dockerd)[1781]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:18:30.347856 dockerd[1781]: time="2024-12-13T01:18:30.347796419Z" level=info msg="Starting up"
Dec 13 01:18:30.837385 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:18:30.847163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:18:30.993796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:18:30.999381 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:18:31.160749 kubelet[1817]: E1213 01:18:31.160565 1817 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:18:31.168260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:18:31.168545 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:18:31.210894 dockerd[1781]: time="2024-12-13T01:18:31.210824604Z" level=info msg="Loading containers: start."
Dec 13 01:18:31.316035 kernel: Initializing XFRM netlink socket
Dec 13 01:18:31.391681 systemd-networkd[1242]: docker0: Link UP
Dec 13 01:18:31.413850 dockerd[1781]: time="2024-12-13T01:18:31.413765621Z" level=info msg="Loading containers: done."
Dec 13 01:18:31.428426 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3614465304-merged.mount: Deactivated successfully.
Dec 13 01:18:31.428972 dockerd[1781]: time="2024-12-13T01:18:31.428929177Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:18:31.429135 dockerd[1781]: time="2024-12-13T01:18:31.429105667Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:18:31.429237 dockerd[1781]: time="2024-12-13T01:18:31.429219329Z" level=info msg="Daemon has completed initialization"
Dec 13 01:18:31.467951 dockerd[1781]: time="2024-12-13T01:18:31.467871948Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:18:31.468209 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:18:32.171191 containerd[1574]: time="2024-12-13T01:18:32.171141852Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 01:18:32.784666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472630520.mount: Deactivated successfully.
Dec 13 01:18:33.787265 containerd[1574]: time="2024-12-13T01:18:33.787208885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:33.788101 containerd[1574]: time="2024-12-13T01:18:33.788028137Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254"
Dec 13 01:18:33.789414 containerd[1574]: time="2024-12-13T01:18:33.789365697Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:33.792051 containerd[1574]: time="2024-12-13T01:18:33.791999835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:33.793116 containerd[1574]: time="2024-12-13T01:18:33.793070436Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.621889367s"
Dec 13 01:18:33.793116 containerd[1574]: time="2024-12-13T01:18:33.793105985Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 01:18:33.815433 containerd[1574]: time="2024-12-13T01:18:33.815392999Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:18:35.313837 containerd[1574]: time="2024-12-13T01:18:35.313768374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:35.314542 containerd[1574]: time="2024-12-13T01:18:35.314509458Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Dec 13 01:18:35.315663 containerd[1574]: time="2024-12-13T01:18:35.315633630Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:35.318516 containerd[1574]: time="2024-12-13T01:18:35.318454206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:35.319546 containerd[1574]: time="2024-12-13T01:18:35.319503015Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.504077327s"
Dec 13 01:18:35.319586 containerd[1574]: time="2024-12-13T01:18:35.319546920Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:18:35.343290 containerd[1574]: time="2024-12-13T01:18:35.343252358Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:18:36.534178 containerd[1574]: time="2024-12-13T01:18:36.534120948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:36.534963 containerd[1574]: time="2024-12-13T01:18:36.534874447Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Dec 13 01:18:36.536219 containerd[1574]: time="2024-12-13T01:18:36.536185665Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:36.539106 containerd[1574]: time="2024-12-13T01:18:36.539070166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:36.540216 containerd[1574]: time="2024-12-13T01:18:36.540183432Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.196894498s"
Dec 13 01:18:36.540277 containerd[1574]: time="2024-12-13T01:18:36.540219729Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 01:18:36.561733 containerd[1574]: time="2024-12-13T01:18:36.561710904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:18:37.567659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764974830.mount: Deactivated successfully.
Dec 13 01:18:38.236875 containerd[1574]: time="2024-12-13T01:18:38.236789255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:38.237831 containerd[1574]: time="2024-12-13T01:18:38.237785203Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Dec 13 01:18:38.239084 containerd[1574]: time="2024-12-13T01:18:38.239056504Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:38.241307 containerd[1574]: time="2024-12-13T01:18:38.241258636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:38.241836 containerd[1574]: time="2024-12-13T01:18:38.241805277Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.680067118s"
Dec 13 01:18:38.241876 containerd[1574]: time="2024-12-13T01:18:38.241834797Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:18:38.264792 containerd[1574]: time="2024-12-13T01:18:38.264741832Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:18:38.778458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount745177601.mount: Deactivated successfully.
Dec 13 01:18:39.395243 containerd[1574]: time="2024-12-13T01:18:39.395191522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:39.396042 containerd[1574]: time="2024-12-13T01:18:39.395965176Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 01:18:39.397418 containerd[1574]: time="2024-12-13T01:18:39.397373606Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:39.400173 containerd[1574]: time="2024-12-13T01:18:39.400131013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:39.401090 containerd[1574]: time="2024-12-13T01:18:39.401034222Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.136246067s"
Dec 13 01:18:39.401130 containerd[1574]: time="2024-12-13T01:18:39.401090952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:18:39.422772 containerd[1574]: time="2024-12-13T01:18:39.422653587Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:18:39.918682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666950633.mount: Deactivated successfully.
Dec 13 01:18:39.923624 containerd[1574]: time="2024-12-13T01:18:39.923587439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:39.924320 containerd[1574]: time="2024-12-13T01:18:39.924263679Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Dec 13 01:18:39.925512 containerd[1574]: time="2024-12-13T01:18:39.925482214Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:39.927542 containerd[1574]: time="2024-12-13T01:18:39.927506923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:39.928248 containerd[1574]: time="2024-12-13T01:18:39.928212242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 505.526149ms"
Dec 13 01:18:39.928291 containerd[1574]: time="2024-12-13T01:18:39.928248089Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:18:39.949369 containerd[1574]: time="2024-12-13T01:18:39.949334314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:18:40.473165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2451322880.mount: Deactivated successfully.
Dec 13 01:18:41.418685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:18:41.429147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:18:41.564710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:18:41.569320 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:18:41.644649 kubelet[2166]: E1213 01:18:41.644519 2166 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:18:41.649606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:18:41.649856 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:18:42.398977 containerd[1574]: time="2024-12-13T01:18:42.398915260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:42.399905 containerd[1574]: time="2024-12-13T01:18:42.399839593Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Dec 13 01:18:42.401225 containerd[1574]: time="2024-12-13T01:18:42.401179464Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:42.403809 containerd[1574]: time="2024-12-13T01:18:42.403778826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:18:42.405000 containerd[1574]: time="2024-12-13T01:18:42.404970281Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.455607421s"
Dec 13 01:18:42.405054 containerd[1574]: time="2024-12-13T01:18:42.405002263Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 01:18:45.140916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:18:45.153195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:18:45.172991 systemd[1]: Reloading requested from client PID 2269 ('systemctl') (unit session-7.scope)...
Dec 13 01:18:45.173005 systemd[1]: Reloading...
Dec 13 01:18:45.255318 zram_generator::config[2311]: No configuration found.
Dec 13 01:18:45.449096 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:18:45.520455 systemd[1]: Reloading finished in 347 ms.
Dec 13 01:18:45.574485 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:18:45.574586 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:18:45.574932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:18:45.576722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:18:45.716243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:18:45.720718 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:18:45.762901 kubelet[2368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:18:45.762901 kubelet[2368]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:18:45.762901 kubelet[2368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:18:45.763793 kubelet[2368]: I1213 01:18:45.763739 2368 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:18:45.982833 kubelet[2368]: I1213 01:18:45.982699 2368 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:18:45.982833 kubelet[2368]: I1213 01:18:45.982726 2368 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:18:45.982990 kubelet[2368]: I1213 01:18:45.982926 2368 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:18:45.998388 kubelet[2368]: E1213 01:18:45.998342 2368 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.160:6443: connect: connection refused
Dec 13 01:18:45.999179 kubelet[2368]: I1213 01:18:45.999163 2368 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:18:46.012816 kubelet[2368]: I1213 01:18:46.012789 2368 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:18:46.014001 kubelet[2368]: I1213 01:18:46.013974 2368 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:18:46.014179 kubelet[2368]: I1213 01:18:46.014154 2368 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:18:46.014266 kubelet[2368]: I1213 01:18:46.014184 2368 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:18:46.014266 kubelet[2368]: I1213 01:18:46.014194 2368 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:18:46.014312 kubelet[2368]: I1213 01:18:46.014296 2368 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:18:46.014409 kubelet[2368]: I1213 01:18:46.014387 2368 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:18:46.014409 kubelet[2368]: I1213 01:18:46.014404 2368 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:18:46.014460 kubelet[2368]: I1213 01:18:46.014443 2368 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:18:46.014460 kubelet[2368]: I1213 01:18:46.014459 2368 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:18:46.015937 kubelet[2368]: I1213 01:18:46.015917 2368 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:18:46.016412 kubelet[2368]: W1213 01:18:46.016191 2368 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Dec 13 01:18:46.016412 kubelet[2368]: E1213 01:18:46.016248 2368 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Dec 13 01:18:46.016922 kubelet[2368]: W1213 01:18:46.016875 2368 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Dec 13 01:18:46.016922 kubelet[2368]: E1213 01:18:46.016925 2368 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Dec 13 01:18:46.018255 kubelet[2368]: I1213 01:18:46.018234 2368 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:18:46.019685 kubelet[2368]: W1213 01:18:46.019661 2368 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:18:46.020372 kubelet[2368]: I1213 01:18:46.020225 2368 server.go:1256] "Started kubelet"
Dec 13 01:18:46.020975 kubelet[2368]: I1213 01:18:46.020443 2368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:18:46.020975 kubelet[2368]: I1213 01:18:46.020559 2368 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:18:46.020975 kubelet[2368]: I1213 01:18:46.020797 2368 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:18:46.021399 kubelet[2368]: I1213 01:18:46.021368 2368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:18:46.021399 kubelet[2368]: I1213 01:18:46.021404 2368 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:18:46.024119 kubelet[2368]: E1213 01:18:46.023468 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:18:46.024119 kubelet[2368]: I1213 01:18:46.023504 2368 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:18:46.024119 kubelet[2368]: I1213 01:18:46.023572 2368 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:18:46.024119 kubelet[2368]: I1213 01:18:46.023614 2368 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:18:46.024119 kubelet[2368]: W1213 01:18:46.023894 2368 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Dec 13 01:18:46.024119 kubelet[2368]: E1213 01:18:46.023929 2368 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Dec 13 01:18:46.025252 kubelet[2368]: I1213 01:18:46.024655 2368 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:18:46.025252 kubelet[2368]: I1213 01:18:46.024720 2368 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:18:46.025252 kubelet[2368]: E1213 01:18:46.024788 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="200ms"
Dec 13 01:18:46.025807 kubelet[2368]: E1213 01:18:46.025788 2368 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.160:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.160:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181097c4543b7735 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:18:46.020208437 +0000 UTC m=+0.295532117,LastTimestamp:2024-12-13 01:18:46.020208437 +0000 UTC m=+0.295532117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 01:18:46.026082 kubelet[2368]: E1213 01:18:46.026059 2368 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:18:46.026082 kubelet[2368]: I1213 01:18:46.026074 2368 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:18:46.040403 kubelet[2368]: I1213 01:18:46.040372 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:18:46.041863 kubelet[2368]: I1213 01:18:46.041846 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:18:46.041910 kubelet[2368]: I1213 01:18:46.041883 2368 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:18:46.041910 kubelet[2368]: I1213 01:18:46.041904 2368 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:18:46.043775 kubelet[2368]: E1213 01:18:46.043093 2368 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:18:46.043775 kubelet[2368]: W1213 01:18:46.043493 2368 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Dec 13 01:18:46.043775 kubelet[2368]: E1213 01:18:46.043530 2368 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Dec 13 01:18:46.049450 kubelet[2368]: I1213 01:18:46.049382 2368 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:18:46.049450 kubelet[2368]: I1213 01:18:46.049398 2368 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:18:46.049450 kubelet[2368]: I1213 01:18:46.049413 2368 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:18:46.124378 kubelet[2368]: I1213 01:18:46.124354 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:18:46.124686 kubelet[2368]: E1213 01:18:46.124659 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost"
Dec 13 01:18:46.143898 kubelet[2368]: E1213 01:18:46.143865 2368 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 01:18:46.225396 kubelet[2368]: E1213 01:18:46.225368 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="400ms"
Dec 13 01:18:46.290624 kubelet[2368]: I1213 01:18:46.290527 2368 policy_none.go:49] "None policy: Start"
Dec 13 01:18:46.291447 kubelet[2368]: I1213 01:18:46.291423 2368 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:18:46.291447 kubelet[2368]: I1213 01:18:46.291448 2368 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:18:46.297932 kubelet[2368]: I1213 01:18:46.297902 2368
manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:18:46.298191 kubelet[2368]: I1213 01:18:46.298169 2368 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:18:46.299865 kubelet[2368]: E1213 01:18:46.299846 2368 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:18:46.325835 kubelet[2368]: I1213 01:18:46.325815 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:18:46.326099 kubelet[2368]: E1213 01:18:46.326078 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Dec 13 01:18:46.344188 kubelet[2368]: I1213 01:18:46.344158 2368 topology_manager.go:215] "Topology Admit Handler" podUID="4134eaa007e5c3b71a24793e5152836c" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:18:46.345098 kubelet[2368]: I1213 01:18:46.345076 2368 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:18:46.345896 kubelet[2368]: I1213 01:18:46.345859 2368 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:18:46.425765 kubelet[2368]: I1213 01:18:46.425743 2368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:18:46.425850 kubelet[2368]: I1213 01:18:46.425781 2368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:18:46.425850 kubelet[2368]: I1213 01:18:46.425801 2368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:18:46.425850 kubelet[2368]: I1213 01:18:46.425818 2368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:18:46.425850 kubelet[2368]: I1213 01:18:46.425838 2368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:18:46.425933 
kubelet[2368]: I1213 01:18:46.425867 2368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4134eaa007e5c3b71a24793e5152836c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4134eaa007e5c3b71a24793e5152836c\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:18:46.425933 kubelet[2368]: I1213 01:18:46.425919 2368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4134eaa007e5c3b71a24793e5152836c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4134eaa007e5c3b71a24793e5152836c\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:18:46.426027 kubelet[2368]: I1213 01:18:46.425989 2368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4134eaa007e5c3b71a24793e5152836c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4134eaa007e5c3b71a24793e5152836c\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:18:46.426052 kubelet[2368]: I1213 01:18:46.426041 2368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:18:46.626702 kubelet[2368]: E1213 01:18:46.626668 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="800ms" Dec 13 01:18:46.649957 kubelet[2368]: E1213 01:18:46.649926 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:46.650068 kubelet[2368]: E1213 01:18:46.650047 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:46.650461 containerd[1574]: time="2024-12-13T01:18:46.650418443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:18:46.650782 containerd[1574]: time="2024-12-13T01:18:46.650470195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4134eaa007e5c3b71a24793e5152836c,Namespace:kube-system,Attempt:0,}" Dec 13 01:18:46.651631 kubelet[2368]: E1213 01:18:46.651615 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:46.651918 containerd[1574]: time="2024-12-13T01:18:46.651883418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:18:46.727005 kubelet[2368]: I1213 01:18:46.726979 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:18:46.727256 kubelet[2368]: E1213 01:18:46.727233 2368 kubelet_node_status.go:96] "Unable to register node with 
API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Dec 13 01:18:46.827059 kubelet[2368]: W1213 01:18:46.826987 2368 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Dec 13 01:18:46.827059 kubelet[2368]: E1213 01:18:46.827057 2368 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Dec 13 01:18:47.169271 kubelet[2368]: W1213 01:18:47.169219 2368 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Dec 13 01:18:47.169271 kubelet[2368]: W1213 01:18:47.169254 2368 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Dec 13 01:18:47.169390 kubelet[2368]: E1213 01:18:47.169282 2368 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Dec 13 01:18:47.169390 kubelet[2368]: E1213 01:18:47.169285 2368 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Dec 13 01:18:47.191179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount680258033.mount: Deactivated successfully. 
Dec 13 01:18:47.195306 containerd[1574]: time="2024-12-13T01:18:47.195267446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:18:47.196965 containerd[1574]: time="2024-12-13T01:18:47.196907716Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:18:47.197819 containerd[1574]: time="2024-12-13T01:18:47.197779585Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:18:47.198618 containerd[1574]: time="2024-12-13T01:18:47.198584167Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:18:47.199444 containerd[1574]: time="2024-12-13T01:18:47.199413407Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:18:47.200227 containerd[1574]: time="2024-12-13T01:18:47.200187206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:18:47.201025 containerd[1574]: time="2024-12-13T01:18:47.200979576Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:18:47.202380 containerd[1574]: time="2024-12-13T01:18:47.202347704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:18:47.204194 containerd[1574]: time="2024-12-13T01:18:47.204163720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 552.20995ms" Dec 13 01:18:47.204761 containerd[1574]: time="2024-12-13T01:18:47.204733402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.23834ms" Dec 13 01:18:47.206868 containerd[1574]: time="2024-12-13T01:18:47.206843753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 556.308053ms" Dec 13 01:18:47.262719 kubelet[2368]: W1213 01:18:47.262645 2368 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Dec 13 01:18:47.262719 
kubelet[2368]: E1213 01:18:47.262711 2368 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Dec 13 01:18:47.336005 containerd[1574]: time="2024-12-13T01:18:47.335853976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:18:47.336005 containerd[1574]: time="2024-12-13T01:18:47.335918928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:18:47.336005 containerd[1574]: time="2024-12-13T01:18:47.335963738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:47.336680 containerd[1574]: time="2024-12-13T01:18:47.336585208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:18:47.336680 containerd[1574]: time="2024-12-13T01:18:47.336635563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:47.337723 containerd[1574]: time="2024-12-13T01:18:47.337613729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:18:47.337723 containerd[1574]: time="2024-12-13T01:18:47.337632580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:47.337905 containerd[1574]: time="2024-12-13T01:18:47.337740750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:47.340306 containerd[1574]: time="2024-12-13T01:18:47.339445281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:18:47.340306 containerd[1574]: time="2024-12-13T01:18:47.339492534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:18:47.340306 containerd[1574]: time="2024-12-13T01:18:47.339506409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:47.340306 containerd[1574]: time="2024-12-13T01:18:47.339583685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:18:47.391214 containerd[1574]: time="2024-12-13T01:18:47.391172707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee720af6da831355008ab8a250fab4a9ff0752457ff390bfb82c128adef44be5\"" Dec 13 01:18:47.392233 kubelet[2368]: E1213 01:18:47.392161 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:47.394363 containerd[1574]: time="2024-12-13T01:18:47.394337740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ca77462ecc0810a40793e004d7fadd790ad152ea1fb2d6b1b6abdf6fd95746b\"" Dec 13 01:18:47.395562 containerd[1574]: time="2024-12-13T01:18:47.395502813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4134eaa007e5c3b71a24793e5152836c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2edd1541c5207222bded345b5411f12b8aebc0fc8d9cb9c86ca40d7f5e651e75\"" Dec 13 01:18:47.398047 containerd[1574]: time="2024-12-13T01:18:47.397867467Z" level=info msg="CreateContainer within sandbox \"ee720af6da831355008ab8a250fab4a9ff0752457ff390bfb82c128adef44be5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:18:47.398244 kubelet[2368]: E1213 01:18:47.398218 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:47.398578 kubelet[2368]: E1213 01:18:47.398564 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:47.401951 containerd[1574]: time="2024-12-13T01:18:47.401908181Z" level=info msg="CreateContainer within sandbox \"3ca77462ecc0810a40793e004d7fadd790ad152ea1fb2d6b1b6abdf6fd95746b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:18:47.402401 containerd[1574]: time="2024-12-13T01:18:47.402377232Z" level=info msg="CreateContainer within sandbox \"2edd1541c5207222bded345b5411f12b8aebc0fc8d9cb9c86ca40d7f5e651e75\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:18:47.416869 containerd[1574]: time="2024-12-13T01:18:47.416831303Z" level=info msg="CreateContainer within sandbox \"ee720af6da831355008ab8a250fab4a9ff0752457ff390bfb82c128adef44be5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a82c121d0a2a9d19630ec72f9b61d664db6ca648457c0066347f7d9fb5b7dded\"" Dec 13 01:18:47.417300 containerd[1574]: time="2024-12-13T01:18:47.417271842Z" level=info msg="StartContainer for \"a82c121d0a2a9d19630ec72f9b61d664db6ca648457c0066347f7d9fb5b7dded\"" Dec 13 01:18:47.425345 containerd[1574]: time="2024-12-13T01:18:47.424645405Z" level=info msg="CreateContainer within sandbox \"2edd1541c5207222bded345b5411f12b8aebc0fc8d9cb9c86ca40d7f5e651e75\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0dc3020e2f1ba78d2b5869c2f8da000b9c9620aac3181006039e3ec979f9a3ef\"" Dec 13 01:18:47.425345 containerd[1574]: time="2024-12-13T01:18:47.424953507Z" level=info msg="StartContainer for 
\"0dc3020e2f1ba78d2b5869c2f8da000b9c9620aac3181006039e3ec979f9a3ef\"" Dec 13 01:18:47.427076 kubelet[2368]: E1213 01:18:47.427054 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="1.6s" Dec 13 01:18:47.431251 containerd[1574]: time="2024-12-13T01:18:47.430366023Z" level=info msg="CreateContainer within sandbox \"3ca77462ecc0810a40793e004d7fadd790ad152ea1fb2d6b1b6abdf6fd95746b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dbab9e3f33a8e7160332f3e32616c0608570b6926a1fff6c43784eaad4ca8885\"" Dec 13 01:18:47.431660 containerd[1574]: time="2024-12-13T01:18:47.431640618Z" level=info msg="StartContainer for \"dbab9e3f33a8e7160332f3e32616c0608570b6926a1fff6c43784eaad4ca8885\"" Dec 13 01:18:47.482344 containerd[1574]: time="2024-12-13T01:18:47.482283830Z" level=info msg="StartContainer for \"a82c121d0a2a9d19630ec72f9b61d664db6ca648457c0066347f7d9fb5b7dded\" returns successfully" Dec 13 01:18:47.500109 containerd[1574]: time="2024-12-13T01:18:47.500064133Z" level=info msg="StartContainer for \"0dc3020e2f1ba78d2b5869c2f8da000b9c9620aac3181006039e3ec979f9a3ef\" returns successfully" Dec 13 01:18:47.500244 containerd[1574]: time="2024-12-13T01:18:47.500173645Z" level=info msg="StartContainer for \"dbab9e3f33a8e7160332f3e32616c0608570b6926a1fff6c43784eaad4ca8885\" returns successfully" Dec 13 01:18:47.529464 kubelet[2368]: I1213 01:18:47.529413 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:18:47.530027 kubelet[2368]: E1213 01:18:47.529991 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost" Dec 13 01:18:48.055339 kubelet[2368]: E1213 01:18:48.055169 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:48.059046 kubelet[2368]: E1213 01:18:48.058264 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:48.059769 kubelet[2368]: E1213 01:18:48.059759 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:48.583144 kubelet[2368]: E1213 01:18:48.583105 2368 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 01:18:48.928142 kubelet[2368]: E1213 01:18:48.928059 2368 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 01:18:49.029815 kubelet[2368]: E1213 01:18:49.029784 2368 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:18:49.061643 kubelet[2368]: E1213 01:18:49.061625 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:49.131266 
kubelet[2368]: I1213 01:18:49.131230 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:18:49.136750 kubelet[2368]: I1213 01:18:49.136720 2368 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:18:49.142280 kubelet[2368]: E1213 01:18:49.142257 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:18:49.242561 kubelet[2368]: E1213 01:18:49.242469 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:18:49.343034 kubelet[2368]: E1213 01:18:49.342979 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:18:49.443111 kubelet[2368]: E1213 01:18:49.443072 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:18:50.015961 kubelet[2368]: I1213 01:18:50.015924 2368 apiserver.go:52] "Watching apiserver" Dec 13 01:18:50.024547 kubelet[2368]: I1213 01:18:50.024495 2368 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:18:50.068155 kubelet[2368]: E1213 01:18:50.068122 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:50.804947 systemd[1]: Reloading requested from client PID 2644 ('systemctl') (unit session-7.scope)... Dec 13 01:18:50.804964 systemd[1]: Reloading... Dec 13 01:18:50.878068 zram_generator::config[2689]: No configuration found. Dec 13 01:18:50.991330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:18:51.064747 kubelet[2368]: E1213 01:18:51.064663 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:51.068781 systemd[1]: Reloading finished in 263 ms. Dec 13 01:18:51.103889 kubelet[2368]: I1213 01:18:51.103851 2368 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:18:51.103944 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:18:51.122508 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:18:51.122894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:18:51.136353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:18:51.268846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:18:51.273428 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:18:51.317412 kubelet[2738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:18:51.317412 kubelet[2738]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
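[Annotation] The "client-ca-bundle::/etc/kubernetes/pki/ca.crt" controller shut down and restarted around the kubelet restart above manages the CA bundle the kubelet uses to verify client certificates presented to its own API on port 10250. Loading such a bundle is plain standard-library Go; a small sketch (the tls.Config wiring in the comment is an assumption about usage, not shown in the log):

    // Load the client-CA bundle referenced in the log into an x509 pool.
    package main

    import (
        "crypto/x509"
        "fmt"
        "os"
    )

    func main() {
        pemData, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // path from the log
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(pemData) {
            panic("no certificates in CA bundle")
        }
        // pool would typically be handed to tls.Config.ClientCAs on the server side.
        fmt.Println("client-ca bundle loaded")
    }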
Dec 13 01:18:51.317412 kubelet[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:18:51.317412 kubelet[2738]: I1213 01:18:51.317387 2738 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:18:51.322100 kubelet[2738]: I1213 01:18:51.322077 2738 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:18:51.322770 kubelet[2738]: I1213 01:18:51.322176 2738 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:18:51.322770 kubelet[2738]: I1213 01:18:51.322414 2738 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:18:51.324018 kubelet[2738]: I1213 01:18:51.323986 2738 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:18:51.326159 kubelet[2738]: I1213 01:18:51.325999 2738 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:18:51.338150 sudo[2753]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:18:51.338524 sudo[2753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:18:51.339567 kubelet[2738]: I1213 01:18:51.339490 2738 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:18:51.340102 kubelet[2738]: I1213 01:18:51.340080 2738 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:18:51.340277 kubelet[2738]: I1213 01:18:51.340255 2738 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:18:51.340367 kubelet[2738]: I1213 01:18:51.340285 2738 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:18:51.340367 kubelet[2738]: I1213 01:18:51.340295 2738 container_manager_linux.go:301] "Creating device plugin 
manager" Dec 13 01:18:51.340367 kubelet[2738]: I1213 01:18:51.340329 2738 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:18:51.340425 kubelet[2738]: I1213 01:18:51.340420 2738 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:18:51.340450 kubelet[2738]: I1213 01:18:51.340433 2738 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:18:51.340477 kubelet[2738]: I1213 01:18:51.340462 2738 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:18:51.340498 kubelet[2738]: I1213 01:18:51.340478 2738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:18:51.341427 kubelet[2738]: I1213 01:18:51.341397 2738 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:18:51.341764 kubelet[2738]: I1213 01:18:51.341753 2738 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:18:51.344002 kubelet[2738]: I1213 01:18:51.343981 2738 server.go:1256] "Started kubelet" Dec 13 01:18:51.344645 kubelet[2738]: I1213 01:18:51.344537 2738 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:18:51.345036 kubelet[2738]: I1213 01:18:51.344734 2738 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:18:51.345840 kubelet[2738]: I1213 01:18:51.345822 2738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:18:51.346762 kubelet[2738]: I1213 01:18:51.346740 2738 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:18:51.347855 kubelet[2738]: I1213 01:18:51.347772 2738 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:18:51.351908 kubelet[2738]: I1213 01:18:51.351831 2738 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:18:51.352607 kubelet[2738]: I1213 01:18:51.352585 2738 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:18:51.353119 kubelet[2738]: I1213 01:18:51.353102 2738 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:18:51.356576 kubelet[2738]: I1213 01:18:51.356164 2738 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:18:51.356644 kubelet[2738]: I1213 01:18:51.356614 2738 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:18:51.360820 kubelet[2738]: E1213 01:18:51.360354 2738 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:18:51.360820 kubelet[2738]: I1213 01:18:51.360522 2738 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:18:51.366646 kubelet[2738]: I1213 01:18:51.366619 2738 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:18:51.370109 kubelet[2738]: I1213 01:18:51.370092 2738 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:18:51.370192 kubelet[2738]: I1213 01:18:51.370118 2738 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:18:51.370192 kubelet[2738]: I1213 01:18:51.370136 2738 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:18:51.370241 kubelet[2738]: E1213 01:18:51.370196 2738 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:18:51.408291 kubelet[2738]: I1213 01:18:51.408255 2738 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:18:51.408291 kubelet[2738]: I1213 01:18:51.408282 2738 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:18:51.408291 kubelet[2738]: I1213 01:18:51.408299 2738 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:18:51.408461 kubelet[2738]: I1213 01:18:51.408431 2738 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:18:51.408461 kubelet[2738]: I1213 01:18:51.408451 2738 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:18:51.408461 kubelet[2738]: I1213 01:18:51.408457 2738 policy_none.go:49] "None policy: Start" Dec 13 01:18:51.408983 kubelet[2738]: I1213 01:18:51.408965 2738 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:18:51.409039 kubelet[2738]: I1213 01:18:51.408987 2738 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:18:51.409173 kubelet[2738]: I1213 01:18:51.409149 2738 state_mem.go:75] "Updated machine memory state" Dec 13 01:18:51.410673 kubelet[2738]: I1213 01:18:51.410593 2738 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:18:51.410837 kubelet[2738]: I1213 01:18:51.410820 2738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:18:51.458694 kubelet[2738]: I1213 01:18:51.458673 2738 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:18:51.465833 kubelet[2738]: I1213 01:18:51.465287 2738 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:18:51.465833 kubelet[2738]: I1213 01:18:51.465351 2738 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:18:51.470755 kubelet[2738]: I1213 01:18:51.470740 2738 topology_manager.go:215] "Topology Admit Handler" podUID="4134eaa007e5c3b71a24793e5152836c" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:18:51.470892 kubelet[2738]: I1213 01:18:51.470879 2738 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:18:51.471388 kubelet[2738]: I1213 01:18:51.471096 2738 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:18:51.479447 kubelet[2738]: E1213 01:18:51.479421 2738 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:18:51.555566 kubelet[2738]: I1213 01:18:51.555532 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4134eaa007e5c3b71a24793e5152836c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4134eaa007e5c3b71a24793e5152836c\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 01:18:51.555566 kubelet[2738]: I1213 01:18:51.555583 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:18:51.555727 kubelet[2738]: I1213 01:18:51.555607 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:18:51.555727 kubelet[2738]: I1213 01:18:51.555627 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4134eaa007e5c3b71a24793e5152836c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4134eaa007e5c3b71a24793e5152836c\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:18:51.555727 kubelet[2738]: I1213 01:18:51.555646 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4134eaa007e5c3b71a24793e5152836c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4134eaa007e5c3b71a24793e5152836c\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:18:51.555727 kubelet[2738]: I1213 01:18:51.555664 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:18:51.555727 kubelet[2738]: I1213 01:18:51.555680 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:18:51.555843 kubelet[2738]: I1213 01:18:51.555698 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:18:51.555843 kubelet[2738]: I1213 01:18:51.555716 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:18:51.779916 kubelet[2738]: E1213 01:18:51.779796 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:51.780560 kubelet[2738]: E1213 01:18:51.780495 
2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:51.780912 kubelet[2738]: E1213 01:18:51.780676 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:51.815424 sudo[2753]: pam_unix(sudo:session): session closed for user root Dec 13 01:18:52.341511 kubelet[2738]: I1213 01:18:52.341468 2738 apiserver.go:52] "Watching apiserver" Dec 13 01:18:52.352847 kubelet[2738]: I1213 01:18:52.352818 2738 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:18:52.380515 kubelet[2738]: E1213 01:18:52.380475 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:52.385902 kubelet[2738]: E1213 01:18:52.385865 2738 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:18:52.386375 kubelet[2738]: E1213 01:18:52.385981 2738 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 01:18:52.386545 kubelet[2738]: E1213 01:18:52.386485 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:52.386715 kubelet[2738]: E1213 01:18:52.386547 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:52.414259 kubelet[2738]: I1213 01:18:52.414040 2738 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.404226171 podStartE2EDuration="2.404226171s" podCreationTimestamp="2024-12-13 01:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:52.402592497 +0000 UTC m=+1.125118894" watchObservedRunningTime="2024-12-13 01:18:52.404226171 +0000 UTC m=+1.126752568" Dec 13 01:18:52.431752 kubelet[2738]: I1213 01:18:52.431674 2738 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4316318319999999 podStartE2EDuration="1.431631832s" podCreationTimestamp="2024-12-13 01:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:52.43155527 +0000 UTC m=+1.154081667" watchObservedRunningTime="2024-12-13 01:18:52.431631832 +0000 UTC m=+1.154158229" Dec 13 01:18:52.431752 kubelet[2738]: I1213 01:18:52.431760 2738 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.431743145 podStartE2EDuration="1.431743145s" podCreationTimestamp="2024-12-13 01:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:52.420812043 +0000 UTC m=+1.143338440" 
watchObservedRunningTime="2024-12-13 01:18:52.431743145 +0000 UTC m=+1.154269552" Dec 13 01:18:53.198025 sudo[1762]: pam_unix(sudo:session): session closed for user root Dec 13 01:18:53.199822 sshd[1755]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:53.204560 systemd[1]: sshd@6-10.0.0.160:22-10.0.0.1:55016.service: Deactivated successfully. Dec 13 01:18:53.207087 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:18:53.207847 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:18:53.208693 systemd-logind[1549]: Removed session 7. Dec 13 01:18:53.381844 kubelet[2738]: E1213 01:18:53.381825 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:53.382163 kubelet[2738]: E1213 01:18:53.382072 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:53.750433 kubelet[2738]: E1213 01:18:53.750413 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:54.523035 kubelet[2738]: E1213 01:18:54.522989 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:55.935824 kubelet[2738]: E1213 01:18:55.935785 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:03.754248 kubelet[2738]: E1213 01:19:03.754215 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:04.526641 kubelet[2738]: E1213 01:19:04.526588 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:04.686132 update_engine[1558]: I20241213 01:19:04.686072 1558 update_attempter.cc:509] Updating boot flags... Dec 13 01:19:04.712036 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2823) Dec 13 01:19:04.735113 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2826) Dec 13 01:19:04.755043 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2826) Dec 13 01:19:05.414303 kubelet[2738]: I1213 01:19:05.414271 2738 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:19:05.414732 containerd[1574]: time="2024-12-13T01:19:05.414614549Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
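[Annotation] The "Nameserver limits exceeded" errors repeating throughout come from the kubelet capping resolv.conf at three nameservers (the classic glibc resolver limit); with 1.1.1.1, 1.0.0.1 and 8.8.8.8 already applied, any further entries are dropped and logged. A simplified stand-in for that check, standard library only and not the kubelet's actual parser:

    // Count nameserver lines in resolv.conf and report what would be kept.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // kubelet's cap, matching the glibc resolver

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // Mirrors the "the applied nameserver line is: ..." message above.
            fmt.Printf("limits exceeded, applied line would be: %s\n",
                strings.Join(servers[:maxNameservers], " "))
        }
    }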
Dec 13 01:19:05.414981 kubelet[2738]: I1213 01:19:05.414789 2738 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:19:05.939701 kubelet[2738]: E1213 01:19:05.939680 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:06.488354 kubelet[2738]: I1213 01:19:06.488317 2738 topology_manager.go:215] "Topology Admit Handler" podUID="ebd3e8fc-3247-40b2-a454-f5def47b9fe5" podNamespace="kube-system" podName="kube-proxy-fmjrj" Dec 13 01:19:06.492310 kubelet[2738]: I1213 01:19:06.492271 2738 topology_manager.go:215] "Topology Admit Handler" podUID="c63263af-e197-442c-b5a6-6b92d26110b3" podNamespace="kube-system" podName="cilium-9dgzq" Dec 13 01:19:06.551291 kubelet[2738]: I1213 01:19:06.551247 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c63263af-e197-442c-b5a6-6b92d26110b3-clustermesh-secrets\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551291 kubelet[2738]: I1213 01:19:06.551289 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c63263af-e197-442c-b5a6-6b92d26110b3-hubble-tls\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551462 kubelet[2738]: I1213 01:19:06.551312 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebd3e8fc-3247-40b2-a454-f5def47b9fe5-lib-modules\") pod \"kube-proxy-fmjrj\" (UID: \"ebd3e8fc-3247-40b2-a454-f5def47b9fe5\") " pod="kube-system/kube-proxy-fmjrj" Dec 13 01:19:06.551462 kubelet[2738]: I1213 01:19:06.551333 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-run\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551462 kubelet[2738]: I1213 01:19:06.551357 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-cgroup\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551462 kubelet[2738]: I1213 01:19:06.551378 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-lib-modules\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551462 kubelet[2738]: I1213 01:19:06.551431 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-etc-cni-netd\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551462 kubelet[2738]: I1213 01:19:06.551464 2738 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-hostproc\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551593 kubelet[2738]: I1213 01:19:06.551481 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-xtables-lock\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551593 kubelet[2738]: I1213 01:19:06.551498 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-bpf-maps\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551593 kubelet[2738]: I1213 01:19:06.551518 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-host-proc-sys-net\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551593 kubelet[2738]: I1213 01:19:06.551544 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebd3e8fc-3247-40b2-a454-f5def47b9fe5-xtables-lock\") pod \"kube-proxy-fmjrj\" (UID: \"ebd3e8fc-3247-40b2-a454-f5def47b9fe5\") " pod="kube-system/kube-proxy-fmjrj" Dec 13 01:19:06.551593 kubelet[2738]: I1213 01:19:06.551561 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cni-path\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551593 kubelet[2738]: I1213 01:19:06.551578 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-config-path\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551723 kubelet[2738]: I1213 01:19:06.551595 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-host-proc-sys-kernel\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551723 kubelet[2738]: I1213 01:19:06.551615 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48js7\" (UniqueName: \"kubernetes.io/projected/c63263af-e197-442c-b5a6-6b92d26110b3-kube-api-access-48js7\") pod \"cilium-9dgzq\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " pod="kube-system/cilium-9dgzq" Dec 13 01:19:06.551723 kubelet[2738]: I1213 01:19:06.551634 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ebd3e8fc-3247-40b2-a454-f5def47b9fe5-kube-proxy\") 
pod \"kube-proxy-fmjrj\" (UID: \"ebd3e8fc-3247-40b2-a454-f5def47b9fe5\") " pod="kube-system/kube-proxy-fmjrj" Dec 13 01:19:06.551723 kubelet[2738]: I1213 01:19:06.551651 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p85hf\" (UniqueName: \"kubernetes.io/projected/ebd3e8fc-3247-40b2-a454-f5def47b9fe5-kube-api-access-p85hf\") pod \"kube-proxy-fmjrj\" (UID: \"ebd3e8fc-3247-40b2-a454-f5def47b9fe5\") " pod="kube-system/kube-proxy-fmjrj" Dec 13 01:19:06.585539 kubelet[2738]: I1213 01:19:06.585506 2738 topology_manager.go:215] "Topology Admit Handler" podUID="1e49bc98-630b-42bd-8702-b4a80320402a" podNamespace="kube-system" podName="cilium-operator-5cc964979-xbmzn" Dec 13 01:19:06.652212 kubelet[2738]: I1213 01:19:06.652051 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e49bc98-630b-42bd-8702-b4a80320402a-cilium-config-path\") pod \"cilium-operator-5cc964979-xbmzn\" (UID: \"1e49bc98-630b-42bd-8702-b4a80320402a\") " pod="kube-system/cilium-operator-5cc964979-xbmzn" Dec 13 01:19:06.652212 kubelet[2738]: I1213 01:19:06.652089 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb9gt\" (UniqueName: \"kubernetes.io/projected/1e49bc98-630b-42bd-8702-b4a80320402a-kube-api-access-lb9gt\") pod \"cilium-operator-5cc964979-xbmzn\" (UID: \"1e49bc98-630b-42bd-8702-b4a80320402a\") " pod="kube-system/cilium-operator-5cc964979-xbmzn" Dec 13 01:19:06.791802 kubelet[2738]: E1213 01:19:06.791709 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:06.792477 containerd[1574]: time="2024-12-13T01:19:06.792439521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fmjrj,Uid:ebd3e8fc-3247-40b2-a454-f5def47b9fe5,Namespace:kube-system,Attempt:0,}" Dec 13 01:19:06.798162 kubelet[2738]: E1213 01:19:06.798140 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:06.798535 containerd[1574]: time="2024-12-13T01:19:06.798492492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9dgzq,Uid:c63263af-e197-442c-b5a6-6b92d26110b3,Namespace:kube-system,Attempt:0,}" Dec 13 01:19:06.825812 containerd[1574]: time="2024-12-13T01:19:06.825374622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:19:06.826136 containerd[1574]: time="2024-12-13T01:19:06.825945806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:19:06.826136 containerd[1574]: time="2024-12-13T01:19:06.825962819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:19:06.826136 containerd[1574]: time="2024-12-13T01:19:06.826069635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:19:06.830153 containerd[1574]: time="2024-12-13T01:19:06.830092362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:19:06.830292 containerd[1574]: time="2024-12-13T01:19:06.830246630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:19:06.830292 containerd[1574]: time="2024-12-13T01:19:06.830267771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:19:06.830847 containerd[1574]: time="2024-12-13T01:19:06.830783077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:19:06.864542 containerd[1574]: time="2024-12-13T01:19:06.864501413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fmjrj,Uid:ebd3e8fc-3247-40b2-a454-f5def47b9fe5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a169372794d57e5b932eef51eb77189a96b5e6b2b1af2cd61d3126ff6c807ac\"" Dec 13 01:19:06.865038 containerd[1574]: time="2024-12-13T01:19:06.864957996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9dgzq,Uid:c63263af-e197-442c-b5a6-6b92d26110b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\"" Dec 13 01:19:06.865418 kubelet[2738]: E1213 01:19:06.865396 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:06.866255 kubelet[2738]: E1213 01:19:06.866224 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:06.867120 containerd[1574]: time="2024-12-13T01:19:06.867087112Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:19:06.867787 containerd[1574]: time="2024-12-13T01:19:06.867723162Z" level=info msg="CreateContainer within sandbox \"4a169372794d57e5b932eef51eb77189a96b5e6b2b1af2cd61d3126ff6c807ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:19:06.885147 containerd[1574]: time="2024-12-13T01:19:06.885080658Z" level=info msg="CreateContainer within sandbox \"4a169372794d57e5b932eef51eb77189a96b5e6b2b1af2cd61d3126ff6c807ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9cc2b2b99f87894ca061195350fe02039d3e34a4dc4d117a01f994ce6ca8f08d\"" Dec 13 01:19:06.885544 containerd[1574]: time="2024-12-13T01:19:06.885515698Z" level=info msg="StartContainer for \"9cc2b2b99f87894ca061195350fe02039d3e34a4dc4d117a01f994ce6ca8f08d\"" Dec 13 01:19:06.890457 kubelet[2738]: E1213 01:19:06.890420 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:06.890755 containerd[1574]: time="2024-12-13T01:19:06.890728998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-xbmzn,Uid:1e49bc98-630b-42bd-8702-b4a80320402a,Namespace:kube-system,Attempt:0,}" Dec 13 01:19:06.917992 containerd[1574]: time="2024-12-13T01:19:06.917807477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:19:06.917992 containerd[1574]: time="2024-12-13T01:19:06.917854528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:19:06.917992 containerd[1574]: time="2024-12-13T01:19:06.917865910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:19:06.917992 containerd[1574]: time="2024-12-13T01:19:06.917958729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:19:06.960364 containerd[1574]: time="2024-12-13T01:19:06.960303602Z" level=info msg="StartContainer for \"9cc2b2b99f87894ca061195350fe02039d3e34a4dc4d117a01f994ce6ca8f08d\" returns successfully" Dec 13 01:19:06.977410 containerd[1574]: time="2024-12-13T01:19:06.977367511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-xbmzn,Uid:1e49bc98-630b-42bd-8702-b4a80320402a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0b79900899aec540b7ca588202aa569508c8edb177bd57839e90f559d665e33\"" Dec 13 01:19:06.978109 kubelet[2738]: E1213 01:19:06.978088 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:07.401570 kubelet[2738]: E1213 01:19:07.401544 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:12.084388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880978964.mount: Deactivated successfully. Dec 13 01:19:13.420118 systemd-resolved[1459]: Under memory pressure, flushing caches. Dec 13 01:19:13.422126 systemd-journald[1158]: Under memory pressure, flushing caches. Dec 13 01:19:13.420164 systemd-resolved[1459]: Flushed all caches. 
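
The "Nameserver limits exceeded" errors that repeat through this log come from the kubelet clamping the node's resolv.conf to the glibc limit of three nameservers: this node lists at least four, so everything past 1.1.1.1, 1.0.0.1, and 8.8.8.8 is dropped. A minimal Go sketch of that clamping behavior, assuming only the glibc MAXNS=3 limit (this is not kubelet's actual dns.go code):

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

const maxNameservers = 3 // glibc resolv.conf limit (MAXNS)

// clampNameservers keeps at most maxNameservers entries from a
// resolv.conf, logging the applied set when anything was dropped.
func clampNameservers(path string) ([]string, error) {
    f, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer f.Close()

    var servers []string
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) >= 2 && fields[0] == "nameserver" {
            servers = append(servers, fields[1])
        }
    }
    if err := sc.Err(); err != nil {
        return nil, err
    }
    if len(servers) > maxNameservers {
        servers = servers[:maxNameservers]
        fmt.Fprintf(os.Stderr, "Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n", strings.Join(servers, " "))
    }
    return servers, nil
}

func main() {
    servers, err := clampNameservers("/etc/resolv.conf")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("applied nameservers:", servers)
}

The message is cosmetic as long as the first three resolvers answer; the fix on a real node is to trim /etc/resolv.conf rather than to silence the kubelet.
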
Dec 13 01:19:14.283416 containerd[1574]: time="2024-12-13T01:19:14.283360616Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:19:14.284136 containerd[1574]: time="2024-12-13T01:19:14.284097588Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735343" Dec 13 01:19:14.285300 containerd[1574]: time="2024-12-13T01:19:14.285259664Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:19:14.286720 containerd[1574]: time="2024-12-13T01:19:14.286689853Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.419573796s" Dec 13 01:19:14.286754 containerd[1574]: time="2024-12-13T01:19:14.286721754Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:19:14.287391 containerd[1574]: time="2024-12-13T01:19:14.287344717Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:19:14.288663 containerd[1574]: time="2024-12-13T01:19:14.288568922Z" level=info msg="CreateContainer within sandbox \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:19:14.300995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount585564580.mount: Deactivated successfully. 
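
The PullImage reference above carries both a tag (v1.12.5) and a digest, and the pull result is recorded with repo tag "" and only the repo digest: when both are present, resolution goes by digest and the tag is informational. A sketch using the reference-parsing library shared by containerd and Docker (assuming the github.com/distribution/reference import path):

package main

import (
    "fmt"

    "github.com/distribution/reference"
)

func main() {
    ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

    named, err := reference.ParseNormalizedNamed(ref)
    if err != nil {
        panic(err)
    }
    if tagged, ok := named.(reference.Tagged); ok {
        fmt.Println("tag (informational only):", tagged.Tag()) // v1.12.5
    }
    if canonical, ok := named.(reference.Canonical); ok {
        // The digest is what the content is fetched and verified by,
        // which is why containerd stores no repo tag for this image.
        fmt.Println("pull by digest:", canonical.Digest())
    }
}
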
Dec 13 01:19:14.302467 containerd[1574]: time="2024-12-13T01:19:14.302430473Z" level=info msg="CreateContainer within sandbox \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369\"" Dec 13 01:19:14.302963 containerd[1574]: time="2024-12-13T01:19:14.302934308Z" level=info msg="StartContainer for \"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369\"" Dec 13 01:19:14.353223 containerd[1574]: time="2024-12-13T01:19:14.353172053Z" level=info msg="StartContainer for \"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369\" returns successfully" Dec 13 01:19:14.752088 kubelet[2738]: E1213 01:19:14.699379 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:14.778295 kubelet[2738]: I1213 01:19:14.778263 2738 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fmjrj" podStartSLOduration=8.778227807 podStartE2EDuration="8.778227807s" podCreationTimestamp="2024-12-13 01:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:19:07.411841649 +0000 UTC m=+16.134368046" watchObservedRunningTime="2024-12-13 01:19:14.778227807 +0000 UTC m=+23.500754204" Dec 13 01:19:14.780596 containerd[1574]: time="2024-12-13T01:19:14.780531299Z" level=info msg="shim disconnected" id=3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369 namespace=k8s.io Dec 13 01:19:14.780596 containerd[1574]: time="2024-12-13T01:19:14.780595262Z" level=warning msg="cleaning up after shim disconnected" id=3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369 namespace=k8s.io Dec 13 01:19:14.780695 containerd[1574]: time="2024-12-13T01:19:14.780604639Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:19:14.792448 containerd[1574]: time="2024-12-13T01:19:14.792383651Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:19:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:19:15.298611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369-rootfs.mount: Deactivated successfully. 
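
The podStartSLOduration figure above is plain timestamp arithmetic: both pull timestamps are zero values (no image pull was counted for kube-proxy), so the SLO duration reduces to observedRunningTime minus podCreationTimestamp. A one-liner confirming the logged figure:

package main

import (
    "fmt"
    "time"
)

func main() {
    created := time.Date(2024, 12, 13, 1, 19, 6, 0, time.UTC)
    running := time.Date(2024, 12, 13, 1, 19, 14, 778227807, time.UTC)
    fmt.Println("podStartSLOduration:", running.Sub(created)) // 8.778227807s
}
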
Dec 13 01:19:15.702408 kubelet[2738]: E1213 01:19:15.702372 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:15.704168 containerd[1574]: time="2024-12-13T01:19:15.704119764Z" level=info msg="CreateContainer within sandbox \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:19:15.927532 containerd[1574]: time="2024-12-13T01:19:15.927473347Z" level=info msg="CreateContainer within sandbox \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905\"" Dec 13 01:19:15.928192 containerd[1574]: time="2024-12-13T01:19:15.928141456Z" level=info msg="StartContainer for \"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905\"" Dec 13 01:19:15.980820 containerd[1574]: time="2024-12-13T01:19:15.980705240Z" level=info msg="StartContainer for \"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905\" returns successfully" Dec 13 01:19:15.991691 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:19:15.992295 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:19:15.992370 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:19:15.998457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:19:16.016803 containerd[1574]: time="2024-12-13T01:19:16.016751629Z" level=info msg="shim disconnected" id=4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905 namespace=k8s.io Dec 13 01:19:16.016987 containerd[1574]: time="2024-12-13T01:19:16.016804180Z" level=warning msg="cleaning up after shim disconnected" id=4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905 namespace=k8s.io Dec 13 01:19:16.016987 containerd[1574]: time="2024-12-13T01:19:16.016814530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:19:16.024248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:19:16.029251 containerd[1574]: time="2024-12-13T01:19:16.029199523Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:19:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:19:16.298560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905-rootfs.mount: Deactivated successfully. 
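
The start/exit/"shim disconnected" cycles here are cilium's init containers completing one at a time: mount-cgroup above, apply-sysctl-overwrites here, with mount-bpf-fs and clean-cilium-state still to come before the long-running cilium-agent starts. The kubelet starts each init container only after the previous one terminates with exit code 0. A sketch of a helper that reports how far a pod has progressed (initProgress is illustrative, not a real kubelet or cilium function):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// initProgress returns the names of init containers that have terminated
// successfully, stopping at the first one that has not finished yet.
func initProgress(pod *corev1.Pod) []string {
    var done []string
    for _, st := range pod.Status.InitContainerStatuses {
        t := st.State.Terminated
        if t == nil || t.ExitCode != 0 {
            break // the kubelet will not start the next init container yet
        }
        done = append(done, st.Name)
    }
    return done
}

func main() {
    // For the cilium pod in this log, this would print the completed
    // prefix of [mount-cgroup apply-sysctl-overwrites mount-bpf-fs
    // clean-cilium-state].
    fmt.Println(initProgress(&corev1.Pod{}))
}
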
Dec 13 01:19:16.705467 kubelet[2738]: E1213 01:19:16.705436 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:16.707506 containerd[1574]: time="2024-12-13T01:19:16.707458715Z" level=info msg="CreateContainer within sandbox \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:19:16.724003 containerd[1574]: time="2024-12-13T01:19:16.723961923Z" level=info msg="CreateContainer within sandbox \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c\"" Dec 13 01:19:16.724448 containerd[1574]: time="2024-12-13T01:19:16.724411603Z" level=info msg="StartContainer for \"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c\"" Dec 13 01:19:16.782058 containerd[1574]: time="2024-12-13T01:19:16.781946782Z" level=info msg="StartContainer for \"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c\" returns successfully" Dec 13 01:19:16.805848 containerd[1574]: time="2024-12-13T01:19:16.805789104Z" level=info msg="shim disconnected" id=674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c namespace=k8s.io Dec 13 01:19:16.805848 containerd[1574]: time="2024-12-13T01:19:16.805839230Z" level=warning msg="cleaning up after shim disconnected" id=674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c namespace=k8s.io Dec 13 01:19:16.805848 containerd[1574]: time="2024-12-13T01:19:16.805848487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:19:16.850226 systemd[1]: Started sshd@7-10.0.0.160:22-10.0.0.1:49750.service - OpenSSH per-connection server daemon (10.0.0.1:49750). Dec 13 01:19:16.879920 sshd[3326]: Accepted publickey for core from 10.0.0.1 port 49750 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:16.881560 sshd[3326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:16.885772 systemd-logind[1549]: New session 8 of user core. Dec 13 01:19:16.892268 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:19:17.013138 sshd[3326]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:17.017487 systemd[1]: sshd@7-10.0.0.160:22-10.0.0.1:49750.service: Deactivated successfully. Dec 13 01:19:17.020874 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:19:17.021757 systemd-logind[1549]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:19:17.022941 systemd-logind[1549]: Removed session 8. Dec 13 01:19:17.298407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c-rootfs.mount: Deactivated successfully. 
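
The mount-bpf-fs step that just completed mounts the BPF filesystem on /sys/fs/bpf so pinned eBPF maps survive agent restarts, the moral equivalent of running mount -t bpf bpf /sys/fs/bpf. A sketch with golang.org/x/sys/unix (needs CAP_SYS_ADMIN, and errors if bpffs is already mounted; cilium checks for an existing mount first):

package main

import (
    "fmt"
    "os"

    "golang.org/x/sys/unix"
)

func main() {
    if err := unix.Mount("bpf", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
        fmt.Fprintln(os.Stderr, "mount bpffs:", err)
        os.Exit(1)
    }
    fmt.Println("bpffs mounted at /sys/fs/bpf")
}
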
Dec 13 01:19:17.427035 containerd[1574]: time="2024-12-13T01:19:17.426978589Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:19:17.427599 containerd[1574]: time="2024-12-13T01:19:17.427552937Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907237" Dec 13 01:19:17.428666 containerd[1574]: time="2024-12-13T01:19:17.428626238Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:19:17.429944 containerd[1574]: time="2024-12-13T01:19:17.429904740Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.142518232s" Dec 13 01:19:17.429984 containerd[1574]: time="2024-12-13T01:19:17.429944467Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:19:17.431342 containerd[1574]: time="2024-12-13T01:19:17.431316628Z" level=info msg="CreateContainer within sandbox \"c0b79900899aec540b7ca588202aa569508c8edb177bd57839e90f559d665e33\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:19:17.441866 containerd[1574]: time="2024-12-13T01:19:17.441829205Z" level=info msg="CreateContainer within sandbox \"c0b79900899aec540b7ca588202aa569508c8edb177bd57839e90f559d665e33\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\"" Dec 13 01:19:17.442405 containerd[1574]: time="2024-12-13T01:19:17.442379076Z" level=info msg="StartContainer for \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\"" Dec 13 01:19:17.488826 containerd[1574]: time="2024-12-13T01:19:17.488783543Z" level=info msg="StartContainer for \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\" returns successfully" Dec 13 01:19:17.713912 kubelet[2738]: E1213 01:19:17.713865 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:17.716730 containerd[1574]: time="2024-12-13T01:19:17.716681691Z" level=info msg="CreateContainer within sandbox \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:19:17.732128 kubelet[2738]: E1213 01:19:17.732099 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:17.747605 containerd[1574]: time="2024-12-13T01:19:17.747555414Z" level=info msg="CreateContainer within sandbox \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" for 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a\"" Dec 13 01:19:17.750032 containerd[1574]: time="2024-12-13T01:19:17.748863914Z" level=info msg="StartContainer for \"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a\"" Dec 13 01:19:17.834455 containerd[1574]: time="2024-12-13T01:19:17.834416013Z" level=info msg="StartContainer for \"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a\" returns successfully" Dec 13 01:19:17.867927 containerd[1574]: time="2024-12-13T01:19:17.867852513Z" level=info msg="shim disconnected" id=9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a namespace=k8s.io Dec 13 01:19:17.867927 containerd[1574]: time="2024-12-13T01:19:17.867905534Z" level=warning msg="cleaning up after shim disconnected" id=9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a namespace=k8s.io Dec 13 01:19:17.867927 containerd[1574]: time="2024-12-13T01:19:17.867914932Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:19:18.725921 kubelet[2738]: E1213 01:19:18.725894 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:18.725921 kubelet[2738]: E1213 01:19:18.725919 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:18.728138 containerd[1574]: time="2024-12-13T01:19:18.728101712Z" level=info msg="CreateContainer within sandbox \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:19:18.740583 kubelet[2738]: I1213 01:19:18.740547 2738 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-xbmzn" podStartSLOduration=2.28910084 podStartE2EDuration="12.740508883s" podCreationTimestamp="2024-12-13 01:19:06 +0000 UTC" firstStartedPulling="2024-12-13 01:19:06.978688364 +0000 UTC m=+15.701214761" lastFinishedPulling="2024-12-13 01:19:17.430096407 +0000 UTC m=+26.152622804" observedRunningTime="2024-12-13 01:19:17.755985762 +0000 UTC m=+26.478512159" watchObservedRunningTime="2024-12-13 01:19:18.740508883 +0000 UTC m=+27.463035280" Dec 13 01:19:18.785608 containerd[1574]: time="2024-12-13T01:19:18.785565192Z" level=info msg="CreateContainer within sandbox \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\"" Dec 13 01:19:18.786120 containerd[1574]: time="2024-12-13T01:19:18.786083372Z" level=info msg="StartContainer for \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\"" Dec 13 01:19:18.838048 containerd[1574]: time="2024-12-13T01:19:18.837991455Z" level=info msg="StartContainer for \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\" returns successfully" Dec 13 01:19:18.948599 kubelet[2738]: I1213 01:19:18.947922 2738 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:19:18.971633 kubelet[2738]: I1213 01:19:18.970124 2738 topology_manager.go:215] "Topology Admit Handler" podUID="cac885f5-f61f-4c09-8245-c34fa2c20855" podNamespace="kube-system" podName="coredns-76f75df574-nrfh6" Dec 13 
01:19:18.971633 kubelet[2738]: I1213 01:19:18.971196 2738 topology_manager.go:215] "Topology Admit Handler" podUID="fc52f232-3111-4e82-9c71-a43b53de9754" podNamespace="kube-system" podName="coredns-76f75df574-9zj4f" Dec 13 01:19:19.036315 kubelet[2738]: I1213 01:19:19.036186 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsdg2\" (UniqueName: \"kubernetes.io/projected/cac885f5-f61f-4c09-8245-c34fa2c20855-kube-api-access-wsdg2\") pod \"coredns-76f75df574-nrfh6\" (UID: \"cac885f5-f61f-4c09-8245-c34fa2c20855\") " pod="kube-system/coredns-76f75df574-nrfh6" Dec 13 01:19:19.036315 kubelet[2738]: I1213 01:19:19.036233 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc52f232-3111-4e82-9c71-a43b53de9754-config-volume\") pod \"coredns-76f75df574-9zj4f\" (UID: \"fc52f232-3111-4e82-9c71-a43b53de9754\") " pod="kube-system/coredns-76f75df574-9zj4f" Dec 13 01:19:19.036315 kubelet[2738]: I1213 01:19:19.036255 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cac885f5-f61f-4c09-8245-c34fa2c20855-config-volume\") pod \"coredns-76f75df574-nrfh6\" (UID: \"cac885f5-f61f-4c09-8245-c34fa2c20855\") " pod="kube-system/coredns-76f75df574-nrfh6" Dec 13 01:19:19.036490 kubelet[2738]: I1213 01:19:19.036345 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv8k9\" (UniqueName: \"kubernetes.io/projected/fc52f232-3111-4e82-9c71-a43b53de9754-kube-api-access-vv8k9\") pod \"coredns-76f75df574-9zj4f\" (UID: \"fc52f232-3111-4e82-9c71-a43b53de9754\") " pod="kube-system/coredns-76f75df574-9zj4f" Dec 13 01:19:19.277288 kubelet[2738]: E1213 01:19:19.277260 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:19.277864 containerd[1574]: time="2024-12-13T01:19:19.277820912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nrfh6,Uid:cac885f5-f61f-4c09-8245-c34fa2c20855,Namespace:kube-system,Attempt:0,}" Dec 13 01:19:19.279934 kubelet[2738]: E1213 01:19:19.279905 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:19.280323 containerd[1574]: time="2024-12-13T01:19:19.280292157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9zj4f,Uid:fc52f232-3111-4e82-9c71-a43b53de9754,Namespace:kube-system,Attempt:0,}" Dec 13 01:19:19.731060 kubelet[2738]: E1213 01:19:19.731032 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:20.732650 kubelet[2738]: E1213 01:19:20.732616 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:20.960090 systemd-networkd[1242]: cilium_host: Link UP Dec 13 01:19:20.960293 systemd-networkd[1242]: cilium_net: Link UP Dec 13 01:19:20.960297 systemd-networkd[1242]: cilium_net: Gained carrier Dec 13 01:19:20.960483 systemd-networkd[1242]: cilium_host: Gained carrier Dec 13 
01:19:20.960684 systemd-networkd[1242]: cilium_host: Gained IPv6LL Dec 13 01:19:21.058941 systemd-networkd[1242]: cilium_vxlan: Link UP Dec 13 01:19:21.058949 systemd-networkd[1242]: cilium_vxlan: Gained carrier Dec 13 01:19:21.259039 kernel: NET: Registered PF_ALG protocol family Dec 13 01:19:21.734539 kubelet[2738]: E1213 01:19:21.734515 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:21.868186 systemd-networkd[1242]: cilium_net: Gained IPv6LL Dec 13 01:19:21.884049 systemd-networkd[1242]: lxc_health: Link UP Dec 13 01:19:21.896898 systemd-networkd[1242]: lxc_health: Gained carrier Dec 13 01:19:22.029225 systemd[1]: Started sshd@8-10.0.0.160:22-10.0.0.1:42332.service - OpenSSH per-connection server daemon (10.0.0.1:42332). Dec 13 01:19:22.059990 sshd[3941]: Accepted publickey for core from 10.0.0.1 port 42332 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:22.061438 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:22.065560 systemd-logind[1549]: New session 9 of user core. Dec 13 01:19:22.072309 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:19:22.188038 sshd[3941]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:22.193101 systemd[1]: sshd@8-10.0.0.160:22-10.0.0.1:42332.service: Deactivated successfully. Dec 13 01:19:22.195913 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:19:22.195918 systemd-logind[1549]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:19:22.197672 systemd-logind[1549]: Removed session 9. Dec 13 01:19:22.371761 systemd-networkd[1242]: lxcabd9946aa9ac: Link UP Dec 13 01:19:22.382071 systemd-networkd[1242]: lxc09474a9b6b5a: Link UP Dec 13 01:19:22.393054 kernel: eth0: renamed from tmpdc83e Dec 13 01:19:22.400065 kernel: eth0: renamed from tmp68631 Dec 13 01:19:22.399767 systemd-networkd[1242]: lxc09474a9b6b5a: Gained carrier Dec 13 01:19:22.407658 systemd-networkd[1242]: lxcabd9946aa9ac: Gained carrier Dec 13 01:19:22.799897 kubelet[2738]: E1213 01:19:22.799807 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:22.813290 kubelet[2738]: I1213 01:19:22.813242 2738 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9dgzq" podStartSLOduration=9.392849459 podStartE2EDuration="16.813209669s" podCreationTimestamp="2024-12-13 01:19:06 +0000 UTC" firstStartedPulling="2024-12-13 01:19:06.866657321 +0000 UTC m=+15.589183708" lastFinishedPulling="2024-12-13 01:19:14.287017531 +0000 UTC m=+23.009543918" observedRunningTime="2024-12-13 01:19:19.741385831 +0000 UTC m=+28.463912228" watchObservedRunningTime="2024-12-13 01:19:22.813209669 +0000 UTC m=+31.535736066" Dec 13 01:19:22.892171 systemd-networkd[1242]: cilium_vxlan: Gained IPv6LL Dec 13 01:19:23.660151 systemd-networkd[1242]: lxc_health: Gained IPv6LL Dec 13 01:19:23.738090 kubelet[2738]: E1213 01:19:23.738048 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:23.788133 systemd-networkd[1242]: lxc09474a9b6b5a: Gained IPv6LL Dec 13 01:19:23.852083 systemd-networkd[1242]: lxcabd9946aa9ac: Gained IPv6LL Dec 13 01:19:25.703364 
containerd[1574]: time="2024-12-13T01:19:25.703220984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:19:25.703364 containerd[1574]: time="2024-12-13T01:19:25.703279335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:19:25.703364 containerd[1574]: time="2024-12-13T01:19:25.703310744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:19:25.704203 containerd[1574]: time="2024-12-13T01:19:25.703933890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:19:25.708031 containerd[1574]: time="2024-12-13T01:19:25.707855269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:19:25.708031 containerd[1574]: time="2024-12-13T01:19:25.707897078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:19:25.708031 containerd[1574]: time="2024-12-13T01:19:25.707953645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:19:25.708256 containerd[1574]: time="2024-12-13T01:19:25.708070567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:19:25.730282 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:19:25.735565 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:19:25.757552 containerd[1574]: time="2024-12-13T01:19:25.757521182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9zj4f,Uid:fc52f232-3111-4e82-9c71-a43b53de9754,Namespace:kube-system,Attempt:0,} returns sandbox id \"6863157e82fd8dff761a72ad0f3d195dec66486862c5465e95910cffa359a307\"" Dec 13 01:19:25.758363 kubelet[2738]: E1213 01:19:25.758223 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:25.761845 containerd[1574]: time="2024-12-13T01:19:25.761768460Z" level=info msg="CreateContainer within sandbox \"6863157e82fd8dff761a72ad0f3d195dec66486862c5465e95910cffa359a307\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:19:25.764675 containerd[1574]: time="2024-12-13T01:19:25.764593032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nrfh6,Uid:cac885f5-f61f-4c09-8245-c34fa2c20855,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc83eb154cd9db55436f3c49971e22dbc49579d2c975e382e8448422cc602433\"" Dec 13 01:19:25.765101 kubelet[2738]: E1213 01:19:25.765063 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:25.772229 containerd[1574]: time="2024-12-13T01:19:25.772171557Z" level=info msg="CreateContainer within sandbox \"dc83eb154cd9db55436f3c49971e22dbc49579d2c975e382e8448422cc602433\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:19:25.794325 containerd[1574]: time="2024-12-13T01:19:25.794203231Z" level=info msg="CreateContainer within sandbox \"dc83eb154cd9db55436f3c49971e22dbc49579d2c975e382e8448422cc602433\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09b73923e46da0dbbf9bb7ee2127e1f7bbaeae8533ddd1066632464d91f7fccd\"" Dec 13 01:19:25.795552 containerd[1574]: time="2024-12-13T01:19:25.794871832Z" level=info msg="StartContainer for \"09b73923e46da0dbbf9bb7ee2127e1f7bbaeae8533ddd1066632464d91f7fccd\"" Dec 13 01:19:25.796244 containerd[1574]: time="2024-12-13T01:19:25.796221279Z" level=info msg="CreateContainer within sandbox \"6863157e82fd8dff761a72ad0f3d195dec66486862c5465e95910cffa359a307\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35069ebbb55e53329e5ebc961f163951bcaab178067de729e6f452ba12819f30\"" Dec 13 01:19:25.796519 containerd[1574]: time="2024-12-13T01:19:25.796501422Z" level=info msg="StartContainer for \"35069ebbb55e53329e5ebc961f163951bcaab178067de729e6f452ba12819f30\"" Dec 13 01:19:25.861740 containerd[1574]: time="2024-12-13T01:19:25.861693867Z" level=info msg="StartContainer for \"09b73923e46da0dbbf9bb7ee2127e1f7bbaeae8533ddd1066632464d91f7fccd\" returns successfully" Dec 13 01:19:25.861867 containerd[1574]: time="2024-12-13T01:19:25.861732180Z" level=info msg="StartContainer for \"35069ebbb55e53329e5ebc961f163951bcaab178067de729e6f452ba12819f30\" returns successfully" Dec 13 01:19:26.709353 systemd[1]: run-containerd-runc-k8s.io-dc83eb154cd9db55436f3c49971e22dbc49579d2c975e382e8448422cc602433-runc.xZm7X3.mount: Deactivated successfully. Dec 13 01:19:26.744101 kubelet[2738]: E1213 01:19:26.743991 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:26.753390 kubelet[2738]: I1213 01:19:26.753354 2738 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9zj4f" podStartSLOduration=20.752368668 podStartE2EDuration="20.752368668s" podCreationTimestamp="2024-12-13 01:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:19:26.750738329 +0000 UTC m=+35.473264726" watchObservedRunningTime="2024-12-13 01:19:26.752368668 +0000 UTC m=+35.474895065" Dec 13 01:19:26.754877 kubelet[2738]: E1213 01:19:26.754240 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:26.767580 kubelet[2738]: I1213 01:19:26.767184 2738 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nrfh6" podStartSLOduration=20.767142766 podStartE2EDuration="20.767142766s" podCreationTimestamp="2024-12-13 01:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:19:26.766768415 +0000 UTC m=+35.489294822" watchObservedRunningTime="2024-12-13 01:19:26.767142766 +0000 UTC m=+35.489669163" Dec 13 01:19:27.198211 systemd[1]: Started sshd@9-10.0.0.160:22-10.0.0.1:42342.service - OpenSSH per-connection server daemon (10.0.0.1:42342). 
Dec 13 01:19:27.229105 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 42342 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:27.230767 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:27.234500 systemd-logind[1549]: New session 10 of user core. Dec 13 01:19:27.246257 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:19:27.361359 sshd[4160]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:27.365150 systemd[1]: sshd@9-10.0.0.160:22-10.0.0.1:42342.service: Deactivated successfully. Dec 13 01:19:27.367786 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:19:27.368549 systemd-logind[1549]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:19:27.369460 systemd-logind[1549]: Removed session 10. Dec 13 01:19:27.755945 kubelet[2738]: E1213 01:19:27.755923 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:27.756144 kubelet[2738]: E1213 01:19:27.755968 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:28.757979 kubelet[2738]: E1213 01:19:28.757938 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:28.758499 kubelet[2738]: E1213 01:19:28.758151 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:19:32.376217 systemd[1]: Started sshd@10-10.0.0.160:22-10.0.0.1:54080.service - OpenSSH per-connection server daemon (10.0.0.1:54080). Dec 13 01:19:32.403900 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 54080 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:32.405464 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:32.408792 systemd-logind[1549]: New session 11 of user core. Dec 13 01:19:32.416265 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:19:32.519858 sshd[4176]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:32.530221 systemd[1]: Started sshd@11-10.0.0.160:22-10.0.0.1:54090.service - OpenSSH per-connection server daemon (10.0.0.1:54090). Dec 13 01:19:32.530682 systemd[1]: sshd@10-10.0.0.160:22-10.0.0.1:54080.service: Deactivated successfully. Dec 13 01:19:32.533253 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:19:32.535828 systemd-logind[1549]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:19:32.536762 systemd-logind[1549]: Removed session 11. Dec 13 01:19:32.559671 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 54090 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:32.561170 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:32.564764 systemd-logind[1549]: New session 12 of user core. Dec 13 01:19:32.574237 systemd[1]: Started session-12.scope - Session 12 of User core. 
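
The SHA256:DNwV47… value in each Accepted-publickey line is the unpadded base64 SHA-256 of the client's public key blob; golang.org/x/crypto/ssh computes the identical string, which makes it easy to match a journal entry to a key file (the id_rsa.pub path below is illustrative):

package main

import (
    "fmt"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    raw, err := os.ReadFile(os.ExpandEnv("$HOME/.ssh/id_rsa.pub"))
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // Prints the same form sshd logs, e.g. SHA256:DNwV47…
    fmt.Println(ssh.FingerprintSHA256(pub))
}
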
Dec 13 01:19:32.707577 sshd[4189]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:32.710979 systemd[1]: sshd@11-10.0.0.160:22-10.0.0.1:54090.service: Deactivated successfully. Dec 13 01:19:32.714682 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:19:32.715506 systemd-logind[1549]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:19:32.723330 systemd[1]: Started sshd@12-10.0.0.160:22-10.0.0.1:54096.service - OpenSSH per-connection server daemon (10.0.0.1:54096). Dec 13 01:19:32.725382 systemd-logind[1549]: Removed session 12. Dec 13 01:19:32.754669 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 54096 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:32.756203 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:32.759937 systemd-logind[1549]: New session 13 of user core. Dec 13 01:19:32.765283 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:19:32.870710 sshd[4206]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:32.874747 systemd[1]: sshd@12-10.0.0.160:22-10.0.0.1:54096.service: Deactivated successfully. Dec 13 01:19:32.877393 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:19:32.878161 systemd-logind[1549]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:19:32.878980 systemd-logind[1549]: Removed session 13. Dec 13 01:19:37.882245 systemd[1]: Started sshd@13-10.0.0.160:22-10.0.0.1:54106.service - OpenSSH per-connection server daemon (10.0.0.1:54106). Dec 13 01:19:37.909989 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 54106 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:37.911381 sshd[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:37.914989 systemd-logind[1549]: New session 14 of user core. Dec 13 01:19:37.924240 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:19:38.029244 sshd[4223]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:38.034224 systemd[1]: sshd@13-10.0.0.160:22-10.0.0.1:54106.service: Deactivated successfully. Dec 13 01:19:38.036787 systemd-logind[1549]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:19:38.037064 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:19:38.038214 systemd-logind[1549]: Removed session 14. Dec 13 01:19:43.044264 systemd[1]: Started sshd@14-10.0.0.160:22-10.0.0.1:50446.service - OpenSSH per-connection server daemon (10.0.0.1:50446). Dec 13 01:19:43.071744 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 50446 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:43.073153 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:43.076736 systemd-logind[1549]: New session 15 of user core. Dec 13 01:19:43.083242 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:19:43.184090 sshd[4238]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:43.191315 systemd[1]: Started sshd@15-10.0.0.160:22-10.0.0.1:50454.service - OpenSSH per-connection server daemon (10.0.0.1:50454). Dec 13 01:19:43.192134 systemd[1]: sshd@14-10.0.0.160:22-10.0.0.1:50446.service: Deactivated successfully. Dec 13 01:19:43.194975 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:19:43.197303 systemd-logind[1549]: Session 15 logged out. Waiting for processes to exit. 
Dec 13 01:19:43.198369 systemd-logind[1549]: Removed session 15. Dec 13 01:19:43.221386 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 50454 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:43.222983 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:43.227072 systemd-logind[1549]: New session 16 of user core. Dec 13 01:19:43.237312 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:19:43.417747 sshd[4250]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:43.425265 systemd[1]: Started sshd@16-10.0.0.160:22-10.0.0.1:50470.service - OpenSSH per-connection server daemon (10.0.0.1:50470). Dec 13 01:19:43.426144 systemd[1]: sshd@15-10.0.0.160:22-10.0.0.1:50454.service: Deactivated successfully. Dec 13 01:19:43.429311 systemd-logind[1549]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:19:43.430203 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:19:43.431383 systemd-logind[1549]: Removed session 16. Dec 13 01:19:43.458846 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 50470 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:43.460459 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:43.464331 systemd-logind[1549]: New session 17 of user core. Dec 13 01:19:43.478253 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:19:44.743834 sshd[4263]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:44.752402 systemd[1]: Started sshd@17-10.0.0.160:22-10.0.0.1:50474.service - OpenSSH per-connection server daemon (10.0.0.1:50474). Dec 13 01:19:44.753768 systemd[1]: sshd@16-10.0.0.160:22-10.0.0.1:50470.service: Deactivated successfully. Dec 13 01:19:44.759310 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:19:44.760733 systemd-logind[1549]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:19:44.764842 systemd-logind[1549]: Removed session 17. Dec 13 01:19:44.784592 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 50474 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:44.786083 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:44.790066 systemd-logind[1549]: New session 18 of user core. Dec 13 01:19:44.801263 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:19:45.032654 sshd[4282]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:45.045309 systemd[1]: Started sshd@18-10.0.0.160:22-10.0.0.1:50484.service - OpenSSH per-connection server daemon (10.0.0.1:50484). Dec 13 01:19:45.046131 systemd[1]: sshd@17-10.0.0.160:22-10.0.0.1:50474.service: Deactivated successfully. Dec 13 01:19:45.049491 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:19:45.050607 systemd-logind[1549]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:19:45.051793 systemd-logind[1549]: Removed session 18. Dec 13 01:19:45.073145 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 50484 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:45.074799 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:45.078701 systemd-logind[1549]: New session 19 of user core. Dec 13 01:19:45.087253 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 13 01:19:45.197194 sshd[4298]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:45.200671 systemd[1]: sshd@18-10.0.0.160:22-10.0.0.1:50484.service: Deactivated successfully. Dec 13 01:19:45.204053 systemd-logind[1549]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:19:45.204744 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:19:45.205782 systemd-logind[1549]: Removed session 19. Dec 13 01:19:50.213208 systemd[1]: Started sshd@19-10.0.0.160:22-10.0.0.1:50776.service - OpenSSH per-connection server daemon (10.0.0.1:50776). Dec 13 01:19:50.240590 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 50776 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:50.242026 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:50.245579 systemd-logind[1549]: New session 20 of user core. Dec 13 01:19:50.255256 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:19:50.359432 sshd[4319]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:50.362804 systemd[1]: sshd@19-10.0.0.160:22-10.0.0.1:50776.service: Deactivated successfully. Dec 13 01:19:50.364926 systemd-logind[1549]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:19:50.365050 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:19:50.366002 systemd-logind[1549]: Removed session 20. Dec 13 01:19:55.371241 systemd[1]: Started sshd@20-10.0.0.160:22-10.0.0.1:50780.service - OpenSSH per-connection server daemon (10.0.0.1:50780). Dec 13 01:19:55.399798 sshd[4336]: Accepted publickey for core from 10.0.0.1 port 50780 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:19:55.401252 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:19:55.405022 systemd-logind[1549]: New session 21 of user core. Dec 13 01:19:55.412246 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:19:55.515385 sshd[4336]: pam_unix(sshd:session): session closed for user core Dec 13 01:19:55.519111 systemd[1]: sshd@20-10.0.0.160:22-10.0.0.1:50780.service: Deactivated successfully. Dec 13 01:19:55.521680 systemd-logind[1549]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:19:55.521775 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:19:55.523005 systemd-logind[1549]: Removed session 21. Dec 13 01:20:00.528273 systemd[1]: Started sshd@21-10.0.0.160:22-10.0.0.1:60980.service - OpenSSH per-connection server daemon (10.0.0.1:60980). Dec 13 01:20:00.556221 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 60980 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:20:00.557859 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:20:00.561679 systemd-logind[1549]: New session 22 of user core. Dec 13 01:20:00.568258 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:20:00.669792 sshd[4351]: pam_unix(sshd:session): session closed for user core Dec 13 01:20:00.673887 systemd[1]: sshd@21-10.0.0.160:22-10.0.0.1:60980.service: Deactivated successfully. Dec 13 01:20:00.676324 systemd-logind[1549]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:20:00.676380 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:20:00.677303 systemd-logind[1549]: Removed session 22. 
Dec 13 01:20:05.680221 systemd[1]: Started sshd@22-10.0.0.160:22-10.0.0.1:60984.service - OpenSSH per-connection server daemon (10.0.0.1:60984). Dec 13 01:20:05.708596 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 60984 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:20:05.710064 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:20:05.713630 systemd-logind[1549]: New session 23 of user core. Dec 13 01:20:05.723257 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:20:05.825707 sshd[4366]: pam_unix(sshd:session): session closed for user core Dec 13 01:20:05.832233 systemd[1]: Started sshd@23-10.0.0.160:22-10.0.0.1:32768.service - OpenSSH per-connection server daemon (10.0.0.1:32768). Dec 13 01:20:05.832693 systemd[1]: sshd@22-10.0.0.160:22-10.0.0.1:60984.service: Deactivated successfully. Dec 13 01:20:05.836141 systemd-logind[1549]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:20:05.836807 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:20:05.837597 systemd-logind[1549]: Removed session 23. Dec 13 01:20:05.861126 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 32768 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:20:05.862544 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:20:05.866245 systemd-logind[1549]: New session 24 of user core. Dec 13 01:20:05.875281 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:20:07.198993 containerd[1574]: time="2024-12-13T01:20:07.198934958Z" level=info msg="StopContainer for \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\" with timeout 30 (s)" Dec 13 01:20:07.199523 containerd[1574]: time="2024-12-13T01:20:07.199307316Z" level=info msg="Stop container \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\" with signal terminated" Dec 13 01:20:07.242793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14-rootfs.mount: Deactivated successfully. 
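
The StopContainer "with timeout 30 (s)" / "with signal terminated" pair above is the CRI stop flow: SIGTERM first, escalating to SIGKILL if the grace period (the pod's terminationGracePeriodSeconds, 30 s by default) lapses. A minimal sketch of the same call against containerd's socket, with the socket path and container ID taken from this log and error handling reduced to panics:

package main

import (
    "context"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    client := runtimeapi.NewRuntimeServiceClient(conn)
    ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    defer cancel()

    // The cilium-operator container stopped above.
    _, err = client.StopContainer(ctx, &runtimeapi.StopContainerRequest{
        ContainerId: "9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14",
        Timeout:     30, // seconds of SIGTERM grace before SIGKILL
    })
    if err != nil {
        panic(err)
    }
}
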
Dec 13 01:20:07.248349 containerd[1574]: time="2024-12-13T01:20:07.248279858Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:20:07.249643 containerd[1574]: time="2024-12-13T01:20:07.249608976Z" level=info msg="StopContainer for \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\" with timeout 2 (s)" Dec 13 01:20:07.249832 containerd[1574]: time="2024-12-13T01:20:07.249796082Z" level=info msg="Stop container \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\" with signal terminated" Dec 13 01:20:07.252341 containerd[1574]: time="2024-12-13T01:20:07.252276170Z" level=info msg="shim disconnected" id=9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14 namespace=k8s.io Dec 13 01:20:07.252341 containerd[1574]: time="2024-12-13T01:20:07.252325343Z" level=warning msg="cleaning up after shim disconnected" id=9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14 namespace=k8s.io Dec 13 01:20:07.252341 containerd[1574]: time="2024-12-13T01:20:07.252333428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:20:07.256784 systemd-networkd[1242]: lxc_health: Link DOWN Dec 13 01:20:07.256795 systemd-networkd[1242]: lxc_health: Lost carrier Dec 13 01:20:07.281563 containerd[1574]: time="2024-12-13T01:20:07.281508629Z" level=info msg="StopContainer for \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\" returns successfully" Dec 13 01:20:07.282229 containerd[1574]: time="2024-12-13T01:20:07.282182170Z" level=info msg="StopPodSandbox for \"c0b79900899aec540b7ca588202aa569508c8edb177bd57839e90f559d665e33\"" Dec 13 01:20:07.282229 containerd[1574]: time="2024-12-13T01:20:07.282226745Z" level=info msg="Container to stop \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:20:07.285249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0b79900899aec540b7ca588202aa569508c8edb177bd57839e90f559d665e33-shm.mount: Deactivated successfully. Dec 13 01:20:07.298069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870-rootfs.mount: Deactivated successfully. Dec 13 01:20:07.304029 containerd[1574]: time="2024-12-13T01:20:07.303946923Z" level=info msg="shim disconnected" id=637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870 namespace=k8s.io Dec 13 01:20:07.304029 containerd[1574]: time="2024-12-13T01:20:07.304022106Z" level=warning msg="cleaning up after shim disconnected" id=637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870 namespace=k8s.io Dec 13 01:20:07.304029 containerd[1574]: time="2024-12-13T01:20:07.304031665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:20:07.314622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0b79900899aec540b7ca588202aa569508c8edb177bd57839e90f559d665e33-rootfs.mount: Deactivated successfully. 
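
The failed-to-reload error at the top of this teardown is expected: containerd watches /etc/cni/net.d for filesystem changes and reloads on each one, and cilium's shutdown has just removed 05-cilium.conf, leaving nothing to load. A sketch of that watch-and-reload loop with github.com/fsnotify/fsnotify (containerd's own implementation differs in detail):

package main

import (
    "log"

    "github.com/fsnotify/fsnotify"
)

func main() {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        log.Fatal(err)
    }
    defer watcher.Close()

    if err := watcher.Add("/etc/cni/net.d"); err != nil {
        log.Fatal(err)
    }
    for {
        select {
        case ev := <-watcher.Events:
            log.Printf("fs change event(%s %q), reloading cni config", ev.Op, ev.Name)
            // Reload would happen here; with the directory empty this is
            // where "no network config found in /etc/cni/net.d" surfaces.
        case err := <-watcher.Errors:
            log.Println("watch error:", err)
        }
    }
}
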
Dec 13 01:20:07.317134 containerd[1574]: time="2024-12-13T01:20:07.316211475Z" level=info msg="shim disconnected" id=c0b79900899aec540b7ca588202aa569508c8edb177bd57839e90f559d665e33 namespace=k8s.io Dec 13 01:20:07.317134 containerd[1574]: time="2024-12-13T01:20:07.316982661Z" level=warning msg="cleaning up after shim disconnected" id=c0b79900899aec540b7ca588202aa569508c8edb177bd57839e90f559d665e33 namespace=k8s.io Dec 13 01:20:07.317134 containerd[1574]: time="2024-12-13T01:20:07.316993383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:20:07.319212 containerd[1574]: time="2024-12-13T01:20:07.319172809Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:20:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:20:07.323039 containerd[1574]: time="2024-12-13T01:20:07.322982796Z" level=info msg="StopContainer for \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\" returns successfully" Dec 13 01:20:07.323628 containerd[1574]: time="2024-12-13T01:20:07.323597275Z" level=info msg="StopPodSandbox for \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\"" Dec 13 01:20:07.323671 containerd[1574]: time="2024-12-13T01:20:07.323629376Z" level=info msg="Container to stop \"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:20:07.323671 containerd[1574]: time="2024-12-13T01:20:07.323642020Z" level=info msg="Container to stop \"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:20:07.323671 containerd[1574]: time="2024-12-13T01:20:07.323669102Z" level=info msg="Container to stop \"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:20:07.323743 containerd[1574]: time="2024-12-13T01:20:07.323679431Z" level=info msg="Container to stop \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:20:07.323743 containerd[1574]: time="2024-12-13T01:20:07.323689942Z" level=info msg="Container to stop \"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:20:07.331052 containerd[1574]: time="2024-12-13T01:20:07.330972946Z" level=info msg="TearDown network for sandbox \"c0b79900899aec540b7ca588202aa569508c8edb177bd57839e90f559d665e33\" successfully" Dec 13 01:20:07.331052 containerd[1574]: time="2024-12-13T01:20:07.330999997Z" level=info msg="StopPodSandbox for \"c0b79900899aec540b7ca588202aa569508c8edb177bd57839e90f559d665e33\" returns successfully" Dec 13 01:20:07.352948 containerd[1574]: time="2024-12-13T01:20:07.352886312Z" level=info msg="shim disconnected" id=df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82 namespace=k8s.io Dec 13 01:20:07.352948 containerd[1574]: time="2024-12-13T01:20:07.352939664Z" level=warning msg="cleaning up after shim disconnected" id=df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82 namespace=k8s.io Dec 13 01:20:07.352948 containerd[1574]: time="2024-12-13T01:20:07.352952738Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:20:07.369966 containerd[1574]: 
time="2024-12-13T01:20:07.369908504Z" level=info msg="TearDown network for sandbox \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" successfully" Dec 13 01:20:07.369966 containerd[1574]: time="2024-12-13T01:20:07.369950553Z" level=info msg="StopPodSandbox for \"df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82\" returns successfully" Dec 13 01:20:07.444067 kubelet[2738]: I1213 01:20:07.443991 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e49bc98-630b-42bd-8702-b4a80320402a-cilium-config-path\") pod \"1e49bc98-630b-42bd-8702-b4a80320402a\" (UID: \"1e49bc98-630b-42bd-8702-b4a80320402a\") " Dec 13 01:20:07.444549 kubelet[2738]: I1213 01:20:07.444085 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb9gt\" (UniqueName: \"kubernetes.io/projected/1e49bc98-630b-42bd-8702-b4a80320402a-kube-api-access-lb9gt\") pod \"1e49bc98-630b-42bd-8702-b4a80320402a\" (UID: \"1e49bc98-630b-42bd-8702-b4a80320402a\") " Dec 13 01:20:07.447292 kubelet[2738]: I1213 01:20:07.447261 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e49bc98-630b-42bd-8702-b4a80320402a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e49bc98-630b-42bd-8702-b4a80320402a" (UID: "1e49bc98-630b-42bd-8702-b4a80320402a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:20:07.448709 kubelet[2738]: I1213 01:20:07.448673 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e49bc98-630b-42bd-8702-b4a80320402a-kube-api-access-lb9gt" (OuterVolumeSpecName: "kube-api-access-lb9gt") pod "1e49bc98-630b-42bd-8702-b4a80320402a" (UID: "1e49bc98-630b-42bd-8702-b4a80320402a"). InnerVolumeSpecName "kube-api-access-lb9gt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:20:07.545048 kubelet[2738]: I1213 01:20:07.544918 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c63263af-e197-442c-b5a6-6b92d26110b3-hubble-tls\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545048 kubelet[2738]: I1213 01:20:07.544952 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-bpf-maps\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545048 kubelet[2738]: I1213 01:20:07.544976 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c63263af-e197-442c-b5a6-6b92d26110b3-clustermesh-secrets\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545048 kubelet[2738]: I1213 01:20:07.544997 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-xtables-lock\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545048 kubelet[2738]: I1213 01:20:07.545036 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-cgroup\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545257 kubelet[2738]: I1213 01:20:07.545062 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48js7\" (UniqueName: \"kubernetes.io/projected/c63263af-e197-442c-b5a6-6b92d26110b3-kube-api-access-48js7\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545257 kubelet[2738]: I1213 01:20:07.545081 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-host-proc-sys-kernel\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545257 kubelet[2738]: I1213 01:20:07.545099 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-etc-cni-netd\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545257 kubelet[2738]: I1213 01:20:07.545116 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-host-proc-sys-net\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545257 kubelet[2738]: I1213 01:20:07.545132 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-run\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: 
\"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545257 kubelet[2738]: I1213 01:20:07.545148 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-lib-modules\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545401 kubelet[2738]: I1213 01:20:07.545164 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-hostproc\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545401 kubelet[2738]: I1213 01:20:07.545181 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cni-path\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545401 kubelet[2738]: I1213 01:20:07.545199 2738 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-config-path\") pod \"c63263af-e197-442c-b5a6-6b92d26110b3\" (UID: \"c63263af-e197-442c-b5a6-6b92d26110b3\") " Dec 13 01:20:07.545401 kubelet[2738]: I1213 01:20:07.545233 2738 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e49bc98-630b-42bd-8702-b4a80320402a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.545401 kubelet[2738]: I1213 01:20:07.545247 2738 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lb9gt\" (UniqueName: \"kubernetes.io/projected/1e49bc98-630b-42bd-8702-b4a80320402a-kube-api-access-lb9gt\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.545401 kubelet[2738]: I1213 01:20:07.545371 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:20:07.545548 kubelet[2738]: I1213 01:20:07.545404 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:20:07.545548 kubelet[2738]: I1213 01:20:07.545423 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:20:07.548212 kubelet[2738]: I1213 01:20:07.547983 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63263af-e197-442c-b5a6-6b92d26110b3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:20:07.548212 kubelet[2738]: I1213 01:20:07.548070 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:20:07.548212 kubelet[2738]: I1213 01:20:07.548092 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:20:07.548212 kubelet[2738]: I1213 01:20:07.548110 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:20:07.548212 kubelet[2738]: I1213 01:20:07.548130 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-hostproc" (OuterVolumeSpecName: "hostproc") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:20:07.548374 kubelet[2738]: I1213 01:20:07.548148 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cni-path" (OuterVolumeSpecName: "cni-path") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:20:07.548374 kubelet[2738]: I1213 01:20:07.548165 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:20:07.548539 kubelet[2738]: I1213 01:20:07.548198 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:20:07.548608 kubelet[2738]: I1213 01:20:07.548566 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:20:07.548956 kubelet[2738]: I1213 01:20:07.548915 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63263af-e197-442c-b5a6-6b92d26110b3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:20:07.549330 kubelet[2738]: I1213 01:20:07.549306 2738 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63263af-e197-442c-b5a6-6b92d26110b3-kube-api-access-48js7" (OuterVolumeSpecName: "kube-api-access-48js7") pod "c63263af-e197-442c-b5a6-6b92d26110b3" (UID: "c63263af-e197-442c-b5a6-6b92d26110b3"). InnerVolumeSpecName "kube-api-access-48js7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:20:07.645629 kubelet[2738]: I1213 01:20:07.645603 2738 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c63263af-e197-442c-b5a6-6b92d26110b3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645629 kubelet[2738]: I1213 01:20:07.645624 2738 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645709 kubelet[2738]: I1213 01:20:07.645635 2738 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645709 kubelet[2738]: I1213 01:20:07.645646 2738 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-48js7\" (UniqueName: \"kubernetes.io/projected/c63263af-e197-442c-b5a6-6b92d26110b3-kube-api-access-48js7\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645709 kubelet[2738]: I1213 01:20:07.645656 2738 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645709 kubelet[2738]: I1213 01:20:07.645668 2738 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645709 kubelet[2738]: I1213 01:20:07.645677 2738 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645709 kubelet[2738]: I1213 01:20:07.645686 2738 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645709 kubelet[2738]: I1213 01:20:07.645695 2738 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645709 kubelet[2738]: I1213 01:20:07.645705 2738 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645879 kubelet[2738]: I1213 01:20:07.645714 2738 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645879 kubelet[2738]: I1213 01:20:07.645724 2738 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c63263af-e197-442c-b5a6-6b92d26110b3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645879 kubelet[2738]: I1213 01:20:07.645733 2738 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c63263af-e197-442c-b5a6-6b92d26110b3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.645879 kubelet[2738]: I1213 01:20:07.645742 2738 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c63263af-e197-442c-b5a6-6b92d26110b3-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 01:20:07.824064 kubelet[2738]: I1213 01:20:07.824037 2738 scope.go:117] "RemoveContainer" containerID="637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870" Dec 13 01:20:07.826128 containerd[1574]: time="2024-12-13T01:20:07.826090988Z" level=info msg="RemoveContainer for \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\"" Dec 13 01:20:07.829689 containerd[1574]: time="2024-12-13T01:20:07.829652713Z" level=info msg="RemoveContainer for \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\" returns successfully" Dec 13 01:20:07.829973 kubelet[2738]: I1213 01:20:07.829948 2738 scope.go:117] "RemoveContainer" containerID="9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a" Dec 13 01:20:07.830820 containerd[1574]: time="2024-12-13T01:20:07.830758607Z" level=info msg="RemoveContainer for \"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a\"" Dec 13 01:20:07.834205 containerd[1574]: time="2024-12-13T01:20:07.834079695Z" level=info msg="RemoveContainer for \"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a\" returns successfully" Dec 13 01:20:07.834606 kubelet[2738]: I1213 01:20:07.834286 2738 scope.go:117] "RemoveContainer" containerID="674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c" Dec 13 01:20:07.835405 containerd[1574]: time="2024-12-13T01:20:07.835372815Z" level=info msg="RemoveContainer for \"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c\"" Dec 13 01:20:07.846889 containerd[1574]: time="2024-12-13T01:20:07.846851050Z" level=info msg="RemoveContainer for \"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c\" returns successfully" Dec 13 01:20:07.847058 kubelet[2738]: I1213 01:20:07.847031 2738 scope.go:117] "RemoveContainer" 
containerID="4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905" Dec 13 01:20:07.847983 containerd[1574]: time="2024-12-13T01:20:07.847952125Z" level=info msg="RemoveContainer for \"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905\"" Dec 13 01:20:07.864347 containerd[1574]: time="2024-12-13T01:20:07.864319000Z" level=info msg="RemoveContainer for \"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905\" returns successfully" Dec 13 01:20:07.864549 kubelet[2738]: I1213 01:20:07.864472 2738 scope.go:117] "RemoveContainer" containerID="3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369" Dec 13 01:20:07.865352 containerd[1574]: time="2024-12-13T01:20:07.865319112Z" level=info msg="RemoveContainer for \"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369\"" Dec 13 01:20:07.868179 containerd[1574]: time="2024-12-13T01:20:07.868153203Z" level=info msg="RemoveContainer for \"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369\" returns successfully" Dec 13 01:20:07.868332 kubelet[2738]: I1213 01:20:07.868302 2738 scope.go:117] "RemoveContainer" containerID="637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870" Dec 13 01:20:07.868568 containerd[1574]: time="2024-12-13T01:20:07.868530210Z" level=error msg="ContainerStatus for \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\": not found" Dec 13 01:20:07.868697 kubelet[2738]: E1213 01:20:07.868678 2738 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\": not found" containerID="637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870" Dec 13 01:20:07.868777 kubelet[2738]: I1213 01:20:07.868762 2738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870"} err="failed to get container status \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\": rpc error: code = NotFound desc = an error occurred when try to find container \"637e2f9efd39b86e4159862c9b39d461025c2ee89beea65ba51cb92c3702f870\": not found" Dec 13 01:20:07.868806 kubelet[2738]: I1213 01:20:07.868778 2738 scope.go:117] "RemoveContainer" containerID="9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a" Dec 13 01:20:07.868931 containerd[1574]: time="2024-12-13T01:20:07.868897599Z" level=error msg="ContainerStatus for \"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a\": not found" Dec 13 01:20:07.869105 kubelet[2738]: E1213 01:20:07.869088 2738 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a\": not found" containerID="9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a" Dec 13 01:20:07.869144 kubelet[2738]: I1213 01:20:07.869124 2738 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a"} err="failed to get container status \"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9151eedeba5e087988fb2459e0a7260f639184de6ff94a3622a21f0ae5dd054a\": not found" Dec 13 01:20:07.869144 kubelet[2738]: I1213 01:20:07.869135 2738 scope.go:117] "RemoveContainer" containerID="674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c" Dec 13 01:20:07.869311 containerd[1574]: time="2024-12-13T01:20:07.869267713Z" level=error msg="ContainerStatus for \"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c\": not found" Dec 13 01:20:07.869391 kubelet[2738]: E1213 01:20:07.869375 2738 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c\": not found" containerID="674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c" Dec 13 01:20:07.869428 kubelet[2738]: I1213 01:20:07.869400 2738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c"} err="failed to get container status \"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"674a12da19b58bd6436f6bd0e7043070d4e4c9407be2ad79873d0fb91eee3b7c\": not found" Dec 13 01:20:07.869428 kubelet[2738]: I1213 01:20:07.869410 2738 scope.go:117] "RemoveContainer" containerID="4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905" Dec 13 01:20:07.869599 containerd[1574]: time="2024-12-13T01:20:07.869563315Z" level=error msg="ContainerStatus for \"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905\": not found" Dec 13 01:20:07.869686 kubelet[2738]: E1213 01:20:07.869669 2738 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905\": not found" containerID="4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905" Dec 13 01:20:07.869728 kubelet[2738]: I1213 01:20:07.869691 2738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905"} err="failed to get container status \"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905\": rpc error: code = NotFound desc = an error occurred when try to find container \"4529c480fbb08624d116318dd0249e9f31de4746716034f721abab60c4704905\": not found" Dec 13 01:20:07.869728 kubelet[2738]: I1213 01:20:07.869703 2738 scope.go:117] "RemoveContainer" containerID="3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369" Dec 13 01:20:07.869929 containerd[1574]: time="2024-12-13T01:20:07.869893454Z" level=error msg="ContainerStatus for 
\"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369\": not found" Dec 13 01:20:07.870079 kubelet[2738]: E1213 01:20:07.870056 2738 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369\": not found" containerID="3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369" Dec 13 01:20:07.870118 kubelet[2738]: I1213 01:20:07.870093 2738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369"} err="failed to get container status \"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d7edd8c658ad8dd5d39ce5ab6e2614e6626be17487fe5ba38135e3f0feda369\": not found" Dec 13 01:20:07.870118 kubelet[2738]: I1213 01:20:07.870106 2738 scope.go:117] "RemoveContainer" containerID="9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14" Dec 13 01:20:07.870882 containerd[1574]: time="2024-12-13T01:20:07.870856255Z" level=info msg="RemoveContainer for \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\"" Dec 13 01:20:07.873627 containerd[1574]: time="2024-12-13T01:20:07.873600185Z" level=info msg="RemoveContainer for \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\" returns successfully" Dec 13 01:20:07.873778 kubelet[2738]: I1213 01:20:07.873748 2738 scope.go:117] "RemoveContainer" containerID="9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14" Dec 13 01:20:07.873952 containerd[1574]: time="2024-12-13T01:20:07.873922168Z" level=error msg="ContainerStatus for \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\": not found" Dec 13 01:20:07.874056 kubelet[2738]: E1213 01:20:07.874040 2738 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\": not found" containerID="9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14" Dec 13 01:20:07.874108 kubelet[2738]: I1213 01:20:07.874067 2738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14"} err="failed to get container status \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\": rpc error: code = NotFound desc = an error occurred when try to find container \"9204ada814c70f48d00fbef6d4c9e7aadec773a141835dd4e4a3cb5753d24e14\": not found" Dec 13 01:20:08.226303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82-rootfs.mount: Deactivated successfully. Dec 13 01:20:08.226505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df8293fa580c74d1edd5976631a49169217707755e61fb9b4754e487dec12e82-shm.mount: Deactivated successfully. 
Dec 13 01:20:08.226649 systemd[1]: var-lib-kubelet-pods-1e49bc98\x2d630b\x2d42bd\x2d8702\x2db4a80320402a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlb9gt.mount: Deactivated successfully. Dec 13 01:20:08.226793 systemd[1]: var-lib-kubelet-pods-c63263af\x2de197\x2d442c\x2db5a6\x2d6b92d26110b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d48js7.mount: Deactivated successfully. Dec 13 01:20:08.226939 systemd[1]: var-lib-kubelet-pods-c63263af\x2de197\x2d442c\x2db5a6\x2d6b92d26110b3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:20:08.227199 systemd[1]: var-lib-kubelet-pods-c63263af\x2de197\x2d442c\x2db5a6\x2d6b92d26110b3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:20:08.371078 kubelet[2738]: E1213 01:20:08.371044 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:09.167196 sshd[4378]: pam_unix(sshd:session): session closed for user core Dec 13 01:20:09.178212 systemd[1]: Started sshd@24-10.0.0.160:22-10.0.0.1:54950.service - OpenSSH per-connection server daemon (10.0.0.1:54950). Dec 13 01:20:09.178682 systemd[1]: sshd@23-10.0.0.160:22-10.0.0.1:32768.service: Deactivated successfully. Dec 13 01:20:09.182625 systemd-logind[1549]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:20:09.183301 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:20:09.184276 systemd-logind[1549]: Removed session 24. Dec 13 01:20:09.210203 sshd[4544]: Accepted publickey for core from 10.0.0.1 port 54950 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:20:09.211632 sshd[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:20:09.215339 systemd-logind[1549]: New session 25 of user core. Dec 13 01:20:09.224251 systemd[1]: Started session-25.scope - Session 25 of User core. 
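The mount unit names above (var-lib-kubelet-pods-...\x2dvolumes-kubernetes.io\x7eprojected-...) are the kubelet volume paths run through systemd's unit-name escaping: "/" becomes "-", and bytes outside [A-Za-z0-9:_.] (including "-" itself and "~") become \xNN. A simplified Go sketch of that escaping; the full rules (systemd-escape --path) also strip the leading "/" and escape a leading ".".

package main

import (
	"fmt"
	"strings"
)

// escapePath applies systemd-style unit-name escaping to a path
// (simplified: assumes no leading "/" or ".").
func escapePath(p string) string {
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' ||
			c >= '0' && c <= '9' || c == ':' || c == '_' || c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // e.g. '-' -> \x2d, '~' -> \x7e
		}
	}
	return b.String()
}

func main() {
	p := "var/lib/kubelet/pods/c63263af-e197-442c-b5a6-6b92d26110b3/volumes/kubernetes.io~projected/hubble-tls"
	fmt.Println(escapePath(p) + ".mount")
}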
Dec 13 01:20:09.373632 kubelet[2738]: I1213 01:20:09.373591 2738 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1e49bc98-630b-42bd-8702-b4a80320402a" path="/var/lib/kubelet/pods/1e49bc98-630b-42bd-8702-b4a80320402a/volumes" Dec 13 01:20:09.374214 kubelet[2738]: I1213 01:20:09.374190 2738 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c63263af-e197-442c-b5a6-6b92d26110b3" path="/var/lib/kubelet/pods/c63263af-e197-442c-b5a6-6b92d26110b3/volumes" Dec 13 01:20:09.880510 sshd[4544]: pam_unix(sshd:session): session closed for user core Dec 13 01:20:09.892459 kubelet[2738]: I1213 01:20:09.887938 2738 topology_manager.go:215] "Topology Admit Handler" podUID="3d0c8bdd-05ea-4d1e-828e-576313c29620" podNamespace="kube-system" podName="cilium-27dd7" Dec 13 01:20:09.892459 kubelet[2738]: E1213 01:20:09.887997 2738 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c63263af-e197-442c-b5a6-6b92d26110b3" containerName="apply-sysctl-overwrites" Dec 13 01:20:09.892459 kubelet[2738]: E1213 01:20:09.888006 2738 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c63263af-e197-442c-b5a6-6b92d26110b3" containerName="mount-bpf-fs" Dec 13 01:20:09.892459 kubelet[2738]: E1213 01:20:09.888029 2738 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c63263af-e197-442c-b5a6-6b92d26110b3" containerName="clean-cilium-state" Dec 13 01:20:09.892459 kubelet[2738]: E1213 01:20:09.888037 2738 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c63263af-e197-442c-b5a6-6b92d26110b3" containerName="mount-cgroup" Dec 13 01:20:09.892459 kubelet[2738]: E1213 01:20:09.888044 2738 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e49bc98-630b-42bd-8702-b4a80320402a" containerName="cilium-operator" Dec 13 01:20:09.892459 kubelet[2738]: E1213 01:20:09.888050 2738 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c63263af-e197-442c-b5a6-6b92d26110b3" containerName="cilium-agent" Dec 13 01:20:09.892459 kubelet[2738]: I1213 01:20:09.888070 2738 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63263af-e197-442c-b5a6-6b92d26110b3" containerName="cilium-agent" Dec 13 01:20:09.892459 kubelet[2738]: I1213 01:20:09.888077 2738 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e49bc98-630b-42bd-8702-b4a80320402a" containerName="cilium-operator" Dec 13 01:20:09.895307 systemd[1]: Started sshd@25-10.0.0.160:22-10.0.0.1:54960.service - OpenSSH per-connection server daemon (10.0.0.1:54960). Dec 13 01:20:09.895813 systemd[1]: sshd@24-10.0.0.160:22-10.0.0.1:54950.service: Deactivated successfully. Dec 13 01:20:09.910995 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:20:09.912291 systemd-logind[1549]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:20:09.914240 systemd-logind[1549]: Removed session 25. Dec 13 01:20:09.941082 sshd[4558]: Accepted publickey for core from 10.0.0.1 port 54960 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:20:09.942532 sshd[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:20:09.946763 systemd-logind[1549]: New session 26 of user core. Dec 13 01:20:09.961252 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:20:10.011326 sshd[4558]: pam_unix(sshd:session): session closed for user core Dec 13 01:20:10.019240 systemd[1]: Started sshd@26-10.0.0.160:22-10.0.0.1:54972.service - OpenSSH per-connection server daemon (10.0.0.1:54972). 
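The Topology Admit Handler and RemoveStaleState entries above show the kubelet admitting the replacement pod cilium-27dd7 and, as part of admission, discarding CPU- and memory-manager state still keyed to the deleted pods' containers. A toy sketch of that stale-state sweep, using assumed types (a map keyed by pod UID) rather than the kubelet's real ones.

package main

import "fmt"

// removeStaleState drops resource assignments whose pod UID is no longer active.
func removeStaleState(assignments map[string]map[string]string, active map[string]bool) {
	for podUID, containers := range assignments {
		if active[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(assignments, podUID) // deleting during range is safe in Go
	}
}

func main() {
	assignments := map[string]map[string]string{
		"c63263af-e197-442c-b5a6-6b92d26110b3": {"cilium-agent": "cpuset=0-1"},
		"3d0c8bdd-05ea-4d1e-828e-576313c29620": {"mount-cgroup": "cpuset=0"},
	}
	active := map[string]bool{"3d0c8bdd-05ea-4d1e-828e-576313c29620": true}
	removeStaleState(assignments, active)
}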
Dec 13 01:20:10.019706 systemd[1]: sshd@25-10.0.0.160:22-10.0.0.1:54960.service: Deactivated successfully. Dec 13 01:20:10.023280 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:20:10.024300 systemd-logind[1549]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:20:10.025123 systemd-logind[1549]: Removed session 26. Dec 13 01:20:10.047042 sshd[4567]: Accepted publickey for core from 10.0.0.1 port 54972 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:20:10.048691 sshd[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:20:10.053618 systemd-logind[1549]: New session 27 of user core. Dec 13 01:20:10.058053 kubelet[2738]: I1213 01:20:10.058005 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d0c8bdd-05ea-4d1e-828e-576313c29620-hostproc\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058190 kubelet[2738]: I1213 01:20:10.058072 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3d0c8bdd-05ea-4d1e-828e-576313c29620-cilium-ipsec-secrets\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058190 kubelet[2738]: I1213 01:20:10.058095 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdhxf\" (UniqueName: \"kubernetes.io/projected/3d0c8bdd-05ea-4d1e-828e-576313c29620-kube-api-access-qdhxf\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058190 kubelet[2738]: I1213 01:20:10.058117 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d0c8bdd-05ea-4d1e-828e-576313c29620-cilium-cgroup\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058190 kubelet[2738]: I1213 01:20:10.058136 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d0c8bdd-05ea-4d1e-828e-576313c29620-etc-cni-netd\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058275 kubelet[2738]: I1213 01:20:10.058201 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d0c8bdd-05ea-4d1e-828e-576313c29620-host-proc-sys-net\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058275 kubelet[2738]: I1213 01:20:10.058257 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d0c8bdd-05ea-4d1e-828e-576313c29620-host-proc-sys-kernel\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058359 kubelet[2738]: I1213 01:20:10.058320 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/3d0c8bdd-05ea-4d1e-828e-576313c29620-bpf-maps\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058383 kubelet[2738]: I1213 01:20:10.058369 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d0c8bdd-05ea-4d1e-828e-576313c29620-cilium-config-path\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058410 kubelet[2738]: I1213 01:20:10.058389 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d0c8bdd-05ea-4d1e-828e-576313c29620-hubble-tls\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058532 kubelet[2738]: I1213 01:20:10.058509 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d0c8bdd-05ea-4d1e-828e-576313c29620-cilium-run\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058563 kubelet[2738]: I1213 01:20:10.058536 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d0c8bdd-05ea-4d1e-828e-576313c29620-cni-path\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058563 kubelet[2738]: I1213 01:20:10.058555 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d0c8bdd-05ea-4d1e-828e-576313c29620-xtables-lock\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058654 kubelet[2738]: I1213 01:20:10.058621 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d0c8bdd-05ea-4d1e-828e-576313c29620-clustermesh-secrets\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.058704 kubelet[2738]: I1213 01:20:10.058689 2738 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d0c8bdd-05ea-4d1e-828e-576313c29620-lib-modules\") pod \"cilium-27dd7\" (UID: \"3d0c8bdd-05ea-4d1e-828e-576313c29620\") " pod="kube-system/cilium-27dd7" Dec 13 01:20:10.059286 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:20:10.205793 kubelet[2738]: E1213 01:20:10.205685 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:10.206305 containerd[1574]: time="2024-12-13T01:20:10.206277127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-27dd7,Uid:3d0c8bdd-05ea-4d1e-828e-576313c29620,Namespace:kube-system,Attempt:0,}" Dec 13 01:20:10.226238 containerd[1574]: time="2024-12-13T01:20:10.226123291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:20:10.226238 containerd[1574]: time="2024-12-13T01:20:10.226181220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:20:10.226238 containerd[1574]: time="2024-12-13T01:20:10.226195759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:20:10.226384 containerd[1574]: time="2024-12-13T01:20:10.226284657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:20:10.263283 containerd[1574]: time="2024-12-13T01:20:10.263235632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-27dd7,Uid:3d0c8bdd-05ea-4d1e-828e-576313c29620,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa9d81bf73088ee332b3836b72fd15f7d06b1b52eb1a36f348c5f3e184f18986\"" Dec 13 01:20:10.263901 kubelet[2738]: E1213 01:20:10.263871 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:10.266236 containerd[1574]: time="2024-12-13T01:20:10.266194127Z" level=info msg="CreateContainer within sandbox \"aa9d81bf73088ee332b3836b72fd15f7d06b1b52eb1a36f348c5f3e184f18986\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:20:10.279459 containerd[1574]: time="2024-12-13T01:20:10.279410474Z" level=info msg="CreateContainer within sandbox \"aa9d81bf73088ee332b3836b72fd15f7d06b1b52eb1a36f348c5f3e184f18986\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e16f05045d3c3636b699e1ba423ee94023c20dce23d70adc6a0261d69e2fbbc3\"" Dec 13 01:20:10.280564 containerd[1574]: time="2024-12-13T01:20:10.279901558Z" level=info msg="StartContainer for \"e16f05045d3c3636b699e1ba423ee94023c20dce23d70adc6a0261d69e2fbbc3\"" Dec 13 01:20:10.332633 containerd[1574]: time="2024-12-13T01:20:10.332591077Z" level=info msg="StartContainer for \"e16f05045d3c3636b699e1ba423ee94023c20dce23d70adc6a0261d69e2fbbc3\" returns successfully" Dec 13 01:20:10.373909 containerd[1574]: time="2024-12-13T01:20:10.373852827Z" level=info msg="shim disconnected" id=e16f05045d3c3636b699e1ba423ee94023c20dce23d70adc6a0261d69e2fbbc3 namespace=k8s.io Dec 13 01:20:10.373909 containerd[1574]: time="2024-12-13T01:20:10.373906279Z" level=warning msg="cleaning up after shim disconnected" id=e16f05045d3c3636b699e1ba423ee94023c20dce23d70adc6a0261d69e2fbbc3 namespace=k8s.io Dec 13 01:20:10.374108 containerd[1574]: time="2024-12-13T01:20:10.373915366Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:20:10.833920 kubelet[2738]: E1213 01:20:10.833891 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:10.836027 containerd[1574]: time="2024-12-13T01:20:10.835765247Z" level=info msg="CreateContainer within sandbox \"aa9d81bf73088ee332b3836b72fd15f7d06b1b52eb1a36f348c5f3e184f18986\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:20:10.848100 containerd[1574]: time="2024-12-13T01:20:10.848057408Z" level=info msg="CreateContainer within sandbox \"aa9d81bf73088ee332b3836b72fd15f7d06b1b52eb1a36f348c5f3e184f18986\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns 
container id \"9020e5e4b83fa369b09c97feccbb612dca1aaa443fd2605c9fcef45a69f4ed5f\"" Dec 13 01:20:10.848547 containerd[1574]: time="2024-12-13T01:20:10.848512974Z" level=info msg="StartContainer for \"9020e5e4b83fa369b09c97feccbb612dca1aaa443fd2605c9fcef45a69f4ed5f\"" Dec 13 01:20:10.896590 containerd[1574]: time="2024-12-13T01:20:10.896546139Z" level=info msg="StartContainer for \"9020e5e4b83fa369b09c97feccbb612dca1aaa443fd2605c9fcef45a69f4ed5f\" returns successfully" Dec 13 01:20:10.922763 containerd[1574]: time="2024-12-13T01:20:10.922700105Z" level=info msg="shim disconnected" id=9020e5e4b83fa369b09c97feccbb612dca1aaa443fd2605c9fcef45a69f4ed5f namespace=k8s.io Dec 13 01:20:10.922763 containerd[1574]: time="2024-12-13T01:20:10.922754278Z" level=warning msg="cleaning up after shim disconnected" id=9020e5e4b83fa369b09c97feccbb612dca1aaa443fd2605c9fcef45a69f4ed5f namespace=k8s.io Dec 13 01:20:10.922763 containerd[1574]: time="2024-12-13T01:20:10.922762945Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:20:11.425369 kubelet[2738]: E1213 01:20:11.425341 2738 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:20:11.838475 kubelet[2738]: E1213 01:20:11.838454 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:11.840681 containerd[1574]: time="2024-12-13T01:20:11.840606041Z" level=info msg="CreateContainer within sandbox \"aa9d81bf73088ee332b3836b72fd15f7d06b1b52eb1a36f348c5f3e184f18986\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:20:11.859384 containerd[1574]: time="2024-12-13T01:20:11.859338581Z" level=info msg="CreateContainer within sandbox \"aa9d81bf73088ee332b3836b72fd15f7d06b1b52eb1a36f348c5f3e184f18986\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bcd54a629e88f37be3c56ea8c9742b20e6a76f5248d1b568df47312e8ad9bf86\"" Dec 13 01:20:11.859832 containerd[1574]: time="2024-12-13T01:20:11.859801622Z" level=info msg="StartContainer for \"bcd54a629e88f37be3c56ea8c9742b20e6a76f5248d1b568df47312e8ad9bf86\"" Dec 13 01:20:11.911553 containerd[1574]: time="2024-12-13T01:20:11.911515759Z" level=info msg="StartContainer for \"bcd54a629e88f37be3c56ea8c9742b20e6a76f5248d1b568df47312e8ad9bf86\" returns successfully" Dec 13 01:20:11.937901 containerd[1574]: time="2024-12-13T01:20:11.937837886Z" level=info msg="shim disconnected" id=bcd54a629e88f37be3c56ea8c9742b20e6a76f5248d1b568df47312e8ad9bf86 namespace=k8s.io Dec 13 01:20:11.937901 containerd[1574]: time="2024-12-13T01:20:11.937897610Z" level=warning msg="cleaning up after shim disconnected" id=bcd54a629e88f37be3c56ea8c9742b20e6a76f5248d1b568df47312e8ad9bf86 namespace=k8s.io Dec 13 01:20:11.938103 containerd[1574]: time="2024-12-13T01:20:11.937906286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:20:12.166530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcd54a629e88f37be3c56ea8c9742b20e6a76f5248d1b568df47312e8ad9bf86-rootfs.mount: Deactivated successfully. 
Dec 13 01:20:12.842453 kubelet[2738]: E1213 01:20:12.842430 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:12.844310 containerd[1574]: time="2024-12-13T01:20:12.844253010Z" level=info msg="CreateContainer within sandbox \"aa9d81bf73088ee332b3836b72fd15f7d06b1b52eb1a36f348c5f3e184f18986\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:20:12.857144 containerd[1574]: time="2024-12-13T01:20:12.857098084Z" level=info msg="CreateContainer within sandbox \"aa9d81bf73088ee332b3836b72fd15f7d06b1b52eb1a36f348c5f3e184f18986\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab818fd7693bffe06e052447628af6ce262bf7d8d26b005b6db29e739482fbee\"" Dec 13 01:20:12.857582 containerd[1574]: time="2024-12-13T01:20:12.857555103Z" level=info msg="StartContainer for \"ab818fd7693bffe06e052447628af6ce262bf7d8d26b005b6db29e739482fbee\"" Dec 13 01:20:12.906025 containerd[1574]: time="2024-12-13T01:20:12.905974291Z" level=info msg="StartContainer for \"ab818fd7693bffe06e052447628af6ce262bf7d8d26b005b6db29e739482fbee\" returns successfully" Dec 13 01:20:12.925238 containerd[1574]: time="2024-12-13T01:20:12.925184463Z" level=info msg="shim disconnected" id=ab818fd7693bffe06e052447628af6ce262bf7d8d26b005b6db29e739482fbee namespace=k8s.io Dec 13 01:20:12.925238 containerd[1574]: time="2024-12-13T01:20:12.925237323Z" level=warning msg="cleaning up after shim disconnected" id=ab818fd7693bffe06e052447628af6ce262bf7d8d26b005b6db29e739482fbee namespace=k8s.io Dec 13 01:20:12.925455 containerd[1574]: time="2024-12-13T01:20:12.925246732Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:20:13.166169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab818fd7693bffe06e052447628af6ce262bf7d8d26b005b6db29e739482fbee-rootfs.mount: Deactivated successfully. Dec 13 01:20:13.551330 kubelet[2738]: I1213 01:20:13.551227 2738 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:20:13Z","lastTransitionTime":"2024-12-13T01:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 01:20:13.846848 kubelet[2738]: E1213 01:20:13.846704 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:13.853959 containerd[1574]: time="2024-12-13T01:20:13.853902862Z" level=info msg="CreateContainer within sandbox \"aa9d81bf73088ee332b3836b72fd15f7d06b1b52eb1a36f348c5f3e184f18986\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:20:13.865712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309867067.mount: Deactivated successfully. 
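The "Node became not ready" entry above is the expected side effect of the CNI gap: with no config in /etc/cni/net.d yet, the kubelet publishes NodeReady=False with reason KubeletNotReady until the new agent initializes. A sketch of reading that condition with the k8s.io/api types; nodeReady is an illustrative helper, not kubelet code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(conds []corev1.NodeCondition) bool {
	for _, c := range conds {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	conds := []corev1.NodeCondition{{
		Type:   corev1.NodeReady,
		Status: corev1.ConditionFalse,
		Reason: "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized",
	}}
	fmt.Println("node ready:", nodeReady(conds))
}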
Dec 13 01:20:13.870117 containerd[1574]: time="2024-12-13T01:20:13.870081036Z" level=info msg="CreateContainer within sandbox \"aa9d81bf73088ee332b3836b72fd15f7d06b1b52eb1a36f348c5f3e184f18986\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"86d0afe0791e801f70bec98777d8c7a54bcf2f6357a706089925ba5148f6b8e6\"" Dec 13 01:20:13.870619 containerd[1574]: time="2024-12-13T01:20:13.870583461Z" level=info msg="StartContainer for \"86d0afe0791e801f70bec98777d8c7a54bcf2f6357a706089925ba5148f6b8e6\"" Dec 13 01:20:13.922631 containerd[1574]: time="2024-12-13T01:20:13.922596033Z" level=info msg="StartContainer for \"86d0afe0791e801f70bec98777d8c7a54bcf2f6357a706089925ba5148f6b8e6\" returns successfully" Dec 13 01:20:14.307048 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 01:20:14.339041 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9 Dec 13 01:20:14.370050 kernel: DRBG: Continuing without Jitter RNG Dec 13 01:20:14.850971 kubelet[2738]: E1213 01:20:14.850934 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:16.207163 kubelet[2738]: E1213 01:20:16.207133 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:17.337377 systemd-networkd[1242]: lxc_health: Link UP Dec 13 01:20:17.347440 systemd-networkd[1242]: lxc_health: Gained carrier Dec 13 01:20:18.207732 kubelet[2738]: E1213 01:20:18.207695 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:18.219757 kubelet[2738]: I1213 01:20:18.219714 2738 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-27dd7" podStartSLOduration=9.219676126 podStartE2EDuration="9.219676126s" podCreationTimestamp="2024-12-13 01:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:20:14.862240361 +0000 UTC m=+83.584766748" watchObservedRunningTime="2024-12-13 01:20:18.219676126 +0000 UTC m=+86.942202513" Dec 13 01:20:18.858433 kubelet[2738]: E1213 01:20:18.858401 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:19.212252 systemd-networkd[1242]: lxc_health: Gained IPv6LL Dec 13 01:20:19.859455 kubelet[2738]: E1213 01:20:19.859420 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:22.371913 kubelet[2738]: E1213 01:20:22.371875 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:20:22.676443 sshd[4567]: pam_unix(sshd:session): session closed for user core Dec 13 01:20:22.680699 systemd[1]: sshd@26-10.0.0.160:22-10.0.0.1:54972.service: Deactivated successfully. Dec 13 01:20:22.683057 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:20:22.683687 systemd-logind[1549]: Session 27 logged out. 
Waiting for processes to exit. Dec 13 01:20:22.684532 systemd-logind[1549]: Removed session 27. Dec 13 01:20:23.371707 kubelet[2738]: E1213 01:20:23.371683 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
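The recurring "Nameserver limits exceeded" warnings throughout this stretch come from glibc's resolv.conf limit of three nameservers: the resolver list is longer than that, so the kubelet trims it and logs the line it actually applied. A sketch of that trim; the fourth server (8.8.4.4) is hypothetical, since the dropped entry never appears in the log.

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

// trimNameservers caps the resolver list at the glibc limit.
func trimNameservers(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	fmt.Println("the applied nameserver line is:", strings.Join(trimNameservers(ns), " "))
}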