Dec 13 01:11:29.909110 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:11:29.909132 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:11:29.909143 kernel: BIOS-provided physical RAM map:
Dec 13 01:11:29.909149 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:11:29.909155 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:11:29.909161 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:11:29.909168 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 01:11:29.909175 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 01:11:29.909181 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:11:29.909190 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:11:29.909196 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:11:29.909202 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:11:29.909208 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 01:11:29.909214 kernel: NX (Execute Disable) protection: active
Dec 13 01:11:29.909222 kernel: APIC: Static calls initialized
Dec 13 01:11:29.909231 kernel: SMBIOS 2.8 present.
Dec 13 01:11:29.909238 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 01:11:29.909244 kernel: Hypervisor detected: KVM
Dec 13 01:11:29.909251 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:11:29.909257 kernel: kvm-clock: using sched offset of 2226725765 cycles
Dec 13 01:11:29.909264 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:11:29.909272 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:11:29.909279 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:11:29.909286 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:11:29.909293 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 01:11:29.909302 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:11:29.909309 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:11:29.909316 kernel: Using GB pages for direct mapping
Dec 13 01:11:29.909322 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:11:29.909329 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 01:11:29.909336 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:11:29.909343 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:11:29.909350 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:11:29.909359 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 01:11:29.909366 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:11:29.909373 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:11:29.909379 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:11:29.909386 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:11:29.909393 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 01:11:29.909400 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 01:11:29.909410 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 01:11:29.909419 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 01:11:29.909426 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 01:11:29.909433 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 01:11:29.909440 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 01:11:29.909447 kernel: No NUMA configuration found
Dec 13 01:11:29.909454 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 01:11:29.909461 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 01:11:29.909471 kernel: Zone ranges:
Dec 13 01:11:29.909621 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:11:29.909628 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 01:11:29.909635 kernel: Normal empty
Dec 13 01:11:29.909642 kernel: Movable zone start for each node
Dec 13 01:11:29.909649 kernel: Early memory node ranges
Dec 13 01:11:29.909656 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:11:29.909663 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 01:11:29.909670 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 01:11:29.909681 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:11:29.909689 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:11:29.909696 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:11:29.909703 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:11:29.909710 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:11:29.909717 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:11:29.909724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:11:29.909731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:11:29.909738 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:11:29.909747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:11:29.909754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:11:29.909761 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:11:29.909768 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:11:29.909775 kernel: TSC deadline timer available
Dec 13 01:11:29.909782 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:11:29.909789 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:11:29.909797 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:11:29.909804 kernel: kvm-guest: setup PV sched yield
Dec 13 01:11:29.909811 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:11:29.909820 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:11:29.909829 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:11:29.909838 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:11:29.909848 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:11:29.909857 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:11:29.909865 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:11:29.909874 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:11:29.909884 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:11:29.909895 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:11:29.909908 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:11:29.909918 kernel: random: crng init done
Dec 13 01:11:29.909928 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:11:29.909937 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:11:29.909947 kernel: Fallback order for Node 0: 0
Dec 13 01:11:29.909956 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 01:11:29.909963 kernel: Policy zone: DMA32
Dec 13 01:11:29.909970 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:11:29.909980 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Dec 13 01:11:29.909988 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:11:29.909995 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:11:29.910002 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:11:29.910009 kernel: Dynamic Preempt: voluntary
Dec 13 01:11:29.910016 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:11:29.910024 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:11:29.910031 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:11:29.910039 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:11:29.910048 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:11:29.910057 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:11:29.910066 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:11:29.910076 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:11:29.910085 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:11:29.910095 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:11:29.910104 kernel: Console: colour VGA+ 80x25
Dec 13 01:11:29.910111 kernel: printk: console [ttyS0] enabled
Dec 13 01:11:29.910118 kernel: ACPI: Core revision 20230628
Dec 13 01:11:29.910128 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:11:29.910135 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:11:29.910142 kernel: x2apic enabled
Dec 13 01:11:29.910149 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:11:29.910156 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:11:29.910164 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:11:29.910171 kernel: kvm-guest: setup PV IPIs
Dec 13 01:11:29.910191 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:11:29.910201 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:11:29.910212 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:11:29.910222 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:11:29.910232 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:11:29.910243 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:11:29.910251 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:11:29.910259 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:11:29.910267 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:11:29.910277 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:11:29.910284 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:11:29.910294 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:11:29.910304 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:11:29.910315 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:11:29.910325 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:11:29.910336 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:11:29.910346 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:11:29.910356 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:11:29.910369 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:11:29.910379 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:11:29.910390 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:11:29.910400 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:11:29.910410 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:11:29.910420 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:11:29.910430 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:11:29.910440 kernel: landlock: Up and running.
Dec 13 01:11:29.910450 kernel: SELinux: Initializing.
Dec 13 01:11:29.910463 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:11:29.910486 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:11:29.910504 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:11:29.910514 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:11:29.910524 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:11:29.910535 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:11:29.910545 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:11:29.910555 kernel: ... version: 0
Dec 13 01:11:29.910569 kernel: ... bit width: 48
Dec 13 01:11:29.910580 kernel: ... generic registers: 6
Dec 13 01:11:29.910590 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:11:29.910600 kernel: ... max period: 00007fffffffffff
Dec 13 01:11:29.910610 kernel: ... fixed-purpose events: 0
Dec 13 01:11:29.910620 kernel: ... event mask: 000000000000003f
Dec 13 01:11:29.910631 kernel: signal: max sigframe size: 1776
Dec 13 01:11:29.910641 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:11:29.910651 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:11:29.910661 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:11:29.910674 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:11:29.910684 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:11:29.910694 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:11:29.910704 kernel: smpboot: Max logical packages: 1
Dec 13 01:11:29.910714 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:11:29.910724 kernel: devtmpfs: initialized
Dec 13 01:11:29.910734 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:11:29.910744 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:11:29.910755 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:11:29.910768 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:11:29.910778 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:11:29.910788 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:11:29.910799 kernel: audit: type=2000 audit(1734052289.711:1): state=initialized audit_enabled=0 res=1
Dec 13 01:11:29.910809 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:11:29.910819 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:11:29.910829 kernel: cpuidle: using governor menu
Dec 13 01:11:29.910839 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:11:29.910849 kernel: dca service started, version 1.12.1
Dec 13 01:11:29.910862 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:11:29.910873 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:11:29.910883 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:11:29.910893 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:11:29.910903 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:11:29.910913 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:11:29.910923 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:11:29.910933 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:11:29.910943 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:11:29.910956 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:11:29.910966 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:11:29.910976 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:11:29.910986 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:11:29.910996 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:11:29.911006 kernel: ACPI: Interpreter enabled
Dec 13 01:11:29.911016 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:11:29.911026 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:11:29.911036 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:11:29.911049 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:11:29.911059 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:11:29.911069 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:11:29.911276 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:11:29.911429 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:11:29.911606 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:11:29.911622 kernel: PCI host bridge to bus 0000:00
Dec 13 01:11:29.911780 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:11:29.911916 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:11:29.912050 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:11:29.912181 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:11:29.912312 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:11:29.912445 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:11:29.912616 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:11:29.912783 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:11:29.912914 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:11:29.913036 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 01:11:29.913156 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 01:11:29.913276 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 01:11:29.913393 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:11:29.913557 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:11:29.913704 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 01:11:29.913854 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 01:11:29.913987 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 01:11:29.914125 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:11:29.914245 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 01:11:29.914364 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 01:11:29.914514 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 01:11:29.914652 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:11:29.914774 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 01:11:29.914900 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 01:11:29.915035 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 01:11:29.915169 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 01:11:29.915313 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:11:29.915441 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:11:29.915604 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:11:29.915728 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 01:11:29.915847 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 01:11:29.915976 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:11:29.916110 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:11:29.916126 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:11:29.916134 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:11:29.916142 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:11:29.916149 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:11:29.916157 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:11:29.916165 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:11:29.916173 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:11:29.916180 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:11:29.916188 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:11:29.916198 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:11:29.916206 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:11:29.916214 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:11:29.916221 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:11:29.916229 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:11:29.916237 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:11:29.916244 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:11:29.916252 kernel: iommu: Default domain type: Translated
Dec 13 01:11:29.916260 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:11:29.916271 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:11:29.916278 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:11:29.916286 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:11:29.916293 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 01:11:29.916432 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:11:29.916597 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:11:29.916722 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:11:29.916733 kernel: vgaarb: loaded
Dec 13 01:11:29.916741 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:11:29.916753 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:11:29.916762 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:11:29.916769 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:11:29.916777 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:11:29.916785 kernel: pnp: PnP ACPI init
Dec 13 01:11:29.916913 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:11:29.916925 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:11:29.916933 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:11:29.916945 kernel: NET: Registered PF_INET protocol family
Dec 13 01:11:29.916953 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:11:29.916961 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:11:29.916969 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:11:29.916978 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:11:29.916986 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:11:29.916994 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:11:29.917001 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:11:29.917010 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:11:29.917020 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:11:29.917028 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:11:29.917152 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:11:29.917271 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:11:29.917392 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:11:29.917529 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:11:29.917643 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:11:29.917753 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:11:29.917768 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:11:29.917776 kernel: Initialise system trusted keyrings
Dec 13 01:11:29.917783 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:11:29.917791 kernel: Key type asymmetric registered
Dec 13 01:11:29.917798 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:11:29.917806 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:11:29.917814 kernel: io scheduler mq-deadline registered
Dec 13 01:11:29.917821 kernel: io scheduler kyber registered
Dec 13 01:11:29.917829 kernel: io scheduler bfq registered
Dec 13 01:11:29.917839 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:11:29.917847 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:11:29.917855 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:11:29.917862 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:11:29.917870 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:11:29.917878 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:11:29.917885 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:11:29.917893 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:11:29.917900 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:11:29.917911 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:11:29.918035 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:11:29.918156 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:11:29.918281 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:11:29 UTC (1734052289)
Dec 13 01:11:29.918395 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:11:29.918405 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:11:29.918413 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:11:29.918421 kernel: Segment Routing with IPv6
Dec 13 01:11:29.918432 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:11:29.918440 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:11:29.918448 kernel: Key type dns_resolver registered
Dec 13 01:11:29.918455 kernel: IPI shorthand broadcast: enabled
Dec 13 01:11:29.918463 kernel: sched_clock: Marking stable (602002011, 106735555)->(762171714, -53434148)
Dec 13 01:11:29.918508 kernel: registered taskstats version 1
Dec 13 01:11:29.918516 kernel: Loading compiled-in X.509 certificates
Dec 13 01:11:29.918524 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:11:29.918532 kernel: Key type .fscrypt registered
Dec 13 01:11:29.918543 kernel: Key type fscrypt-provisioning registered
Dec 13 01:11:29.918550 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:11:29.918558 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:11:29.918566 kernel: ima: No architecture policies found
Dec 13 01:11:29.918573 kernel: clk: Disabling unused clocks
Dec 13 01:11:29.918581 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:11:29.918589 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:11:29.918597 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:11:29.918605 kernel: Run /init as init process
Dec 13 01:11:29.918615 kernel: with arguments:
Dec 13 01:11:29.918623 kernel: /init
Dec 13 01:11:29.918630 kernel: with environment:
Dec 13 01:11:29.918638 kernel: HOME=/
Dec 13 01:11:29.918646 kernel: TERM=linux
Dec 13 01:11:29.918653 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:11:29.918664 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:11:29.918674 systemd[1]: Detected virtualization kvm.
Dec 13 01:11:29.918684 systemd[1]: Detected architecture x86-64.
Dec 13 01:11:29.918692 systemd[1]: Running in initrd.
Dec 13 01:11:29.918816 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:11:29.918824 systemd[1]: Hostname set to <localhost>.
Dec 13 01:11:29.918832 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:11:29.918841 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:11:29.918849 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:11:29.918857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:11:29.918869 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:11:29.918889 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:11:29.918900 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:11:29.918909 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:11:29.918919 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:11:29.918930 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:11:29.918938 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:11:29.918947 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:11:29.918958 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:11:29.918966 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:11:29.918975 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:11:29.918983 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:11:29.918991 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:11:29.919002 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:11:29.919011 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:11:29.919019 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:11:29.919028 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:11:29.919036 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:11:29.919045 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:11:29.919053 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:11:29.919062 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:11:29.919072 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:11:29.919081 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:11:29.919090 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:11:29.919098 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:11:29.919107 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:11:29.919115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:11:29.919124 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:11:29.919133 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:11:29.919141 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:11:29.919153 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:11:29.919182 systemd-journald[191]: Collecting audit messages is disabled.
Dec 13 01:11:29.919205 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:11:29.919215 systemd-journald[191]: Journal started
Dec 13 01:11:29.919242 systemd-journald[191]: Runtime Journal (/run/log/journal/929f07a442f54c6fbd6418ced67a4c0d) is 6.0M, max 48.4M, 42.3M free.
Dec 13 01:11:29.923030 systemd-modules-load[194]: Inserted module 'overlay'
Dec 13 01:11:29.955842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:11:29.955863 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:11:29.955875 kernel: Bridge firewalling registered
Dec 13 01:11:29.955557 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 13 01:11:29.966505 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:11:29.967203 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:11:29.968664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:11:29.973211 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:11:29.981704 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:11:29.983760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:11:29.985392 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:11:29.997222 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:11:29.999546 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:11:30.001622 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:11:30.016695 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:11:30.019993 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:11:30.027843 dracut-cmdline[228]: dracut-dracut-053
Dec 13 01:11:30.034384 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:11:30.065696 systemd-resolved[232]: Positive Trust Anchors:
Dec 13 01:11:30.065712 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:11:30.065742 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:11:30.068193 systemd-resolved[232]: Defaulting to hostname 'linux'.
Dec 13 01:11:30.069262 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:11:30.074953 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:11:30.129525 kernel: SCSI subsystem initialized
Dec 13 01:11:30.140524 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:11:30.151517 kernel: iscsi: registered transport (tcp)
Dec 13 01:11:30.173683 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:11:30.173758 kernel: QLogic iSCSI HBA Driver
Dec 13 01:11:30.224043 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:11:30.326599 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:11:30.352295 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:11:30.352344 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:11:30.352361 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:11:30.392510 kernel: raid6: avx2x4 gen() 26758 MB/s
Dec 13 01:11:30.409514 kernel: raid6: avx2x2 gen() 27027 MB/s
Dec 13 01:11:30.426600 kernel: raid6: avx2x1 gen() 23182 MB/s
Dec 13 01:11:30.426622 kernel: raid6: using algorithm avx2x2 gen() 27027 MB/s
Dec 13 01:11:30.444586 kernel: raid6: .... xor() 19946 MB/s, rmw enabled
Dec 13 01:11:30.444617 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:11:30.465507 kernel: xor: automatically using best checksumming function avx
Dec 13 01:11:30.616509 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:11:30.630103 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:11:30.640740 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:11:30.655294 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Dec 13 01:11:30.675197 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:11:30.684677 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:11:30.697434 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Dec 13 01:11:30.730175 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:11:30.742726 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:11:30.804950 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:11:30.814951 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:11:30.827643 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:11:30.830760 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:11:30.833238 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:11:30.835705 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:11:30.848723 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:11:30.873882 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 01:11:30.875922 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:11:30.876068 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:11:30.876081 kernel: libata version 3.00 loaded.
Dec 13 01:11:30.876092 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:11:30.876106 kernel: GPT:9289727 != 19775487
Dec 13 01:11:30.876116 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:11:30.876126 kernel: GPT:9289727 != 19775487
Dec 13 01:11:30.876136 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:11:30.876145 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:11:30.871879 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:11:30.885744 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:11:30.928504 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:11:30.928530 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:11:30.928695 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:11:30.928833 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:11:30.928844 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:11:30.928855 kernel: scsi host0: ahci
Dec 13 01:11:30.929016 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (477)
Dec 13 01:11:30.929029 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (471)
Dec 13 01:11:30.929043 kernel: scsi host1: ahci
Dec 13 01:11:30.929230 kernel: scsi host2: ahci
Dec 13 01:11:30.929421 kernel: scsi host3: ahci
Dec 13 01:11:30.929670 kernel: scsi host4: ahci
Dec 13 01:11:30.929847 kernel: scsi host5: ahci
Dec 13 01:11:30.930204 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Dec 13 01:11:30.930221 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Dec 13 01:11:30.930240 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Dec 13 01:11:30.930255 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Dec 13 01:11:30.930268 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Dec 13 01:11:30.930282 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Dec 13 01:11:30.886834 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:11:30.886977 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:11:30.890289 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:11:30.891688 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:11:30.891843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:11:30.893052 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:11:30.901782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:11:30.937620 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:11:30.970974 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:11:30.972312 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:11:30.979230 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:11:30.979763 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:11:30.987153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:11:31.000658 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:11:31.003166 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:11:31.010853 disk-uuid[558]: Primary Header is updated.
Dec 13 01:11:31.010853 disk-uuid[558]: Secondary Entries is updated.
Dec 13 01:11:31.010853 disk-uuid[558]: Secondary Header is updated.
Dec 13 01:11:31.014567 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:11:31.018507 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:11:31.027136 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:11:31.240856 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:11:31.240932 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:11:31.242404 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:11:31.242434 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:11:31.242514 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:11:31.243512 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:11:31.244503 kernel: ata3.00: applying bridge limits
Dec 13 01:11:31.244520 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:11:31.245510 kernel: ata3.00: configured for UDMA/100
Dec 13 01:11:31.246506 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:11:31.299519 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:11:31.317281 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:11:31.317297 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:11:32.021510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:11:32.021809 disk-uuid[559]: The operation has completed successfully.
Dec 13 01:11:32.065929 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:11:32.066170 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:11:32.101749 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:11:32.104763 sh[594]: Success
Dec 13 01:11:32.117511 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:11:32.151672 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:11:32.166103 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:11:32.169624 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:11:32.180212 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:11:32.180243 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:11:32.180254 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:11:32.181264 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:11:32.182859 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:11:32.187564 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:11:32.190124 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:11:32.205758 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:11:32.207881 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:11:32.217824 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:11:32.217867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:11:32.217880 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:11:32.221510 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:11:32.232159 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:11:32.234571 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:11:32.264687 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:11:32.272661 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:11:32.329492 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:11:32.345269 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:11:32.346612 ignition[716]: Ignition 2.19.0
Dec 13 01:11:32.346620 ignition[716]: Stage: fetch-offline
Dec 13 01:11:32.346659 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:11:32.346668 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:11:32.346753 ignition[716]: parsed url from cmdline: ""
Dec 13 01:11:32.346757 ignition[716]: no config URL provided
Dec 13 01:11:32.346762 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:11:32.346770 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:11:32.346796 ignition[716]: op(1): [started] loading QEMU firmware config module
Dec 13 01:11:32.346801 ignition[716]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:11:32.355247 ignition[716]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:11:32.375750 systemd-networkd[779]: lo: Link UP
Dec 13 01:11:32.375759 systemd-networkd[779]: lo: Gained carrier
Dec 13 01:11:32.378918 systemd-networkd[779]: Enumeration completed
Dec 13 01:11:32.378996 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:11:32.380114 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:11:32.380120 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:11:32.381086 systemd-networkd[779]: eth0: Link UP
Dec 13 01:11:32.381091 systemd-networkd[779]: eth0: Gained carrier
Dec 13 01:11:32.381099 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:11:32.382339 systemd[1]: Reached target network.target - Network.
Dec 13 01:11:32.396545 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:11:32.406872 ignition[716]: parsing config with SHA512: 6d6983ffd04f71cece026bc8ae2e4e11ba30809dd00004ee47b19580a9bb68ecb6703de4bcad720832a902bcd5291341fc5bc6ff08e37c24de6bfabaf183be94
Dec 13 01:11:32.413426 unknown[716]: fetched base config from "system"
Dec 13 01:11:32.414532 unknown[716]: fetched user config from "qemu"
Dec 13 01:11:32.414960 ignition[716]: fetch-offline: fetch-offline passed
Dec 13 01:11:32.415033 ignition[716]: Ignition finished successfully
Dec 13 01:11:32.417703 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:11:32.420123 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:11:32.429615 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:11:32.442337 ignition[786]: Ignition 2.19.0
Dec 13 01:11:32.442347 ignition[786]: Stage: kargs
Dec 13 01:11:32.442534 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:11:32.442545 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:11:32.443284 ignition[786]: kargs: kargs passed
Dec 13 01:11:32.443331 ignition[786]: Ignition finished successfully
Dec 13 01:11:32.446514 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:11:32.457645 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:11:32.468904 ignition[793]: Ignition 2.19.0
Dec 13 01:11:32.468914 ignition[793]: Stage: disks
Dec 13 01:11:32.469088 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:11:32.469099 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:11:32.469964 ignition[793]: disks: disks passed
Dec 13 01:11:32.472355 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:11:32.470008 ignition[793]: Ignition finished successfully
Dec 13 01:11:32.473636 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:11:32.475163 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:11:32.475767 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:11:32.476101 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:11:32.476433 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:11:32.489617 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:11:32.505345 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:11:32.715537 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:11:32.726670 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:11:32.823500 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:11:32.823667 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:11:32.825996 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:11:32.841641 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:11:32.843694 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:11:32.846282 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:11:32.846353 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:11:32.853795 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811)
Dec 13 01:11:32.853828 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:11:32.846388 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:11:32.858096 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:11:32.858114 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:11:32.859501 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:11:32.880943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:11:32.886559 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:11:32.888021 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:11:32.928577 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:11:32.933994 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:11:32.939022 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:11:32.943817 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:11:33.032419 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:11:33.050786 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:11:33.054656 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:11:33.058498 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:11:33.077938 ignition[924]: INFO : Ignition 2.19.0
Dec 13 01:11:33.077938 ignition[924]: INFO : Stage: mount
Dec 13 01:11:33.080904 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:11:33.080904 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:11:33.080904 ignition[924]: INFO : mount: mount passed
Dec 13 01:11:33.080904 ignition[924]: INFO : Ignition finished successfully
Dec 13 01:11:33.079877 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:11:33.081127 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:11:33.087562 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:11:33.179627 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:11:33.195671 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:11:33.202939 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937)
Dec 13 01:11:33.202972 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:11:33.202987 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:11:33.204501 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:11:33.207503 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:11:33.208322 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:11:33.228337 ignition[954]: INFO : Ignition 2.19.0 Dec 13 01:11:33.228337 ignition[954]: INFO : Stage: files Dec 13 01:11:33.230124 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:11:33.230124 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:11:33.232775 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:11:33.234582 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:11:33.234582 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:11:33.238400 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:11:33.239823 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:11:33.239823 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:11:33.239068 unknown[954]: wrote ssh authorized keys file for user: core Dec 13 01:11:33.243699 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:11:33.243699 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:11:33.279609 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:11:33.368569 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:11:33.368569 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:11:33.372616 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 01:11:33.908053 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:11:33.979399 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:11:33.981336 ignition[954]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:11:33.981336 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:11:34.353643 systemd-networkd[779]: eth0: Gained IPv6LL Dec 13 01:11:34.422032 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:11:34.771142 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:11:34.771142 ignition[954]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 01:11:34.775560 ignition[954]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:11:34.775560 ignition[954]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:11:34.775560 ignition[954]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 01:11:34.775560 ignition[954]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 01:11:34.775560 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:11:34.775560 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:11:34.775560 ignition[954]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 01:11:34.775560 ignition[954]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:11:34.797786 ignition[954]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:11:34.803390 ignition[954]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:11:34.805252 ignition[954]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:11:34.805252 ignition[954]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:11:34.805252 ignition[954]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:11:34.805252 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:11:34.805252 
ignition[954]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:11:34.805252 ignition[954]: INFO : files: files passed Dec 13 01:11:34.805252 ignition[954]: INFO : Ignition finished successfully Dec 13 01:11:34.806409 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:11:34.814681 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:11:34.816393 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:11:34.818130 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:11:34.818252 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:11:34.826196 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:11:34.828689 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:11:34.830308 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:11:34.833031 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:11:34.831445 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:11:34.833206 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:11:34.843631 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:11:34.866047 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:11:34.866187 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:11:34.868874 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:11:34.870545 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:11:34.872601 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:11:34.873337 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:11:34.892122 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:11:34.894834 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:11:34.910332 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:11:34.911665 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:11:34.913906 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:11:34.915905 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:11:34.916018 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:11:34.918127 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:11:34.919894 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:11:34.921963 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:11:34.923968 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:11:34.925998 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:11:34.928181 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Dec 13 01:11:34.930324 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:11:34.932577 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:11:34.934565 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:11:34.936779 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:11:34.938576 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:11:34.938725 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:11:34.940796 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:11:34.942492 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:11:34.944584 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:11:34.944707 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:11:34.946772 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:11:34.946887 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:11:34.949065 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:11:34.949203 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:11:34.951254 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:11:34.952969 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:11:34.956539 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:11:34.958520 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:11:34.960505 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:11:34.962265 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:11:34.962361 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:11:34.964299 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:11:34.964387 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:11:34.966733 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:11:34.966844 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:11:34.968804 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:11:34.968908 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:11:34.986640 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:11:34.988560 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:11:34.988684 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:11:34.992077 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:11:34.993886 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:11:34.994163 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:11:34.997177 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:11:34.997395 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 13 01:11:35.000351 ignition[1008]: INFO : Ignition 2.19.0 Dec 13 01:11:35.000351 ignition[1008]: INFO : Stage: umount Dec 13 01:11:35.000351 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:11:35.000351 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:11:35.000351 ignition[1008]: INFO : umount: umount passed Dec 13 01:11:35.000351 ignition[1008]: INFO : Ignition finished successfully Dec 13 01:11:35.002580 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:11:35.002687 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:11:35.004999 systemd[1]: Stopped target network.target - Network. Dec 13 01:11:35.006592 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:11:35.006653 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:11:35.008536 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:11:35.009410 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:11:35.011378 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:11:35.011434 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:11:35.013389 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:11:35.014274 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:11:35.018279 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:11:35.021973 systemd-networkd[779]: eth0: DHCPv6 lease lost Dec 13 01:11:35.022380 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:11:35.029357 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:11:35.030908 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:11:35.031919 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:11:35.035096 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:11:35.036146 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:11:35.038611 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:11:35.039644 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:11:35.044558 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:11:35.045497 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:11:35.060617 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:11:35.062466 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:11:35.062546 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:11:35.066040 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:11:35.066091 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:11:35.069052 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:11:35.069119 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:11:35.072578 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:11:35.073582 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:11:35.076299 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 01:11:35.088313 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:11:35.089407 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:11:35.098121 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:11:35.099238 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:11:35.102016 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:11:35.103012 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:11:35.105087 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:11:35.106044 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:11:35.108126 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:11:35.109036 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:11:35.111185 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:11:35.112115 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:11:35.114139 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:11:35.115114 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:11:35.124642 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:11:35.125078 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:11:35.125139 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:11:35.125456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:11:35.125520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:11:35.132549 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:11:35.132689 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:11:35.188050 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:11:35.188202 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:11:35.190206 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:11:35.190771 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:11:35.190828 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:11:35.201680 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:11:35.209278 systemd[1]: Switching root. Dec 13 01:11:35.241961 systemd-journald[191]: Journal stopped Dec 13 01:11:36.970865 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). 
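The pivot out of the initrd is quick here: "Switching root." at 01:11:35.209278 and "Journal stopped" at 01:11:35.241961. A small sketch, assuming the timestamp format used throughout this log (month, day, time with microseconds, no year), for measuring the gap between two such entries:

    from datetime import datetime

    # Sketch assuming the "Dec 13 01:11:35.209278" timestamp format; strptime
    # defaults the year to 1900, which is fine for a difference within one boot.
    FMT = "%b %d %H:%M:%S.%f"

    def ts(line):
        month, day, clock = line.split()[:3]
        return datetime.strptime(f"{month} {day} {clock}", FMT)

    pivot = ts("Dec 13 01:11:35.209278 systemd[1]: Switching root.")
    stop  = ts("Dec 13 01:11:35.241961 systemd-journald[191]: Journal stopped")
    print(f"{(stop - pivot).total_seconds() * 1000:.1f} ms")   # ~32.7 ms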
Dec 13 01:11:36.970946 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:11:36.970969 kernel: SELinux: policy capability open_perms=1 Dec 13 01:11:36.970985 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:11:36.971001 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:11:36.971022 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:11:36.971042 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:11:36.971057 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:11:36.971075 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:11:36.971091 kernel: audit: type=1403 audit(1734052296.181:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:11:36.971107 systemd[1]: Successfully loaded SELinux policy in 38.591ms. Dec 13 01:11:36.971136 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.381ms. Dec 13 01:11:36.971153 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:11:36.971170 systemd[1]: Detected virtualization kvm. Dec 13 01:11:36.971186 systemd[1]: Detected architecture x86-64. Dec 13 01:11:36.971205 systemd[1]: Detected first boot. Dec 13 01:11:36.971220 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:11:36.971236 zram_generator::config[1052]: No configuration found. Dec 13 01:11:36.971253 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:11:36.971268 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:11:36.971284 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:11:36.971301 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:11:36.971317 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:11:36.971336 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:11:36.971352 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:11:36.971368 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:11:36.971385 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:11:36.971411 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:11:36.971428 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:11:36.971446 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:11:36.971462 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:11:36.971496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:11:36.971514 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:11:36.971529 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:11:36.971545 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Dec 13 01:11:36.971562 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:11:36.971577 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:11:36.971593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:11:36.971609 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:11:36.971625 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:11:36.971644 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:11:36.971659 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:11:36.971675 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:11:36.971691 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:11:36.971706 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:11:36.971721 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:11:36.971737 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:11:36.971753 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:11:36.971777 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:11:36.971793 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:11:36.971811 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:11:36.971826 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:11:36.971842 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:11:36.971857 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:11:36.971873 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:11:36.971889 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:11:36.971904 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:11:36.971923 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:11:36.971940 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:11:36.971956 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:11:36.971972 systemd[1]: Reached target machines.target - Containers. Dec 13 01:11:36.971987 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:11:36.972003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:11:36.972019 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:11:36.972034 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:11:36.972053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:11:36.972069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:11:36.972085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:11:36.972101 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Dec 13 01:11:36.972117 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:11:36.972133 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:11:36.972152 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:11:36.972169 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:11:36.973384 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:11:36.973416 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:11:36.973433 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:11:36.973449 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:11:36.973465 kernel: loop: module loaded Dec 13 01:11:36.973492 kernel: fuse: init (API version 7.39) Dec 13 01:11:36.973509 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:11:36.973525 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:11:36.973541 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:11:36.973558 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:11:36.973578 systemd[1]: Stopped verity-setup.service. Dec 13 01:11:36.973595 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:11:36.973611 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:11:36.973626 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:11:36.973642 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:11:36.973690 systemd-journald[1133]: Collecting audit messages is disabled. Dec 13 01:11:36.973724 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:11:36.973740 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:11:36.973756 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:11:36.973773 systemd-journald[1133]: Journal started Dec 13 01:11:36.973804 systemd-journald[1133]: Runtime Journal (/run/log/journal/929f07a442f54c6fbd6418ced67a4c0d) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:11:36.676904 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:11:36.704154 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:11:36.704626 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:11:36.977814 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:11:36.978841 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:11:36.980383 kernel: ACPI: bus type drm_connector registered Dec 13 01:11:36.980725 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:11:36.980931 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:11:36.982662 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:11:36.982856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:11:36.984532 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:11:36.984732 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Dec 13 01:11:36.986277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:11:36.986506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:11:36.988287 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:11:36.989890 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:11:36.990073 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:11:36.991614 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:11:36.991809 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:11:36.993346 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:11:36.994887 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:11:36.996545 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:11:37.008618 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:11:37.014545 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:11:37.016874 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:11:37.018161 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:11:37.018188 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:11:37.020270 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:11:37.022568 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:11:37.027833 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:11:37.029655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:11:37.031863 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:11:37.036921 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:11:37.038692 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:11:37.044219 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:11:37.046322 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:11:37.049568 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:11:37.057411 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:11:37.065620 systemd-journald[1133]: Time spent on flushing to /var/log/journal/929f07a442f54c6fbd6418ced67a4c0d is 17.316ms for 953 entries. Dec 13 01:11:37.065620 systemd-journald[1133]: System Journal (/var/log/journal/929f07a442f54c6fbd6418ced67a4c0d) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:11:37.165771 systemd-journald[1133]: Received client request to flush runtime journal. Dec 13 01:11:37.165890 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:11:37.165977 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:11:37.065823 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Dec 13 01:11:37.070164 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:11:37.072008 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:11:37.072563 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:11:37.074788 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:11:37.086658 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:11:37.115243 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:11:37.125596 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:11:37.127625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:11:37.134691 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:11:37.137181 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:11:37.140082 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:11:37.145795 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:11:37.168246 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:11:37.172039 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Dec 13 01:11:37.173586 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 01:11:37.172064 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Dec 13 01:11:37.178332 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:11:37.178969 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:11:37.181688 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:11:37.211512 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 01:11:37.250499 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:11:37.263498 kernel: loop4: detected capacity change from 0 to 210664 Dec 13 01:11:37.270495 kernel: loop5: detected capacity change from 0 to 140768 Dec 13 01:11:37.277793 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:11:37.278511 (sd-merge)[1191]: Merged extensions into '/usr'. Dec 13 01:11:37.283153 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:11:37.283172 systemd[1]: Reloading... Dec 13 01:11:37.344777 zram_generator::config[1220]: No configuration found. Dec 13 01:11:37.442953 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:11:37.486094 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:11:37.548871 systemd[1]: Reloading finished in 265 ms. Dec 13 01:11:37.580876 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:11:37.582450 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:11:37.603733 systemd[1]: Starting ensure-sysext.service... 
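Several loop devices come up just before sd-merge reports merging the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extensions into /usr; the capacity-change lines most likely correspond to those sysext images. A sketch that converts the logged capacities to sizes, assuming the kernel's figures are 512-byte sectors:

    import re

    # Sketch: convert "detected capacity change" figures to MiB. Treating the
    # numbers as 512-byte sectors is our assumption here.
    pat = re.compile(r"(loop\d+): detected capacity change from 0 to (\d+)")
    lines = [
        "kernel: loop0: detected capacity change from 0 to 142488",
        "kernel: loop1: detected capacity change from 0 to 210664",
        "kernel: loop2: detected capacity change from 0 to 140768",
    ]
    for line in lines:
        dev, sectors = pat.search(line).groups()
        print(dev, f"{int(sectors) * 512 / 2**20:.1f} MiB")
    # loop0 69.6 MiB, loop1 102.9 MiB, loop2 68.7 MiB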
Dec 13 01:11:37.607648 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:11:37.612549 systemd[1]: Reloading requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:11:37.612567 systemd[1]: Reloading... Dec 13 01:11:37.628575 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:11:37.628941 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:11:37.630011 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:11:37.630310 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Dec 13 01:11:37.630421 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Dec 13 01:11:37.634242 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:11:37.634341 systemd-tmpfiles[1255]: Skipping /boot Dec 13 01:11:37.649628 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:11:37.649764 systemd-tmpfiles[1255]: Skipping /boot Dec 13 01:11:37.681530 zram_generator::config[1282]: No configuration found. Dec 13 01:11:37.790983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:11:37.840411 systemd[1]: Reloading finished in 227 ms. Dec 13 01:11:37.861087 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:11:37.873921 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:11:37.881059 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:11:37.883431 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:11:37.886642 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:11:37.890871 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:11:37.894164 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:11:37.897307 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:11:37.902061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:11:37.902232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:11:37.904518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:11:37.909311 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:11:37.913032 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:11:37.914268 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:11:37.918057 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:11:37.918798 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 01:11:37.922422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:11:37.922635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:11:37.924441 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:11:37.924681 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:11:37.928994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:11:37.929175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:11:37.931769 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:11:37.935825 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Dec 13 01:11:37.940129 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:11:37.940339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:11:37.946774 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:11:37.949861 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:11:37.952333 augenrules[1351]: No rules Dec 13 01:11:37.954279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:11:37.955457 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:11:37.957618 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:11:37.958784 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:11:37.960149 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:11:37.962589 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:11:37.964356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:11:37.964550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:11:37.966520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:11:37.966698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:11:37.968806 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:11:37.970967 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:11:37.971157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:11:37.973314 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:11:37.982598 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:11:37.989626 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:11:38.004256 systemd[1]: Finished ensure-sysext.service. Dec 13 01:11:38.008401 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:11:38.010180 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:11:38.014692 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Dec 13 01:11:38.017794 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:11:38.021903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:11:38.028621 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:11:38.029504 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1363) Dec 13 01:11:38.031516 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:11:38.032507 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1363) Dec 13 01:11:38.034253 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:11:38.041898 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:11:38.043168 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:11:38.043203 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:11:38.043987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:11:38.044540 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:11:38.048908 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:11:38.049086 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:11:38.050503 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:11:38.050674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:11:38.052327 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:11:38.052517 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:11:38.058503 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1379) Dec 13 01:11:38.066108 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:11:38.074729 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:11:38.074797 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:11:38.081035 systemd-resolved[1325]: Positive Trust Anchors: Dec 13 01:11:38.081054 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:11:38.081087 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:11:38.087737 systemd-resolved[1325]: Defaulting to hostname 'linux'. 
Dec 13 01:11:38.090898 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:11:38.092201 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:11:38.104494 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:11:38.108598 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:11:38.116273 systemd-networkd[1396]: lo: Link UP Dec 13 01:11:38.116289 systemd-networkd[1396]: lo: Gained carrier Dec 13 01:11:38.117977 systemd-networkd[1396]: Enumeration completed Dec 13 01:11:38.118067 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:11:38.119300 systemd[1]: Reached target network.target - Network. Dec 13 01:11:38.120628 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:11:38.120632 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:11:38.121309 systemd-networkd[1396]: eth0: Link UP Dec 13 01:11:38.121318 systemd-networkd[1396]: eth0: Gained carrier Dec 13 01:11:38.121329 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:11:38.129687 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:11:38.135250 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:11:38.135844 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:11:38.136066 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:11:38.139758 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:11:38.146654 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:11:38.148218 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:11:38.994932 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:11:38.995233 systemd-resolved[1325]: Clock change detected. Flushing caches. Dec 13 01:11:38.995318 systemd-timesyncd[1397]: Initial clock synchronization to Fri 2024-12-13 01:11:38.994842 UTC. Dec 13 01:11:38.996131 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:11:39.004285 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:11:39.020475 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:11:39.022477 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:11:39.031104 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:11:39.040336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:11:39.044066 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
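systemd-networkd reports the DHCPv4 lease in a single line above ("eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1"). A short sketch that extracts the lease and derives the implied network, using only the line format shown; the variable names are ours:

    import ipaddress
    import re

    line = ("systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.86/16, "
            "gateway 10.0.0.1 acquired from 10.0.0.1")
    m = re.search(r"(\S+): DHCPv4 address (\S+), gateway (\S+)", line)
    iface, cidr, gateway = m.groups()
    network = ipaddress.ip_interface(cidr).network
    print(iface, cidr, gateway, network)   # eth0 10.0.0.86/16 10.0.0.1 10.0.0.0/16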
Dec 13 01:11:39.126511 kernel: kvm_amd: TSC scaling supported Dec 13 01:11:39.126604 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:11:39.126618 kernel: kvm_amd: Nested Paging enabled Dec 13 01:11:39.126640 kernel: kvm_amd: LBR virtualization supported Dec 13 01:11:39.126653 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:11:39.126664 kernel: kvm_amd: Virtual GIF supported Dec 13 01:11:39.132411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:11:39.150122 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:11:39.185557 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:11:39.198349 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:11:39.206990 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:11:39.239344 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:11:39.240884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:11:39.242040 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:11:39.243256 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:11:39.244526 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:11:39.245953 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:11:39.248955 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:11:39.250238 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:11:39.251556 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:11:39.251586 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:11:39.252498 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:11:39.254349 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:11:39.256958 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:11:39.270721 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:11:39.273098 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:11:39.274964 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:11:39.276190 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:11:39.277179 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:11:39.278153 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:11:39.278181 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:11:39.279146 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:11:39.281248 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:11:39.285370 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:11:39.289660 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Dec 13 01:11:39.290847 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:11:39.292004 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:11:39.292419 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:11:39.296228 jq[1432]: false Dec 13 01:11:39.297956 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:11:39.310462 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:11:39.312767 extend-filesystems[1433]: Found loop3 Dec 13 01:11:39.313713 extend-filesystems[1433]: Found loop4 Dec 13 01:11:39.313713 extend-filesystems[1433]: Found loop5 Dec 13 01:11:39.313713 extend-filesystems[1433]: Found sr0 Dec 13 01:11:39.313713 extend-filesystems[1433]: Found vda Dec 13 01:11:39.313713 extend-filesystems[1433]: Found vda1 Dec 13 01:11:39.313713 extend-filesystems[1433]: Found vda2 Dec 13 01:11:39.313713 extend-filesystems[1433]: Found vda3 Dec 13 01:11:39.313713 extend-filesystems[1433]: Found usr Dec 13 01:11:39.313713 extend-filesystems[1433]: Found vda4 Dec 13 01:11:39.313713 extend-filesystems[1433]: Found vda6 Dec 13 01:11:39.313713 extend-filesystems[1433]: Found vda7 Dec 13 01:11:39.313713 extend-filesystems[1433]: Found vda9 Dec 13 01:11:39.313713 extend-filesystems[1433]: Checking size of /dev/vda9 Dec 13 01:11:39.316385 dbus-daemon[1431]: [system] SELinux support is enabled Dec 13 01:11:39.316213 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:11:39.329145 extend-filesystems[1433]: Resized partition /dev/vda9 Dec 13 01:11:39.329773 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:11:39.333048 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:11:39.333539 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:11:39.333963 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:11:39.342128 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:11:39.342159 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1384) Dec 13 01:11:39.340984 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:11:39.351071 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:11:39.354821 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:11:39.358467 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:11:39.364499 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:11:39.364712 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:11:39.365069 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:11:39.365291 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:11:39.370709 jq[1454]: true Dec 13 01:11:39.373671 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:11:39.373943 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
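The ext4 root resize above goes from 553472 to 1864699 blocks, at the 4k block size that extend-filesystems confirms just below. A quick arithmetic check of what those block counts mean in bytes, using only the logged numbers:

    # Arithmetic check on the resize2fs figures: 4 KiB blocks,
    # 553472 blocks before the resize and 1864699 after.
    BLOCK = 4096
    for label, blocks in (("before", 553472), ("after", 1864699)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after: 7.11 GiB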
Dec 13 01:11:39.375610 update_engine[1453]: I20241213 01:11:39.375495 1453 main.cc:92] Flatcar Update Engine starting Dec 13 01:11:39.378271 update_engine[1453]: I20241213 01:11:39.377728 1453 update_check_scheduler.cc:74] Next update check in 4m14s Dec 13 01:11:39.385957 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:11:39.385651 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:11:39.407869 jq[1458]: true Dec 13 01:11:39.410293 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:11:39.410293 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:11:39.410293 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:11:39.419122 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Dec 13 01:11:39.412912 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:11:39.412939 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:11:39.415026 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:11:39.415282 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:11:39.416676 systemd-logind[1448]: New seat seat0. Dec 13 01:11:39.423776 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:11:39.433162 tar[1457]: linux-amd64/helm Dec 13 01:11:39.438848 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:11:39.441413 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:11:39.443584 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:11:39.443717 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:11:39.447809 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:11:39.447925 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:11:39.456292 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:11:39.472061 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:11:39.473847 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:11:39.475901 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:11:39.491606 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:11:39.595157 containerd[1459]: time="2024-12-13T01:11:39.595051926Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:11:39.616885 containerd[1459]: time="2024-12-13T01:11:39.616855235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:11:39.618670 containerd[1459]: time="2024-12-13T01:11:39.618644450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:11:39.618670 containerd[1459]: time="2024-12-13T01:11:39.618669547Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:11:39.618723 containerd[1459]: time="2024-12-13T01:11:39.618683584Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:11:39.618895 containerd[1459]: time="2024-12-13T01:11:39.618873149Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:11:39.618895 containerd[1459]: time="2024-12-13T01:11:39.618891784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:11:39.618969 containerd[1459]: time="2024-12-13T01:11:39.618953871Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:11:39.618997 containerd[1459]: time="2024-12-13T01:11:39.618968198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:11:39.619183 containerd[1459]: time="2024-12-13T01:11:39.619165167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:11:39.619183 containerd[1459]: time="2024-12-13T01:11:39.619182069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:11:39.619236 containerd[1459]: time="2024-12-13T01:11:39.619194372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:11:39.619236 containerd[1459]: time="2024-12-13T01:11:39.619203609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:11:39.619306 containerd[1459]: time="2024-12-13T01:11:39.619292746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:11:39.619533 containerd[1459]: time="2024-12-13T01:11:39.619509843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:11:39.619648 containerd[1459]: time="2024-12-13T01:11:39.619625911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:11:39.619648 containerd[1459]: time="2024-12-13T01:11:39.619640308Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:11:39.619743 containerd[1459]: time="2024-12-13T01:11:39.619729696Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 01:11:39.619796 containerd[1459]: time="2024-12-13T01:11:39.619783767Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:11:39.639467 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:11:39.662238 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:11:39.681295 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:11:39.683538 systemd[1]: Started sshd@0-10.0.0.86:22-10.0.0.1:46970.service - OpenSSH per-connection server daemon (10.0.0.1:46970). Dec 13 01:11:39.687927 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:11:39.688150 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:11:39.691617 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:11:39.730396 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:11:39.744357 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:11:39.746837 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:11:39.748162 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:11:39.763946 sshd[1512]: Accepted publickey for core from 10.0.0.1 port 46970 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:39.765984 sshd[1512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:39.773841 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:11:39.781316 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:11:39.784388 systemd-logind[1448]: New session 1 of user core. Dec 13 01:11:39.801070 tar[1457]: linux-amd64/LICENSE Dec 13 01:11:39.801171 tar[1457]: linux-amd64/README.md Dec 13 01:11:39.815465 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:11:39.819209 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:11:39.832336 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:11:39.836058 (systemd)[1526]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:11:39.857102 containerd[1459]: time="2024-12-13T01:11:39.857060115Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:11:39.857157 containerd[1459]: time="2024-12-13T01:11:39.857146477Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:11:39.857180 containerd[1459]: time="2024-12-13T01:11:39.857165052Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:11:39.857199 containerd[1459]: time="2024-12-13T01:11:39.857181222Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:11:39.857220 containerd[1459]: time="2024-12-13T01:11:39.857199777Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:11:39.857417 containerd[1459]: time="2024-12-13T01:11:39.857385786Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:11:39.857638 containerd[1459]: time="2024-12-13T01:11:39.857618322Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 01:11:39.857750 containerd[1459]: time="2024-12-13T01:11:39.857731003Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:11:39.857788 containerd[1459]: time="2024-12-13T01:11:39.857749999Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:11:39.857788 containerd[1459]: time="2024-12-13T01:11:39.857764496Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:11:39.857788 containerd[1459]: time="2024-12-13T01:11:39.857777801Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:11:39.857840 containerd[1459]: time="2024-12-13T01:11:39.857789603Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:11:39.857840 containerd[1459]: time="2024-12-13T01:11:39.857802116Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:11:39.857840 containerd[1459]: time="2024-12-13T01:11:39.857815441Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:11:39.857840 containerd[1459]: time="2024-12-13T01:11:39.857829388Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:11:39.857915 containerd[1459]: time="2024-12-13T01:11:39.857842161Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:11:39.857915 containerd[1459]: time="2024-12-13T01:11:39.857860275Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:11:39.857915 containerd[1459]: time="2024-12-13T01:11:39.857873500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:11:39.857915 containerd[1459]: time="2024-12-13T01:11:39.857892386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.857915 containerd[1459]: time="2024-12-13T01:11:39.857905310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858012 containerd[1459]: time="2024-12-13T01:11:39.857921961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858012 containerd[1459]: time="2024-12-13T01:11:39.857934835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858012 containerd[1459]: time="2024-12-13T01:11:39.857945996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858012 containerd[1459]: time="2024-12-13T01:11:39.857958790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858012 containerd[1459]: time="2024-12-13T01:11:39.857969951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858012 containerd[1459]: time="2024-12-13T01:11:39.857982495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Dec 13 01:11:39.858012 containerd[1459]: time="2024-12-13T01:11:39.858003464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858163 containerd[1459]: time="2024-12-13T01:11:39.858017049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858163 containerd[1459]: time="2024-12-13T01:11:39.858035314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858163 containerd[1459]: time="2024-12-13T01:11:39.858046925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858163 containerd[1459]: time="2024-12-13T01:11:39.858059198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858163 containerd[1459]: time="2024-12-13T01:11:39.858073766Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:11:39.858163 containerd[1459]: time="2024-12-13T01:11:39.858104523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858163 containerd[1459]: time="2024-12-13T01:11:39.858125152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858163 containerd[1459]: time="2024-12-13T01:11:39.858136013Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:11:39.858868 containerd[1459]: time="2024-12-13T01:11:39.858823993Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:11:39.858983 containerd[1459]: time="2024-12-13T01:11:39.858878255Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:11:39.858983 containerd[1459]: time="2024-12-13T01:11:39.858893824Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:11:39.858983 containerd[1459]: time="2024-12-13T01:11:39.858906678Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:11:39.858983 containerd[1459]: time="2024-12-13T01:11:39.858917127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:11:39.858983 containerd[1459]: time="2024-12-13T01:11:39.858931544Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:11:39.858983 containerd[1459]: time="2024-12-13T01:11:39.858950079Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:11:39.858983 containerd[1459]: time="2024-12-13T01:11:39.858960479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:11:39.859361 containerd[1459]: time="2024-12-13T01:11:39.859300156Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:11:39.859482 containerd[1459]: time="2024-12-13T01:11:39.859360138Z" level=info msg="Connect containerd service" Dec 13 01:11:39.859482 containerd[1459]: time="2024-12-13T01:11:39.859414891Z" level=info msg="using legacy CRI server" Dec 13 01:11:39.859482 containerd[1459]: time="2024-12-13T01:11:39.859422395Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:11:39.859597 containerd[1459]: time="2024-12-13T01:11:39.859513356Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:11:39.860145 containerd[1459]: time="2024-12-13T01:11:39.860122267Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:11:39.860326 
containerd[1459]: time="2024-12-13T01:11:39.860282258Z" level=info msg="Start subscribing containerd event" Dec 13 01:11:39.860352 containerd[1459]: time="2024-12-13T01:11:39.860335798Z" level=info msg="Start recovering state" Dec 13 01:11:39.860489 containerd[1459]: time="2024-12-13T01:11:39.860466423Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:11:39.860553 containerd[1459]: time="2024-12-13T01:11:39.860536194Z" level=info msg="Start event monitor" Dec 13 01:11:39.860576 containerd[1459]: time="2024-12-13T01:11:39.860554348Z" level=info msg="Start snapshots syncer" Dec 13 01:11:39.860576 containerd[1459]: time="2024-12-13T01:11:39.860565699Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:11:39.860576 containerd[1459]: time="2024-12-13T01:11:39.860575117Z" level=info msg="Start streaming server" Dec 13 01:11:39.860629 containerd[1459]: time="2024-12-13T01:11:39.860592810Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:11:39.861168 containerd[1459]: time="2024-12-13T01:11:39.860665847Z" level=info msg="containerd successfully booted in 0.267068s" Dec 13 01:11:39.860728 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:11:39.941619 systemd[1526]: Queued start job for default target default.target. Dec 13 01:11:39.952542 systemd[1526]: Created slice app.slice - User Application Slice. Dec 13 01:11:39.952572 systemd[1526]: Reached target paths.target - Paths. Dec 13 01:11:39.952585 systemd[1526]: Reached target timers.target - Timers. Dec 13 01:11:39.954262 systemd[1526]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:11:39.966271 systemd[1526]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:11:39.966447 systemd[1526]: Reached target sockets.target - Sockets. Dec 13 01:11:39.966470 systemd[1526]: Reached target basic.target - Basic System. Dec 13 01:11:39.966525 systemd[1526]: Reached target default.target - Main User Target. Dec 13 01:11:39.966564 systemd[1526]: Startup finished in 124ms. Dec 13 01:11:39.966724 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:11:39.969584 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:11:40.033846 systemd[1]: Started sshd@1-10.0.0.86:22-10.0.0.1:55702.service - OpenSSH per-connection server daemon (10.0.0.1:55702). Dec 13 01:11:40.075793 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 55702 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:40.077427 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:40.081688 systemd-logind[1448]: New session 2 of user core. Dec 13 01:11:40.091254 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:11:40.145920 sshd[1538]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:40.158055 systemd[1]: sshd@1-10.0.0.86:22-10.0.0.1:55702.service: Deactivated successfully. Dec 13 01:11:40.159918 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:11:40.161204 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:11:40.170391 systemd[1]: Started sshd@2-10.0.0.86:22-10.0.0.1:55710.service - OpenSSH per-connection server daemon (10.0.0.1:55710). Dec 13 01:11:40.172750 systemd-logind[1448]: Removed session 2. 
Dec 13 01:11:40.191199 systemd-networkd[1396]: eth0: Gained IPv6LL Dec 13 01:11:40.194431 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:11:40.196447 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:11:40.204902 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 55710 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:40.206384 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:11:40.206526 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:40.209041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:11:40.211292 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:11:40.226424 systemd-logind[1448]: New session 3 of user core. Dec 13 01:11:40.226890 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:11:40.228827 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:11:40.229076 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:11:40.232027 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:11:40.237318 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:11:40.287958 sshd[1545]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:40.292287 systemd[1]: sshd@2-10.0.0.86:22-10.0.0.1:55710.service: Deactivated successfully. Dec 13 01:11:40.294282 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:11:40.294900 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:11:40.295829 systemd-logind[1448]: Removed session 3. Dec 13 01:11:41.408205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:11:41.409799 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:11:41.410947 systemd[1]: Startup finished in 739ms (kernel) + 6.484s (initrd) + 4.421s (userspace) = 11.644s. Dec 13 01:11:41.423111 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:11:42.070827 kubelet[1573]: E1213 01:11:42.070750 1573 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:11:42.075479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:11:42.075691 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:11:42.076055 systemd[1]: kubelet.service: Consumed 1.760s CPU time. Dec 13 01:11:50.303244 systemd[1]: Started sshd@3-10.0.0.86:22-10.0.0.1:48212.service - OpenSSH per-connection server daemon (10.0.0.1:48212). Dec 13 01:11:50.341578 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 48212 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:50.343483 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:50.347821 systemd-logind[1448]: New session 4 of user core. Dec 13 01:11:50.361313 systemd[1]: Started session-4.scope - Session 4 of User core. 
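systemd's "Startup finished" line above breaks total boot time into kernel, initrd, and userspace phases. A trivial check that the reported sum holds:

# Verify the arithmetic in: "Startup finished in 739ms (kernel)
# + 6.484s (initrd) + 4.421s (userspace) = 11.644s".
phases = {"kernel": 0.739, "initrd": 6.484, "userspace": 4.421}
total = sum(phases.values())
assert abs(total - 11.644) < 1e-9
print(f"total boot time: {total:.3f}s")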
Dec 13 01:11:50.416331 sshd[1587]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:50.426315 systemd[1]: sshd@3-10.0.0.86:22-10.0.0.1:48212.service: Deactivated successfully. Dec 13 01:11:50.428260 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:11:50.429861 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:11:50.431320 systemd[1]: Started sshd@4-10.0.0.86:22-10.0.0.1:48216.service - OpenSSH per-connection server daemon (10.0.0.1:48216). Dec 13 01:11:50.432106 systemd-logind[1448]: Removed session 4. Dec 13 01:11:50.473057 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 48216 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:50.475019 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:50.479337 systemd-logind[1448]: New session 5 of user core. Dec 13 01:11:50.489347 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:11:50.540113 sshd[1594]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:50.558457 systemd[1]: sshd@4-10.0.0.86:22-10.0.0.1:48216.service: Deactivated successfully. Dec 13 01:11:50.560676 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:11:50.562568 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:11:50.571516 systemd[1]: Started sshd@5-10.0.0.86:22-10.0.0.1:48224.service - OpenSSH per-connection server daemon (10.0.0.1:48224). Dec 13 01:11:50.572715 systemd-logind[1448]: Removed session 5. Dec 13 01:11:50.605532 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 48224 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:50.607280 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:50.611544 systemd-logind[1448]: New session 6 of user core. Dec 13 01:11:50.621346 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:11:50.675562 sshd[1601]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:50.686644 systemd[1]: sshd@5-10.0.0.86:22-10.0.0.1:48224.service: Deactivated successfully. Dec 13 01:11:50.688994 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:11:50.691134 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:11:50.704512 systemd[1]: Started sshd@6-10.0.0.86:22-10.0.0.1:48230.service - OpenSSH per-connection server daemon (10.0.0.1:48230). Dec 13 01:11:50.705745 systemd-logind[1448]: Removed session 6. Dec 13 01:11:50.739422 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 48230 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:50.741188 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:50.745597 systemd-logind[1448]: New session 7 of user core. Dec 13 01:11:50.755344 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:11:50.821108 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:11:50.821463 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:11:50.843124 sudo[1611]: pam_unix(sudo:session): session closed for user root Dec 13 01:11:50.845380 sshd[1608]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:50.870067 systemd[1]: sshd@6-10.0.0.86:22-10.0.0.1:48230.service: Deactivated successfully. 
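Each "Accepted publickey" line above carries the same SHA256:DNwV47... fingerprint, identifying the key used across these sessions. A sketch of how OpenSSH derives that string: the unpadded base64 of the SHA-256 digest of the raw public key blob. The blob below is a stand-in, not the host's real key; a real check would base64-decode the second field of the relevant .pub file first:

import base64
import hashlib

# Stand-in bytes; substitute the decoded key blob from an actual .pub file.
blob = b"stand-in-for-the-raw-ssh-public-key-blob"
digest = hashlib.sha256(blob).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))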
Dec 13 01:11:50.871816 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:11:50.873476 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:11:50.874815 systemd[1]: Started sshd@7-10.0.0.86:22-10.0.0.1:48238.service - OpenSSH per-connection server daemon (10.0.0.1:48238). Dec 13 01:11:50.875564 systemd-logind[1448]: Removed session 7. Dec 13 01:11:50.916019 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 48238 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:50.917524 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:50.921826 systemd-logind[1448]: New session 8 of user core. Dec 13 01:11:50.936213 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:11:50.991223 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:11:50.991556 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:11:50.995116 sudo[1620]: pam_unix(sudo:session): session closed for user root Dec 13 01:11:51.001235 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:11:51.001567 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:11:51.021307 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:11:51.022867 auditctl[1623]: No rules Dec 13 01:11:51.023402 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:11:51.023691 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:11:51.026837 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:11:51.054436 augenrules[1641]: No rules Dec 13 01:11:51.056315 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:11:51.057678 sudo[1619]: pam_unix(sudo:session): session closed for user root Dec 13 01:11:51.059476 sshd[1616]: pam_unix(sshd:session): session closed for user core Dec 13 01:11:51.066742 systemd[1]: sshd@7-10.0.0.86:22-10.0.0.1:48238.service: Deactivated successfully. Dec 13 01:11:51.068411 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:11:51.069682 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:11:51.082310 systemd[1]: Started sshd@8-10.0.0.86:22-10.0.0.1:48246.service - OpenSSH per-connection server daemon (10.0.0.1:48246). Dec 13 01:11:51.083298 systemd-logind[1448]: Removed session 8. Dec 13 01:11:51.115697 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 48246 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:11:51.117456 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:11:51.121352 systemd-logind[1448]: New session 9 of user core. Dec 13 01:11:51.131190 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:11:51.185654 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:11:51.186066 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:11:51.655305 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 13 01:11:51.655441 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:11:52.245076 dockerd[1670]: time="2024-12-13T01:11:52.244992440Z" level=info msg="Starting up" Dec 13 01:11:52.247358 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:11:52.254998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:11:52.468006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:11:52.473486 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:11:52.859146 kubelet[1702]: E1213 01:11:52.859067 1702 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:11:52.866298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:11:52.866509 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:11:52.878355 dockerd[1670]: time="2024-12-13T01:11:52.878298715Z" level=info msg="Loading containers: start." Dec 13 01:11:53.139125 kernel: Initializing XFRM netlink socket Dec 13 01:11:53.218053 systemd-networkd[1396]: docker0: Link UP Dec 13 01:11:53.238622 dockerd[1670]: time="2024-12-13T01:11:53.238570173Z" level=info msg="Loading containers: done." Dec 13 01:11:53.252058 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4021719785-merged.mount: Deactivated successfully. Dec 13 01:11:53.256566 dockerd[1670]: time="2024-12-13T01:11:53.256520316Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:11:53.256856 dockerd[1670]: time="2024-12-13T01:11:53.256653285Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:11:53.256856 dockerd[1670]: time="2024-12-13T01:11:53.256767539Z" level=info msg="Daemon has completed initialization" Dec 13 01:11:53.301039 dockerd[1670]: time="2024-12-13T01:11:53.300953103Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:11:53.301215 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:11:54.127275 containerd[1459]: time="2024-12-13T01:11:54.127234906Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:11:54.738868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3683180736.mount: Deactivated successfully. 
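The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; the service keeps restarting until provisioning writes one. A hedged sketch of a minimal KubeletConfiguration of the kind that file holds. The field names are upstream kubelet config fields and the values mirror what this log reports elsewhere (systemd cgroup driver, /etc/kubernetes/manifests static pod path, containerd socket), but the actual contents of the missing file on this host are unknown:

# Minimal KubeletConfiguration sketch; values are assumptions inferred
# from this log, not the real contents of /var/lib/kubelet/config.yaml.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
"""
print(MINIMAL_KUBELET_CONFIG)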
Dec 13 01:11:55.738153 containerd[1459]: time="2024-12-13T01:11:55.738076790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:55.738893 containerd[1459]: time="2024-12-13T01:11:55.738824602Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 01:11:55.740402 containerd[1459]: time="2024-12-13T01:11:55.740363448Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:55.743430 containerd[1459]: time="2024-12-13T01:11:55.743387479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:55.744394 containerd[1459]: time="2024-12-13T01:11:55.744340787Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.617048012s" Dec 13 01:11:55.744451 containerd[1459]: time="2024-12-13T01:11:55.744398996Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:11:55.769574 containerd[1459]: time="2024-12-13T01:11:55.769531158Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:11:57.293872 containerd[1459]: time="2024-12-13T01:11:57.293803363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:57.294648 containerd[1459]: time="2024-12-13T01:11:57.294605978Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 01:11:57.295723 containerd[1459]: time="2024-12-13T01:11:57.295683729Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:57.298361 containerd[1459]: time="2024-12-13T01:11:57.298307379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:57.299600 containerd[1459]: time="2024-12-13T01:11:57.299557133Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.52997537s" Dec 13 01:11:57.299647 containerd[1459]: time="2024-12-13T01:11:57.299599302Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 01:11:57.323488 
containerd[1459]: time="2024-12-13T01:11:57.323452585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:11:58.309315 containerd[1459]: time="2024-12-13T01:11:58.309252893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:58.310162 containerd[1459]: time="2024-12-13T01:11:58.310080335Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 01:11:58.311408 containerd[1459]: time="2024-12-13T01:11:58.311364032Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:58.314181 containerd[1459]: time="2024-12-13T01:11:58.314131863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:11:58.315335 containerd[1459]: time="2024-12-13T01:11:58.315303921Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 991.821269ms" Dec 13 01:11:58.315386 containerd[1459]: time="2024-12-13T01:11:58.315335149Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:11:58.337514 containerd[1459]: time="2024-12-13T01:11:58.337483195Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:11:59.980696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4259479316.mount: Deactivated successfully. 
Dec 13 01:12:00.598258 containerd[1459]: time="2024-12-13T01:12:00.598205496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:00.599175 containerd[1459]: time="2024-12-13T01:12:00.599139017Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 01:12:00.600619 containerd[1459]: time="2024-12-13T01:12:00.600588956Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:00.602689 containerd[1459]: time="2024-12-13T01:12:00.602616739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:00.603473 containerd[1459]: time="2024-12-13T01:12:00.603427199Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.265911834s" Dec 13 01:12:00.603473 containerd[1459]: time="2024-12-13T01:12:00.603461773Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:12:00.626734 containerd[1459]: time="2024-12-13T01:12:00.626260890Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:12:01.223010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862744384.mount: Deactivated successfully. 
Dec 13 01:12:02.564272 containerd[1459]: time="2024-12-13T01:12:02.564219464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:02.589246 containerd[1459]: time="2024-12-13T01:12:02.589159305Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:12:02.596741 containerd[1459]: time="2024-12-13T01:12:02.596681121Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:02.642790 containerd[1459]: time="2024-12-13T01:12:02.642753503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:02.643861 containerd[1459]: time="2024-12-13T01:12:02.643803222Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.017505393s" Dec 13 01:12:02.643861 containerd[1459]: time="2024-12-13T01:12:02.643853496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:12:02.667862 containerd[1459]: time="2024-12-13T01:12:02.667756142Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:12:02.910933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:12:02.925395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:12:03.096957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:12:03.102228 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:12:03.192339 kubelet[1996]: E1213 01:12:03.192173 1996 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:12:03.196510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:12:03.196758 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:12:04.146836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2244989065.mount: Deactivated successfully. 
Dec 13 01:12:04.310696 containerd[1459]: time="2024-12-13T01:12:04.310622713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:04.317230 containerd[1459]: time="2024-12-13T01:12:04.317174420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:12:04.326955 containerd[1459]: time="2024-12-13T01:12:04.326906692Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:04.341007 containerd[1459]: time="2024-12-13T01:12:04.340936713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:04.342009 containerd[1459]: time="2024-12-13T01:12:04.341957177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.674153064s" Dec 13 01:12:04.342079 containerd[1459]: time="2024-12-13T01:12:04.342006119Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:12:04.364594 containerd[1459]: time="2024-12-13T01:12:04.364557100Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:12:04.907221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3992209829.mount: Deactivated successfully. Dec 13 01:12:07.761395 containerd[1459]: time="2024-12-13T01:12:07.761308200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:07.762123 containerd[1459]: time="2024-12-13T01:12:07.762042908Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 01:12:07.763425 containerd[1459]: time="2024-12-13T01:12:07.763389223Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:07.766579 containerd[1459]: time="2024-12-13T01:12:07.766545633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:07.767704 containerd[1459]: time="2024-12-13T01:12:07.767657418Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.40306393s" Dec 13 01:12:07.767704 containerd[1459]: time="2024-12-13T01:12:07.767703103Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 01:12:10.467766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
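Each "Pulled image" line above reports an exact image size and pull duration, which allows a rough effective-throughput estimate. A sketch using the sizes and times as logged; this treats the reported image size as the bytes transferred, which slightly overstates registries that serve compressed layers:

# Effective pull throughput from the containerd "Pulled image" lines:
# (reported size in bytes, wall-clock pull duration in seconds).
pulls = {
    "kube-apiserver:v1.30.8":          (32_672_442, 1.617048012),
    "kube-controller-manager:v1.30.8": (31_051_521, 1.52997537),
    "kube-scheduler:v1.30.8":          (19_228_165, 0.991821269),
    "kube-proxy:v1.30.8":              (29_056_489, 2.265911834),
    "coredns:v1.11.1":                 (18_182_961, 2.017505393),
    "pause:3.9":                       (   321_520, 1.674153064),
    "etcd:3.5.12-0":                   (57_236_178, 3.40306393),
}
for image, (size, seconds) in pulls.items():
    print(f"{image}: {size / seconds / 2**20:.1f} MiB/s")
# Tiny images like pause:3.9 show low apparent throughput because the
# fixed registry round-trip cost dominates the transfer time.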
Dec 13 01:12:10.479296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:12:10.495234 systemd[1]: Reloading requested from client PID 2144 ('systemctl') (unit session-9.scope)... Dec 13 01:12:10.495249 systemd[1]: Reloading... Dec 13 01:12:10.588124 zram_generator::config[2183]: No configuration found. Dec 13 01:12:11.269354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:12:11.346945 systemd[1]: Reloading finished in 851 ms. Dec 13 01:12:11.414262 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:12:11.419295 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:12:11.419562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:12:11.433534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:12:11.582119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:12:11.588259 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:12:11.631247 kubelet[2233]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:12:11.631247 kubelet[2233]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:12:11.631247 kubelet[2233]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:12:11.632236 kubelet[2233]: I1213 01:12:11.632187 2233 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:12:11.957917 kubelet[2233]: I1213 01:12:11.957863 2233 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:12:11.957917 kubelet[2233]: I1213 01:12:11.957897 2233 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:12:11.958165 kubelet[2233]: I1213 01:12:11.958141 2233 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:12:11.976385 kubelet[2233]: I1213 01:12:11.976315 2233 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:12:11.977199 kubelet[2233]: E1213 01:12:11.977172 2233 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:11.989118 kubelet[2233]: I1213 01:12:11.989081 2233 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:12:11.991116 kubelet[2233]: I1213 01:12:11.991067 2233 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:12:11.991302 kubelet[2233]: I1213 01:12:11.991115 2233 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:12:11.991802 kubelet[2233]: I1213 01:12:11.991776 2233 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:12:11.991802 kubelet[2233]: I1213 01:12:11.991792 2233 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:12:11.991985 kubelet[2233]: I1213 01:12:11.991959 2233 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:12:11.992739 kubelet[2233]: I1213 01:12:11.992712 2233 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:12:11.992739 kubelet[2233]: I1213 01:12:11.992733 2233 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:12:11.992813 kubelet[2233]: I1213 01:12:11.992762 2233 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:12:11.992813 kubelet[2233]: I1213 01:12:11.992783 2233 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:12:11.994167 kubelet[2233]: W1213 01:12:11.994032 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:11.994167 kubelet[2233]: E1213 01:12:11.994128 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:11.995338 kubelet[2233]: W1213 01:12:11.995303 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:11.995385 kubelet[2233]: E1213 01:12:11.995344 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:11.997534 kubelet[2233]: I1213 01:12:11.997514 2233 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:12:11.998840 kubelet[2233]: I1213 01:12:11.998815 2233 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:12:11.998924 kubelet[2233]: W1213 01:12:11.998906 2233 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:12:11.999730 kubelet[2233]: I1213 01:12:11.999716 2233 server.go:1264] "Started kubelet" Dec 13 01:12:11.999844 kubelet[2233]: I1213 01:12:11.999814 2233 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:12:12.000431 kubelet[2233]: I1213 01:12:12.000036 2233 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:12:12.002861 kubelet[2233]: I1213 01:12:12.002839 2233 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:12:12.039827 kubelet[2233]: E1213 01:12:12.009770 2233 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:12:12.039827 kubelet[2233]: E1213 01:12:12.014241 2233 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.86:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.86:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810976896c75d66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:12:11.999681894 +0000 UTC m=+0.406443407,LastTimestamp:2024-12-13 01:12:11.999681894 +0000 UTC m=+0.406443407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:12:12.074679 kubelet[2233]: I1213 01:12:12.074606 2233 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:12:12.076785 kubelet[2233]: I1213 01:12:12.074828 2233 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:12:12.076785 kubelet[2233]: I1213 01:12:12.075106 2233 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:12:12.076966 kubelet[2233]: I1213 01:12:12.076941 2233 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:12:12.077038 kubelet[2233]: I1213 01:12:12.077014 2233 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:12:12.077419 kubelet[2233]: E1213 01:12:12.077386 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:12:12.078405 kubelet[2233]: W1213 01:12:12.078197 2233 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:12.078462 kubelet[2233]: E1213 01:12:12.078434 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:12.079017 kubelet[2233]: E1213 01:12:12.078574 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="200ms" Dec 13 01:12:12.079347 kubelet[2233]: I1213 01:12:12.079313 2233 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:12:12.079440 kubelet[2233]: I1213 01:12:12.079417 2233 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:12:12.080837 kubelet[2233]: I1213 01:12:12.080735 2233 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:12:12.095620 kubelet[2233]: I1213 01:12:12.095288 2233 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:12:12.095620 kubelet[2233]: I1213 01:12:12.095306 2233 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:12:12.095620 kubelet[2233]: I1213 01:12:12.095330 2233 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:12:12.095620 kubelet[2233]: I1213 01:12:12.095445 2233 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:12:12.097285 kubelet[2233]: I1213 01:12:12.097263 2233 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:12:12.097874 kubelet[2233]: I1213 01:12:12.097380 2233 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:12:12.097874 kubelet[2233]: I1213 01:12:12.097411 2233 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:12:12.097874 kubelet[2233]: E1213 01:12:12.097451 2233 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:12:12.099084 kubelet[2233]: W1213 01:12:12.099016 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:12.099152 kubelet[2233]: E1213 01:12:12.099133 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:12.179811 kubelet[2233]: I1213 01:12:12.179768 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:12:12.180284 kubelet[2233]: E1213 01:12:12.180234 2233 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Dec 13 01:12:12.198403 kubelet[2233]: E1213 01:12:12.198335 2233 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:12:12.280548 kubelet[2233]: E1213 01:12:12.280387 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="400ms" Dec 13 01:12:12.382203 kubelet[2233]: I1213 01:12:12.382162 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:12:12.382533 kubelet[2233]: E1213 01:12:12.382498 2233 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Dec 13 01:12:12.398751 kubelet[2233]: E1213 01:12:12.398695 2233 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:12:12.682050 kubelet[2233]: E1213 01:12:12.681987 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="800ms" Dec 13 01:12:12.744454 kubelet[2233]: I1213 01:12:12.744394 2233 policy_none.go:49] "None policy: Start" Dec 13 01:12:12.745468 kubelet[2233]: I1213 01:12:12.745432 2233 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:12:12.745468 kubelet[2233]: I1213 01:12:12.745475 2233 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:12:12.784483 kubelet[2233]: I1213 01:12:12.784430 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:12:12.784923 kubelet[2233]: E1213 01:12:12.784882 2233 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Dec 13 01:12:12.799050 kubelet[2233]: E1213 01:12:12.798994 2233 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:12:12.818156 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:12:12.830779 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:12:12.833981 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:12:12.843272 kubelet[2233]: I1213 01:12:12.843229 2233 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:12:12.843698 kubelet[2233]: I1213 01:12:12.843532 2233 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:12:12.843764 kubelet[2233]: I1213 01:12:12.843715 2233 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:12:12.848706 kubelet[2233]: E1213 01:12:12.848675 2233 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:12:12.968252 kubelet[2233]: W1213 01:12:12.968065 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:12.968252 kubelet[2233]: E1213 01:12:12.968166 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:13.040111 kubelet[2233]: W1213 01:12:13.040003 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:13.040111 kubelet[2233]: E1213 01:12:13.040085 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:13.198529 kubelet[2233]: W1213 01:12:13.198471 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:13.198529 kubelet[2233]: E1213 01:12:13.198515 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:13.483491 kubelet[2233]: E1213 01:12:13.483419 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="1.6s" Dec 13 01:12:13.495275 
kubelet[2233]: W1213 01:12:13.495173 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:13.495275 kubelet[2233]: E1213 01:12:13.495271 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:13.587086 kubelet[2233]: I1213 01:12:13.587032 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:12:13.587500 kubelet[2233]: E1213 01:12:13.587451 2233 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Dec 13 01:12:13.599658 kubelet[2233]: I1213 01:12:13.599602 2233 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:12:13.601013 kubelet[2233]: I1213 01:12:13.600960 2233 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:12:13.602158 kubelet[2233]: I1213 01:12:13.602123 2233 topology_manager.go:215] "Topology Admit Handler" podUID="da22929ce233d96940740db6cff5fac1" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:12:13.612612 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Dec 13 01:12:13.613385 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Dec 13 01:12:13.615688 systemd[1]: Created slice kubepods-burstable-podda22929ce233d96940740db6cff5fac1.slice - libcontainer container kubepods-burstable-podda22929ce233d96940740db6cff5fac1.slice. 
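[Editor's note] The repeated "Failed to ensure lease exists, will retry" errors above show the controller doubling its retry interval on each failure: 200ms, 400ms, 800ms, then 1.6s (and 3.2s further down). A minimal Go sketch of that capped-doubling retry pattern follows; `ensureLease` and the 7s cap are illustrative assumptions, not the kubelet's actual code or constants.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the real lease API call; here it always
// fails, the way it does while the API server is still unreachable.
func ensureLease() error {
	return errors.New("dial tcp 10.0.0.86:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond // first retry interval seen in the log
	maxInterval := 7 * time.Second     // assumed cap, for illustration only

	for attempt := 1; attempt <= 5; attempt++ {
		err := ensureLease()
		if err == nil {
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, interval)
		time.Sleep(interval)
		// Double the wait each time: 200ms, 400ms, 800ms, 1.6s, 3.2s,
		// matching the interval= values logged above.
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```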
Dec 13 01:12:13.686063 kubelet[2233]: I1213 01:12:13.686007 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:12:13.686063 kubelet[2233]: I1213 01:12:13.686054 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:12:13.686520 kubelet[2233]: I1213 01:12:13.686085 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da22929ce233d96940740db6cff5fac1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"da22929ce233d96940740db6cff5fac1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:12:13.686520 kubelet[2233]: I1213 01:12:13.686147 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da22929ce233d96940740db6cff5fac1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"da22929ce233d96940740db6cff5fac1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:12:13.686520 kubelet[2233]: I1213 01:12:13.686173 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:12:13.686520 kubelet[2233]: I1213 01:12:13.686201 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:12:13.686520 kubelet[2233]: I1213 01:12:13.686226 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:12:13.686660 kubelet[2233]: I1213 01:12:13.686242 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:12:13.686660 kubelet[2233]: I1213 01:12:13.686255 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da22929ce233d96940740db6cff5fac1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"da22929ce233d96940740db6cff5fac1\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 01:12:13.933489 kubelet[2233]: E1213 01:12:13.933441 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:13.933623 kubelet[2233]: E1213 01:12:13.933510 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:13.933849 kubelet[2233]: E1213 01:12:13.933820 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:13.934334 containerd[1459]: time="2024-12-13T01:12:13.934275385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:da22929ce233d96940740db6cff5fac1,Namespace:kube-system,Attempt:0,}" Dec 13 01:12:13.934799 containerd[1459]: time="2024-12-13T01:12:13.934369813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 01:12:13.934799 containerd[1459]: time="2024-12-13T01:12:13.934390131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 01:12:14.174259 kubelet[2233]: E1213 01:12:14.174206 2233 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:14.616268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447648281.mount: Deactivated successfully. 
Dec 13 01:12:14.624981 containerd[1459]: time="2024-12-13T01:12:14.624929764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:12:14.625830 containerd[1459]: time="2024-12-13T01:12:14.625789059Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:12:14.626774 containerd[1459]: time="2024-12-13T01:12:14.626712414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:12:14.627673 containerd[1459]: time="2024-12-13T01:12:14.627639347Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:12:14.628395 containerd[1459]: time="2024-12-13T01:12:14.628353827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:12:14.629219 containerd[1459]: time="2024-12-13T01:12:14.629191782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:12:14.630177 containerd[1459]: time="2024-12-13T01:12:14.630125076Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:12:14.634189 containerd[1459]: time="2024-12-13T01:12:14.634137271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:12:14.635000 containerd[1459]: time="2024-12-13T01:12:14.634962361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 700.500084ms" Dec 13 01:12:14.636176 containerd[1459]: time="2024-12-13T01:12:14.636143794Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 701.787116ms" Dec 13 01:12:14.637431 containerd[1459]: time="2024-12-13T01:12:14.637399548Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 702.797536ms" Dec 13 01:12:14.934364 containerd[1459]: time="2024-12-13T01:12:14.934229987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:12:14.934364 containerd[1459]: time="2024-12-13T01:12:14.934293948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:12:14.934364 containerd[1459]: time="2024-12-13T01:12:14.934319566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:14.935066 containerd[1459]: time="2024-12-13T01:12:14.934413855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:14.935988 containerd[1459]: time="2024-12-13T01:12:14.935746123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:12:14.935988 containerd[1459]: time="2024-12-13T01:12:14.935838076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:12:14.935988 containerd[1459]: time="2024-12-13T01:12:14.935854929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:14.936241 containerd[1459]: time="2024-12-13T01:12:14.935948165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:14.938082 containerd[1459]: time="2024-12-13T01:12:14.937168021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:12:14.938082 containerd[1459]: time="2024-12-13T01:12:14.937243694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:12:14.938082 containerd[1459]: time="2024-12-13T01:12:14.937262920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:14.938082 containerd[1459]: time="2024-12-13T01:12:14.937363951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:14.962321 systemd[1]: Started cri-containerd-02c011a69852086a8d36350acafd118b29335fc6070491a738dcf3f4cff51333.scope - libcontainer container 02c011a69852086a8d36350acafd118b29335fc6070491a738dcf3f4cff51333. Dec 13 01:12:14.966780 systemd[1]: Started cri-containerd-7f880396a23d00540f9da59fd2d5b12fc52a4d7a0eeac700591abfbac752bcd7.scope - libcontainer container 7f880396a23d00540f9da59fd2d5b12fc52a4d7a0eeac700591abfbac752bcd7. Dec 13 01:12:14.969995 systemd[1]: Started cri-containerd-7d1c01788402e9c16b17ec993c0434ff8ddf8752ec2c1d30a2646fd000d4ab14.scope - libcontainer container 7d1c01788402e9c16b17ec993c0434ff8ddf8752ec2c1d30a2646fd000d4ab14. 
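[Editor's note] Earlier in this run the crio container factory failed to register ("dial unix /var/run/crio/crio.sock: connect: no such file or directory") while the containerd factory registered successfully. A stdlib Go probe loosely analogous to that check, dialing each runtime's unix socket to see whether anything answers; the socket paths are taken from the log, and the real factory registration does more than a bare dial.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	sockets := []string{
		"/run/containerd/containerd.sock", // present on this host
		"/var/run/crio/crio.sock",         // absent, hence the failed registration
	}
	for _, path := range sockets {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", path, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", path)
	}
}
```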
Dec 13 01:12:15.045523 containerd[1459]: time="2024-12-13T01:12:15.045484815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"02c011a69852086a8d36350acafd118b29335fc6070491a738dcf3f4cff51333\"" Dec 13 01:12:15.047544 kubelet[2233]: E1213 01:12:15.047502 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:15.050763 containerd[1459]: time="2024-12-13T01:12:15.050580765Z" level=info msg="CreateContainer within sandbox \"02c011a69852086a8d36350acafd118b29335fc6070491a738dcf3f4cff51333\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:12:15.051605 containerd[1459]: time="2024-12-13T01:12:15.051441691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d1c01788402e9c16b17ec993c0434ff8ddf8752ec2c1d30a2646fd000d4ab14\"" Dec 13 01:12:15.052154 kubelet[2233]: E1213 01:12:15.052126 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:15.054563 containerd[1459]: time="2024-12-13T01:12:15.054527292Z" level=info msg="CreateContainer within sandbox \"7d1c01788402e9c16b17ec993c0434ff8ddf8752ec2c1d30a2646fd000d4ab14\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:12:15.056196 containerd[1459]: time="2024-12-13T01:12:15.056132846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:da22929ce233d96940740db6cff5fac1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f880396a23d00540f9da59fd2d5b12fc52a4d7a0eeac700591abfbac752bcd7\"" Dec 13 01:12:15.056944 kubelet[2233]: E1213 01:12:15.056875 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:15.059427 containerd[1459]: time="2024-12-13T01:12:15.059400831Z" level=info msg="CreateContainer within sandbox \"7f880396a23d00540f9da59fd2d5b12fc52a4d7a0eeac700591abfbac752bcd7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:12:15.074512 containerd[1459]: time="2024-12-13T01:12:15.074457021Z" level=info msg="CreateContainer within sandbox \"02c011a69852086a8d36350acafd118b29335fc6070491a738dcf3f4cff51333\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"43e68db6a1f9f7316a032ef6d0c997f01ed9a120cea9d87c15825c180e8b3a92\"" Dec 13 01:12:15.075026 containerd[1459]: time="2024-12-13T01:12:15.074991381Z" level=info msg="StartContainer for \"43e68db6a1f9f7316a032ef6d0c997f01ed9a120cea9d87c15825c180e8b3a92\"" Dec 13 01:12:15.080227 containerd[1459]: time="2024-12-13T01:12:15.080181929Z" level=info msg="CreateContainer within sandbox \"7d1c01788402e9c16b17ec993c0434ff8ddf8752ec2c1d30a2646fd000d4ab14\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed14ce794847f695bb9b79cc515e2cfcc7fb16e1a6770d060a7055c5f47f013d\"" Dec 13 01:12:15.080774 containerd[1459]: time="2024-12-13T01:12:15.080746766Z" level=info msg="StartContainer for \"ed14ce794847f695bb9b79cc515e2cfcc7fb16e1a6770d060a7055c5f47f013d\"" Dec 13 
01:12:15.084677 kubelet[2233]: E1213 01:12:15.084623 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="3.2s" Dec 13 01:12:15.085129 containerd[1459]: time="2024-12-13T01:12:15.084987920Z" level=info msg="CreateContainer within sandbox \"7f880396a23d00540f9da59fd2d5b12fc52a4d7a0eeac700591abfbac752bcd7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"38ff699d6d7dc3137ed0a4ef045c2c980c2fd5875063edd7e9a5564a1b22ad42\"" Dec 13 01:12:15.085440 containerd[1459]: time="2024-12-13T01:12:15.085408635Z" level=info msg="StartContainer for \"38ff699d6d7dc3137ed0a4ef045c2c980c2fd5875063edd7e9a5564a1b22ad42\"" Dec 13 01:12:15.107324 systemd[1]: Started cri-containerd-43e68db6a1f9f7316a032ef6d0c997f01ed9a120cea9d87c15825c180e8b3a92.scope - libcontainer container 43e68db6a1f9f7316a032ef6d0c997f01ed9a120cea9d87c15825c180e8b3a92. Dec 13 01:12:15.111619 systemd[1]: Started cri-containerd-ed14ce794847f695bb9b79cc515e2cfcc7fb16e1a6770d060a7055c5f47f013d.scope - libcontainer container ed14ce794847f695bb9b79cc515e2cfcc7fb16e1a6770d060a7055c5f47f013d. Dec 13 01:12:15.125355 systemd[1]: Started cri-containerd-38ff699d6d7dc3137ed0a4ef045c2c980c2fd5875063edd7e9a5564a1b22ad42.scope - libcontainer container 38ff699d6d7dc3137ed0a4ef045c2c980c2fd5875063edd7e9a5564a1b22ad42. Dec 13 01:12:15.190012 kubelet[2233]: I1213 01:12:15.189895 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:12:15.192297 kubelet[2233]: E1213 01:12:15.192252 2233 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Dec 13 01:12:15.200072 containerd[1459]: time="2024-12-13T01:12:15.200028889Z" level=info msg="StartContainer for \"ed14ce794847f695bb9b79cc515e2cfcc7fb16e1a6770d060a7055c5f47f013d\" returns successfully" Dec 13 01:12:15.209243 containerd[1459]: time="2024-12-13T01:12:15.208938916Z" level=info msg="StartContainer for \"43e68db6a1f9f7316a032ef6d0c997f01ed9a120cea9d87c15825c180e8b3a92\" returns successfully" Dec 13 01:12:15.223412 containerd[1459]: time="2024-12-13T01:12:15.223349426Z" level=info msg="StartContainer for \"38ff699d6d7dc3137ed0a4ef045c2c980c2fd5875063edd7e9a5564a1b22ad42\" returns successfully" Dec 13 01:12:15.280574 kubelet[2233]: W1213 01:12:15.280494 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:15.280574 kubelet[2233]: E1213 01:12:15.280560 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 13 01:12:16.127469 kubelet[2233]: E1213 01:12:16.127427 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:16.129875 kubelet[2233]: E1213 01:12:16.129791 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Dec 13 01:12:16.131312 kubelet[2233]: E1213 01:12:16.131290 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:17.000170 kubelet[2233]: I1213 01:12:17.000107 2233 apiserver.go:52] "Watching apiserver" Dec 13 01:12:17.077218 kubelet[2233]: I1213 01:12:17.077173 2233 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:12:17.133551 kubelet[2233]: E1213 01:12:17.133521 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:17.134054 kubelet[2233]: E1213 01:12:17.133630 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:17.134054 kubelet[2233]: E1213 01:12:17.133776 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:17.174538 kubelet[2233]: E1213 01:12:17.174491 2233 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 01:12:17.547174 kubelet[2233]: E1213 01:12:17.547125 2233 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 01:12:17.979352 kubelet[2233]: E1213 01:12:17.979316 2233 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 01:12:18.135326 kubelet[2233]: E1213 01:12:18.135287 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:18.288863 kubelet[2233]: E1213 01:12:18.288743 2233 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:12:18.394582 kubelet[2233]: I1213 01:12:18.394543 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:12:18.399633 kubelet[2233]: I1213 01:12:18.399596 2233 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:12:18.682833 systemd[1]: Reloading requested from client PID 2519 ('systemctl') (unit session-9.scope)... Dec 13 01:12:18.682850 systemd[1]: Reloading... Dec 13 01:12:18.768139 zram_generator::config[2561]: No configuration found. Dec 13 01:12:18.929923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:12:19.030265 systemd[1]: Reloading finished in 347 ms. Dec 13 01:12:19.076444 kubelet[2233]: I1213 01:12:19.076365 2233 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:12:19.076477 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:12:19.093614 systemd[1]: kubelet.service: Deactivated successfully. 
Dec 13 01:12:19.093926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:12:19.094006 systemd[1]: kubelet.service: Consumed 1.151s CPU time, 117.8M memory peak, 0B memory swap peak. Dec 13 01:12:19.100687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:12:19.293902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:12:19.300496 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:12:19.360716 kubelet[2603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:12:19.360716 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:12:19.360716 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:12:19.361247 kubelet[2603]: I1213 01:12:19.360751 2603 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:12:19.365330 kubelet[2603]: I1213 01:12:19.365285 2603 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:12:19.365330 kubelet[2603]: I1213 01:12:19.365311 2603 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:12:19.365513 kubelet[2603]: I1213 01:12:19.365492 2603 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:12:19.366676 kubelet[2603]: I1213 01:12:19.366641 2603 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:12:19.367701 kubelet[2603]: I1213 01:12:19.367675 2603 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:12:19.376672 kubelet[2603]: I1213 01:12:19.376638 2603 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:12:19.376992 kubelet[2603]: I1213 01:12:19.376935 2603 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:12:19.377236 kubelet[2603]: I1213 01:12:19.376978 2603 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:12:19.377335 kubelet[2603]: I1213 01:12:19.377240 2603 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:12:19.377335 kubelet[2603]: I1213 01:12:19.377254 2603 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:12:19.377335 kubelet[2603]: I1213 01:12:19.377313 2603 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:12:19.377470 kubelet[2603]: I1213 01:12:19.377443 2603 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:12:19.377470 kubelet[2603]: I1213 01:12:19.377464 2603 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:12:19.377530 kubelet[2603]: I1213 01:12:19.377491 2603 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:12:19.377530 kubelet[2603]: I1213 01:12:19.377510 2603 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:12:19.378457 kubelet[2603]: I1213 01:12:19.378350 2603 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:12:19.378668 kubelet[2603]: I1213 01:12:19.378616 2603 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:12:19.379148 kubelet[2603]: I1213 01:12:19.379121 2603 server.go:1264] "Started kubelet" Dec 13 01:12:19.380692 kubelet[2603]: I1213 01:12:19.380578 2603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:12:19.382984 kubelet[2603]: I1213 01:12:19.380896 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:12:19.386242 kubelet[2603]: I1213 01:12:19.383541 2603 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:12:19.386242 kubelet[2603]: I1213 01:12:19.383605 2603 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:12:19.386242 kubelet[2603]: I1213 01:12:19.384704 2603 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:12:19.387920 kubelet[2603]: I1213 01:12:19.387887 2603 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:12:19.389455 kubelet[2603]: I1213 01:12:19.389422 2603 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:12:19.389514 kubelet[2603]: I1213 01:12:19.389478 2603 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:12:19.392404 kubelet[2603]: I1213 01:12:19.392362 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:12:19.393256 kubelet[2603]: I1213 01:12:19.393236 2603 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:12:19.393413 kubelet[2603]: I1213 01:12:19.393392 2603 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:12:19.393740 kubelet[2603]: I1213 01:12:19.393715 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:12:19.393783 kubelet[2603]: I1213 01:12:19.393748 2603 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:12:19.393783 kubelet[2603]: I1213 01:12:19.393771 2603 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:12:19.393841 kubelet[2603]: E1213 01:12:19.393813 2603 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:12:19.396684 kubelet[2603]: I1213 01:12:19.396658 2603 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:12:19.399232 kubelet[2603]: E1213 01:12:19.399201 2603 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:12:19.434036 kubelet[2603]: I1213 01:12:19.433993 2603 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:12:19.434036 kubelet[2603]: I1213 01:12:19.434014 2603 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:12:19.434036 kubelet[2603]: I1213 01:12:19.434035 2603 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:12:19.434309 kubelet[2603]: I1213 01:12:19.434285 2603 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:12:19.434338 kubelet[2603]: I1213 01:12:19.434310 2603 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:12:19.434338 kubelet[2603]: I1213 01:12:19.434336 2603 policy_none.go:49] "None policy: Start" Dec 13 01:12:19.435073 kubelet[2603]: I1213 01:12:19.435045 2603 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:12:19.435123 kubelet[2603]: I1213 01:12:19.435107 2603 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:12:19.435370 kubelet[2603]: I1213 01:12:19.435342 2603 state_mem.go:75] "Updated machine memory state" Dec 13 01:12:19.441702 kubelet[2603]: I1213 01:12:19.441670 2603 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:12:19.441951 kubelet[2603]: I1213 01:12:19.441896 2603 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:12:19.442652 kubelet[2603]: I1213 01:12:19.442052 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:12:19.495188 kubelet[2603]: I1213 01:12:19.495064 2603 topology_manager.go:215] "Topology Admit Handler" podUID="da22929ce233d96940740db6cff5fac1" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:12:19.495188 kubelet[2603]: I1213 01:12:19.495197 2603 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:12:19.495407 kubelet[2603]: I1213 01:12:19.495268 2603 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:12:19.496066 kubelet[2603]: I1213 01:12:19.495862 2603 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:12:19.691174 kubelet[2603]: I1213 01:12:19.691124 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:12:19.691174 kubelet[2603]: I1213 01:12:19.691170 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:12:19.691408 kubelet[2603]: I1213 01:12:19.691190 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:12:19.691408 kubelet[2603]: I1213 01:12:19.691207 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:12:19.691408 kubelet[2603]: I1213 01:12:19.691231 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da22929ce233d96940740db6cff5fac1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"da22929ce233d96940740db6cff5fac1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:12:19.691408 kubelet[2603]: I1213 01:12:19.691246 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da22929ce233d96940740db6cff5fac1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"da22929ce233d96940740db6cff5fac1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:12:19.691408 kubelet[2603]: I1213 01:12:19.691263 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da22929ce233d96940740db6cff5fac1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"da22929ce233d96940740db6cff5fac1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:12:19.691527 kubelet[2603]: I1213 01:12:19.691327 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:12:19.691527 kubelet[2603]: I1213 01:12:19.691405 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:12:20.042263 kubelet[2603]: E1213 01:12:20.041500 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:20.042263 kubelet[2603]: E1213 01:12:20.041872 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:20.042604 kubelet[2603]: E1213 01:12:20.042477 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:20.112409 kubelet[2603]: I1213 01:12:20.112354 2603 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:12:20.112587 kubelet[2603]: I1213 01:12:20.112475 2603 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:12:20.379313 kubelet[2603]: 
I1213 01:12:20.379189 2603 apiserver.go:52] "Watching apiserver" Dec 13 01:12:20.390219 kubelet[2603]: I1213 01:12:20.390171 2603 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:12:20.411026 kubelet[2603]: E1213 01:12:20.410951 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:20.478617 kubelet[2603]: E1213 01:12:20.478405 2603 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:12:20.479371 kubelet[2603]: E1213 01:12:20.478859 2603 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 01:12:20.479371 kubelet[2603]: E1213 01:12:20.479288 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:20.480225 kubelet[2603]: E1213 01:12:20.479836 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:20.505252 sudo[2640]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:12:20.505697 sudo[2640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:12:20.508116 kubelet[2603]: I1213 01:12:20.508048 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.50802654 podStartE2EDuration="1.50802654s" podCreationTimestamp="2024-12-13 01:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:12:20.507048396 +0000 UTC m=+1.200278347" watchObservedRunningTime="2024-12-13 01:12:20.50802654 +0000 UTC m=+1.201256491" Dec 13 01:12:20.508280 kubelet[2603]: I1213 01:12:20.508178 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.508173397 podStartE2EDuration="1.508173397s" podCreationTimestamp="2024-12-13 01:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:12:20.496882206 +0000 UTC m=+1.190112157" watchObservedRunningTime="2024-12-13 01:12:20.508173397 +0000 UTC m=+1.201403348" Dec 13 01:12:20.516656 kubelet[2603]: I1213 01:12:20.516586 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.516552898 podStartE2EDuration="1.516552898s" podCreationTimestamp="2024-12-13 01:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:12:20.515907631 +0000 UTC m=+1.209137582" watchObservedRunningTime="2024-12-13 01:12:20.516552898 +0000 UTC m=+1.209782849" Dec 13 01:12:20.999883 sudo[2640]: pam_unix(sudo:session): session closed for user root Dec 13 01:12:21.412711 kubelet[2603]: E1213 01:12:21.412672 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:21.413392 kubelet[2603]: E1213 01:12:21.413359 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:21.413898 kubelet[2603]: E1213 01:12:21.413869 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:22.414590 kubelet[2603]: E1213 01:12:22.414541 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:22.564657 sudo[1652]: pam_unix(sudo:session): session closed for user root Dec 13 01:12:22.566612 sshd[1649]: pam_unix(sshd:session): session closed for user core Dec 13 01:12:22.570859 systemd[1]: sshd@8-10.0.0.86:22-10.0.0.1:48246.service: Deactivated successfully. Dec 13 01:12:22.573163 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:12:22.573352 systemd[1]: session-9.scope: Consumed 5.406s CPU time, 194.2M memory peak, 0B memory swap peak. Dec 13 01:12:22.573867 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:12:22.575002 systemd-logind[1448]: Removed session 9. Dec 13 01:12:23.416245 kubelet[2603]: E1213 01:12:23.416196 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:24.417080 kubelet[2603]: E1213 01:12:24.417044 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:24.768373 update_engine[1453]: I20241213 01:12:24.768201 1453 update_attempter.cc:509] Updating boot flags... Dec 13 01:12:24.933132 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2690) Dec 13 01:12:24.974131 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2689) Dec 13 01:12:30.426865 kubelet[2603]: E1213 01:12:30.426823 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:31.586391 kubelet[2603]: E1213 01:12:31.586356 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:32.429350 kubelet[2603]: E1213 01:12:32.429300 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:33.471861 kubelet[2603]: I1213 01:12:33.471814 2603 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:12:33.472316 containerd[1459]: time="2024-12-13T01:12:33.472278906Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
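[Editor's note] The runtime-config update above pushes Pod CIDR 192.168.0.0/24 to the container runtime over CRI. For scale, a /24 gives this node 256 addresses to allocate from; a minimal Go illustration using the standard library (not kubelet code):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The Pod CIDR the kubelet propagates to the runtime in the log above.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// A /24 leaves 2^(32-24) = 256 addresses for pod IPs on this node.
	fmt.Printf("network %s: /%d of %d bits, %d addresses\n",
		ipnet, ones, bits, 1<<(bits-ones))
}
```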
Dec 13 01:12:33.472573 kubelet[2603]: I1213 01:12:33.472484 2603 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:12:34.345165 kubelet[2603]: I1213 01:12:34.343546 2603 topology_manager.go:215] "Topology Admit Handler" podUID="7b79b421-86c8-4f16-acba-f5119418c50b" podNamespace="kube-system" podName="kube-proxy-6dfm5" Dec 13 01:12:34.353375 systemd[1]: Created slice kubepods-besteffort-pod7b79b421_86c8_4f16_acba_f5119418c50b.slice - libcontainer container kubepods-besteffort-pod7b79b421_86c8_4f16_acba_f5119418c50b.slice. Dec 13 01:12:34.358758 kubelet[2603]: I1213 01:12:34.357758 2603 topology_manager.go:215] "Topology Admit Handler" podUID="3fbb975e-3cf2-4d15-9c37-b76802b6dcae" podNamespace="kube-system" podName="cilium-n627b" Dec 13 01:12:34.367060 systemd[1]: Created slice kubepods-burstable-pod3fbb975e_3cf2_4d15_9c37_b76802b6dcae.slice - libcontainer container kubepods-burstable-pod3fbb975e_3cf2_4d15_9c37_b76802b6dcae.slice. Dec 13 01:12:34.380020 kubelet[2603]: I1213 01:12:34.379280 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-etc-cni-netd\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380020 kubelet[2603]: I1213 01:12:34.379321 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbccr\" (UniqueName: \"kubernetes.io/projected/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-kube-api-access-rbccr\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380020 kubelet[2603]: I1213 01:12:34.379339 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b79b421-86c8-4f16-acba-f5119418c50b-xtables-lock\") pod \"kube-proxy-6dfm5\" (UID: \"7b79b421-86c8-4f16-acba-f5119418c50b\") " pod="kube-system/kube-proxy-6dfm5" Dec 13 01:12:34.380020 kubelet[2603]: I1213 01:12:34.379355 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-hostproc\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380020 kubelet[2603]: I1213 01:12:34.379369 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-config-path\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380020 kubelet[2603]: I1213 01:12:34.379383 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b79b421-86c8-4f16-acba-f5119418c50b-kube-proxy\") pod \"kube-proxy-6dfm5\" (UID: \"7b79b421-86c8-4f16-acba-f5119418c50b\") " pod="kube-system/kube-proxy-6dfm5" Dec 13 01:12:34.380344 kubelet[2603]: I1213 01:12:34.379398 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-run\") pod \"cilium-n627b\" (UID: 
\"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380344 kubelet[2603]: I1213 01:12:34.379412 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-hubble-tls\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380344 kubelet[2603]: I1213 01:12:34.379424 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-xtables-lock\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380344 kubelet[2603]: I1213 01:12:34.379441 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-host-proc-sys-kernel\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380344 kubelet[2603]: I1213 01:12:34.379456 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-host-proc-sys-net\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380344 kubelet[2603]: I1213 01:12:34.379476 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-bpf-maps\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380556 kubelet[2603]: I1213 01:12:34.379489 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-cgroup\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380556 kubelet[2603]: I1213 01:12:34.379502 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b79b421-86c8-4f16-acba-f5119418c50b-lib-modules\") pod \"kube-proxy-6dfm5\" (UID: \"7b79b421-86c8-4f16-acba-f5119418c50b\") " pod="kube-system/kube-proxy-6dfm5" Dec 13 01:12:34.380556 kubelet[2603]: I1213 01:12:34.379516 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl6lh\" (UniqueName: \"kubernetes.io/projected/7b79b421-86c8-4f16-acba-f5119418c50b-kube-api-access-vl6lh\") pod \"kube-proxy-6dfm5\" (UID: \"7b79b421-86c8-4f16-acba-f5119418c50b\") " pod="kube-system/kube-proxy-6dfm5" Dec 13 01:12:34.380556 kubelet[2603]: I1213 01:12:34.379529 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cni-path\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380556 kubelet[2603]: I1213 01:12:34.379546 2603 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-lib-modules\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.380556 kubelet[2603]: I1213 01:12:34.379564 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-clustermesh-secrets\") pod \"cilium-n627b\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " pod="kube-system/cilium-n627b" Dec 13 01:12:34.420521 kubelet[2603]: E1213 01:12:34.420477 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:34.553120 kubelet[2603]: I1213 01:12:34.550845 2603 topology_manager.go:215] "Topology Admit Handler" podUID="59351a91-146a-4b1b-8320-ffb2ac0f06f7" podNamespace="kube-system" podName="cilium-operator-599987898-hmsdr" Dec 13 01:12:34.561193 systemd[1]: Created slice kubepods-besteffort-pod59351a91_146a_4b1b_8320_ffb2ac0f06f7.slice - libcontainer container kubepods-besteffort-pod59351a91_146a_4b1b_8320_ffb2ac0f06f7.slice. Dec 13 01:12:34.581471 kubelet[2603]: I1213 01:12:34.581425 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59351a91-146a-4b1b-8320-ffb2ac0f06f7-cilium-config-path\") pod \"cilium-operator-599987898-hmsdr\" (UID: \"59351a91-146a-4b1b-8320-ffb2ac0f06f7\") " pod="kube-system/cilium-operator-599987898-hmsdr" Dec 13 01:12:34.581471 kubelet[2603]: I1213 01:12:34.581470 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp494\" (UniqueName: \"kubernetes.io/projected/59351a91-146a-4b1b-8320-ffb2ac0f06f7-kube-api-access-hp494\") pod \"cilium-operator-599987898-hmsdr\" (UID: \"59351a91-146a-4b1b-8320-ffb2ac0f06f7\") " pod="kube-system/cilium-operator-599987898-hmsdr" Dec 13 01:12:34.662310 kubelet[2603]: E1213 01:12:34.662280 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:34.663066 containerd[1459]: time="2024-12-13T01:12:34.663013928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dfm5,Uid:7b79b421-86c8-4f16-acba-f5119418c50b,Namespace:kube-system,Attempt:0,}" Dec 13 01:12:34.670782 kubelet[2603]: E1213 01:12:34.670747 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:34.671132 containerd[1459]: time="2024-12-13T01:12:34.671084056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n627b,Uid:3fbb975e-3cf2-4d15-9c37-b76802b6dcae,Namespace:kube-system,Attempt:0,}" Dec 13 01:12:34.692244 containerd[1459]: time="2024-12-13T01:12:34.692156161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:12:34.692244 containerd[1459]: time="2024-12-13T01:12:34.692210924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:12:34.692244 containerd[1459]: time="2024-12-13T01:12:34.692225021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:34.699149 containerd[1459]: time="2024-12-13T01:12:34.698579314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:34.706299 containerd[1459]: time="2024-12-13T01:12:34.706195379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:12:34.706517 containerd[1459]: time="2024-12-13T01:12:34.706263036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:12:34.706517 containerd[1459]: time="2024-12-13T01:12:34.706293503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:34.706517 containerd[1459]: time="2024-12-13T01:12:34.706436692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:34.718297 systemd[1]: Started cri-containerd-3f82bc708aa26e40a89e92342ec2966c21c374b508807296485baba251220103.scope - libcontainer container 3f82bc708aa26e40a89e92342ec2966c21c374b508807296485baba251220103. Dec 13 01:12:34.722360 systemd[1]: Started cri-containerd-5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09.scope - libcontainer container 5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09. Dec 13 01:12:34.745979 containerd[1459]: time="2024-12-13T01:12:34.745845973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dfm5,Uid:7b79b421-86c8-4f16-acba-f5119418c50b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f82bc708aa26e40a89e92342ec2966c21c374b508807296485baba251220103\"" Dec 13 01:12:34.747563 kubelet[2603]: E1213 01:12:34.747475 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:34.747664 containerd[1459]: time="2024-12-13T01:12:34.747530128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n627b,Uid:3fbb975e-3cf2-4d15-9c37-b76802b6dcae,Namespace:kube-system,Attempt:0,} returns sandbox id \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\"" Dec 13 01:12:34.750430 containerd[1459]: time="2024-12-13T01:12:34.750396676Z" level=info msg="CreateContainer within sandbox \"3f82bc708aa26e40a89e92342ec2966c21c374b508807296485baba251220103\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:12:34.750781 kubelet[2603]: E1213 01:12:34.750756 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:34.752011 containerd[1459]: time="2024-12-13T01:12:34.751923276Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:12:34.772032 containerd[1459]: time="2024-12-13T01:12:34.771980103Z" level=info msg="CreateContainer within sandbox \"3f82bc708aa26e40a89e92342ec2966c21c374b508807296485baba251220103\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e51beef1794ba53a0f3fe7afc50a204a9c8abab8a167a5b2ba5d77a957551de1\"" Dec 13 01:12:34.772525 containerd[1459]: time="2024-12-13T01:12:34.772491214Z" level=info msg="StartContainer for \"e51beef1794ba53a0f3fe7afc50a204a9c8abab8a167a5b2ba5d77a957551de1\"" Dec 13 01:12:34.800219 systemd[1]: Started cri-containerd-e51beef1794ba53a0f3fe7afc50a204a9c8abab8a167a5b2ba5d77a957551de1.scope - libcontainer container e51beef1794ba53a0f3fe7afc50a204a9c8abab8a167a5b2ba5d77a957551de1. Dec 13 01:12:34.834345 containerd[1459]: time="2024-12-13T01:12:34.834300956Z" level=info msg="StartContainer for \"e51beef1794ba53a0f3fe7afc50a204a9c8abab8a167a5b2ba5d77a957551de1\" returns successfully" Dec 13 01:12:34.864692 kubelet[2603]: E1213 01:12:34.864646 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:34.865435 containerd[1459]: time="2024-12-13T01:12:34.865404797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hmsdr,Uid:59351a91-146a-4b1b-8320-ffb2ac0f06f7,Namespace:kube-system,Attempt:0,}" Dec 13 01:12:34.892766 containerd[1459]: time="2024-12-13T01:12:34.892654474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:12:34.892766 containerd[1459]: time="2024-12-13T01:12:34.892749423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:12:34.892926 containerd[1459]: time="2024-12-13T01:12:34.892769661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:34.893891 containerd[1459]: time="2024-12-13T01:12:34.893723003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:12:34.913329 systemd[1]: Started cri-containerd-d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48.scope - libcontainer container d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48. 
Dec 13 01:12:34.953968 containerd[1459]: time="2024-12-13T01:12:34.953923842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hmsdr,Uid:59351a91-146a-4b1b-8320-ffb2ac0f06f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48\"" Dec 13 01:12:34.954731 kubelet[2603]: E1213 01:12:34.954707 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:35.434483 kubelet[2603]: E1213 01:12:35.433637 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:35.441364 kubelet[2603]: I1213 01:12:35.441307 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6dfm5" podStartSLOduration=1.441284026 podStartE2EDuration="1.441284026s" podCreationTimestamp="2024-12-13 01:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:12:35.440547882 +0000 UTC m=+16.133777843" watchObservedRunningTime="2024-12-13 01:12:35.441284026 +0000 UTC m=+16.134513987" Dec 13 01:12:47.657984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3220389825.mount: Deactivated successfully. Dec 13 01:12:49.123353 systemd[1]: Started sshd@9-10.0.0.86:22-10.0.0.1:53070.service - OpenSSH per-connection server daemon (10.0.0.1:53070). Dec 13 01:12:49.157609 sshd[3006]: Accepted publickey for core from 10.0.0.1 port 53070 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:12:49.159536 sshd[3006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:12:49.164265 systemd-logind[1448]: New session 10 of user core. Dec 13 01:12:49.171218 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:12:49.307649 sshd[3006]: pam_unix(sshd:session): session closed for user core Dec 13 01:12:49.311597 systemd[1]: sshd@9-10.0.0.86:22-10.0.0.1:53070.service: Deactivated successfully. Dec 13 01:12:49.313779 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:12:49.315547 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:12:49.316502 systemd-logind[1448]: Removed session 10. 
Dec 13 01:12:50.033918 containerd[1459]: time="2024-12-13T01:12:50.033859174Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:50.034679 containerd[1459]: time="2024-12-13T01:12:50.034611156Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734723" Dec 13 01:12:50.035938 containerd[1459]: time="2024-12-13T01:12:50.035890597Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:50.051803 containerd[1459]: time="2024-12-13T01:12:50.051737249Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.299770852s" Dec 13 01:12:50.051803 containerd[1459]: time="2024-12-13T01:12:50.051795999Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:12:50.053529 containerd[1459]: time="2024-12-13T01:12:50.053485641Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:12:50.056021 containerd[1459]: time="2024-12-13T01:12:50.055971686Z" level=info msg="CreateContainer within sandbox \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:12:50.066939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3646864780.mount: Deactivated successfully. Dec 13 01:12:50.068402 containerd[1459]: time="2024-12-13T01:12:50.068364666Z" level=info msg="CreateContainer within sandbox \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28\"" Dec 13 01:12:50.069108 containerd[1459]: time="2024-12-13T01:12:50.068705035Z" level=info msg="StartContainer for \"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28\"" Dec 13 01:12:50.093130 systemd[1]: run-containerd-runc-k8s.io-8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28-runc.pkihAv.mount: Deactivated successfully. Dec 13 01:12:50.110235 systemd[1]: Started cri-containerd-8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28.scope - libcontainer container 8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28. Dec 13 01:12:50.139264 containerd[1459]: time="2024-12-13T01:12:50.139217096Z" level=info msg="StartContainer for \"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28\" returns successfully" Dec 13 01:12:50.150691 systemd[1]: cri-containerd-8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28.scope: Deactivated successfully. 
Dec 13 01:12:50.458368 kubelet[2603]: E1213 01:12:50.458328 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:50.626340 containerd[1459]: time="2024-12-13T01:12:50.626285527Z" level=info msg="shim disconnected" id=8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28 namespace=k8s.io Dec 13 01:12:50.626340 containerd[1459]: time="2024-12-13T01:12:50.626337915Z" level=warning msg="cleaning up after shim disconnected" id=8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28 namespace=k8s.io Dec 13 01:12:50.626340 containerd[1459]: time="2024-12-13T01:12:50.626346240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:12:51.064844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28-rootfs.mount: Deactivated successfully. Dec 13 01:12:51.461179 kubelet[2603]: E1213 01:12:51.461110 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:51.463331 containerd[1459]: time="2024-12-13T01:12:51.463285501Z" level=info msg="CreateContainer within sandbox \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:12:51.479922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392479389.mount: Deactivated successfully. Dec 13 01:12:51.480995 containerd[1459]: time="2024-12-13T01:12:51.480952308Z" level=info msg="CreateContainer within sandbox \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300\"" Dec 13 01:12:51.481611 containerd[1459]: time="2024-12-13T01:12:51.481577761Z" level=info msg="StartContainer for \"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300\"" Dec 13 01:12:51.515228 systemd[1]: Started cri-containerd-4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300.scope - libcontainer container 4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300. Dec 13 01:12:51.540876 containerd[1459]: time="2024-12-13T01:12:51.540829471Z" level=info msg="StartContainer for \"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300\" returns successfully" Dec 13 01:12:51.552535 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:12:51.553060 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:12:51.553181 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:12:51.559384 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:12:51.559599 systemd[1]: cri-containerd-4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300.scope: Deactivated successfully. 
Dec 13 01:12:51.581187 containerd[1459]: time="2024-12-13T01:12:51.581122111Z" level=info msg="shim disconnected" id=4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300 namespace=k8s.io Dec 13 01:12:51.581187 containerd[1459]: time="2024-12-13T01:12:51.581183517Z" level=warning msg="cleaning up after shim disconnected" id=4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300 namespace=k8s.io Dec 13 01:12:51.581471 containerd[1459]: time="2024-12-13T01:12:51.581193585Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:12:51.581871 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:12:52.064960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300-rootfs.mount: Deactivated successfully. Dec 13 01:12:52.190861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082731882.mount: Deactivated successfully. Dec 13 01:12:52.459046 containerd[1459]: time="2024-12-13T01:12:52.458995871Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:52.459651 containerd[1459]: time="2024-12-13T01:12:52.459600045Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907197" Dec 13 01:12:52.460758 containerd[1459]: time="2024-12-13T01:12:52.460720338Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:12:52.461984 containerd[1459]: time="2024-12-13T01:12:52.461948412Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.408431142s" Dec 13 01:12:52.461984 containerd[1459]: time="2024-12-13T01:12:52.461979961Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:12:52.463885 containerd[1459]: time="2024-12-13T01:12:52.463851634Z" level=info msg="CreateContainer within sandbox \"d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:12:52.464984 kubelet[2603]: E1213 01:12:52.464952 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:52.467084 containerd[1459]: time="2024-12-13T01:12:52.467056979Z" level=info msg="CreateContainer within sandbox \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:12:52.485429 containerd[1459]: time="2024-12-13T01:12:52.485375377Z" level=info msg="CreateContainer within sandbox \"d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\"" Dec 13 01:12:52.486461 containerd[1459]: time="2024-12-13T01:12:52.486063408Z" level=info msg="StartContainer for \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\"" Dec 13 01:12:52.492573 containerd[1459]: time="2024-12-13T01:12:52.492506048Z" level=info msg="CreateContainer within sandbox \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84\"" Dec 13 01:12:52.493509 containerd[1459]: time="2024-12-13T01:12:52.493471390Z" level=info msg="StartContainer for \"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84\"" Dec 13 01:12:52.515376 systemd[1]: Started cri-containerd-268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2.scope - libcontainer container 268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2. Dec 13 01:12:52.521237 systemd[1]: Started cri-containerd-c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84.scope - libcontainer container c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84. Dec 13 01:12:52.547230 containerd[1459]: time="2024-12-13T01:12:52.547190904Z" level=info msg="StartContainer for \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\" returns successfully" Dec 13 01:12:52.557699 containerd[1459]: time="2024-12-13T01:12:52.557558490Z" level=info msg="StartContainer for \"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84\" returns successfully" Dec 13 01:12:52.557942 systemd[1]: cri-containerd-c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84.scope: Deactivated successfully. 
Dec 13 01:12:52.809581 containerd[1459]: time="2024-12-13T01:12:52.808668008Z" level=info msg="shim disconnected" id=c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84 namespace=k8s.io Dec 13 01:12:52.809581 containerd[1459]: time="2024-12-13T01:12:52.808731517Z" level=warning msg="cleaning up after shim disconnected" id=c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84 namespace=k8s.io Dec 13 01:12:52.809581 containerd[1459]: time="2024-12-13T01:12:52.808740484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:12:53.471659 kubelet[2603]: E1213 01:12:53.471558 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:53.474103 kubelet[2603]: E1213 01:12:53.474045 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:53.475998 containerd[1459]: time="2024-12-13T01:12:53.475951717Z" level=info msg="CreateContainer within sandbox \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:12:53.483065 kubelet[2603]: I1213 01:12:53.482984 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-hmsdr" podStartSLOduration=1.975679338 podStartE2EDuration="19.482963625s" podCreationTimestamp="2024-12-13 01:12:34 +0000 UTC" firstStartedPulling="2024-12-13 01:12:34.955280131 +0000 UTC m=+15.648510082" lastFinishedPulling="2024-12-13 01:12:52.462564418 +0000 UTC m=+33.155794369" observedRunningTime="2024-12-13 01:12:53.480383984 +0000 UTC m=+34.173613935" watchObservedRunningTime="2024-12-13 01:12:53.482963625 +0000 UTC m=+34.176193576" Dec 13 01:12:53.495692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873078305.mount: Deactivated successfully. Dec 13 01:12:53.499822 containerd[1459]: time="2024-12-13T01:12:53.499766867Z" level=info msg="CreateContainer within sandbox \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82\"" Dec 13 01:12:53.500429 containerd[1459]: time="2024-12-13T01:12:53.500392933Z" level=info msg="StartContainer for \"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82\"" Dec 13 01:12:53.552220 systemd[1]: Started cri-containerd-4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82.scope - libcontainer container 4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82. Dec 13 01:12:53.577650 systemd[1]: cri-containerd-4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82.scope: Deactivated successfully. 
Dec 13 01:12:53.580082 containerd[1459]: time="2024-12-13T01:12:53.580034667Z" level=info msg="StartContainer for \"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82\" returns successfully" Dec 13 01:12:53.605470 containerd[1459]: time="2024-12-13T01:12:53.605377615Z" level=info msg="shim disconnected" id=4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82 namespace=k8s.io Dec 13 01:12:53.605470 containerd[1459]: time="2024-12-13T01:12:53.605453507Z" level=warning msg="cleaning up after shim disconnected" id=4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82 namespace=k8s.io Dec 13 01:12:53.605470 containerd[1459]: time="2024-12-13T01:12:53.605463105Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:12:54.064630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82-rootfs.mount: Deactivated successfully. Dec 13 01:12:54.319486 systemd[1]: Started sshd@10-10.0.0.86:22-10.0.0.1:53086.service - OpenSSH per-connection server daemon (10.0.0.1:53086). Dec 13 01:12:54.361079 sshd[3319]: Accepted publickey for core from 10.0.0.1 port 53086 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:12:54.362633 sshd[3319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:12:54.366951 systemd-logind[1448]: New session 11 of user core. Dec 13 01:12:54.378223 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:12:54.479487 kubelet[2603]: E1213 01:12:54.479455 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:54.479999 kubelet[2603]: E1213 01:12:54.479567 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:54.483528 containerd[1459]: time="2024-12-13T01:12:54.483488394Z" level=info msg="CreateContainer within sandbox \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:12:54.512879 sshd[3319]: pam_unix(sshd:session): session closed for user core Dec 13 01:12:54.516558 systemd[1]: sshd@10-10.0.0.86:22-10.0.0.1:53086.service: Deactivated successfully. Dec 13 01:12:54.518745 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:12:54.519377 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:12:54.520436 systemd-logind[1448]: Removed session 11. Dec 13 01:12:54.522667 containerd[1459]: time="2024-12-13T01:12:54.522629052Z" level=info msg="CreateContainer within sandbox \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\"" Dec 13 01:12:54.523266 containerd[1459]: time="2024-12-13T01:12:54.523171270Z" level=info msg="StartContainer for \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\"" Dec 13 01:12:54.579213 systemd[1]: Started cri-containerd-b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e.scope - libcontainer container b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e. 
Dec 13 01:12:54.611780 containerd[1459]: time="2024-12-13T01:12:54.611734118Z" level=info msg="StartContainer for \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\" returns successfully" Dec 13 01:12:54.780117 kubelet[2603]: I1213 01:12:54.778847 2603 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:12:54.799502 kubelet[2603]: I1213 01:12:54.799442 2603 topology_manager.go:215] "Topology Admit Handler" podUID="980d502a-e7ed-4e86-98d8-f295c247a548" podNamespace="kube-system" podName="coredns-7db6d8ff4d-l2xcx" Dec 13 01:12:54.801041 kubelet[2603]: I1213 01:12:54.800992 2603 topology_manager.go:215] "Topology Admit Handler" podUID="9897a452-6a22-4e4e-953c-91e23388f526" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qshw8" Dec 13 01:12:54.809142 systemd[1]: Created slice kubepods-burstable-pod980d502a_e7ed_4e86_98d8_f295c247a548.slice - libcontainer container kubepods-burstable-pod980d502a_e7ed_4e86_98d8_f295c247a548.slice. Dec 13 01:12:54.815926 systemd[1]: Created slice kubepods-burstable-pod9897a452_6a22_4e4e_953c_91e23388f526.slice - libcontainer container kubepods-burstable-pod9897a452_6a22_4e4e_953c_91e23388f526.slice. Dec 13 01:12:54.921787 kubelet[2603]: I1213 01:12:54.921745 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpt8v\" (UniqueName: \"kubernetes.io/projected/980d502a-e7ed-4e86-98d8-f295c247a548-kube-api-access-kpt8v\") pod \"coredns-7db6d8ff4d-l2xcx\" (UID: \"980d502a-e7ed-4e86-98d8-f295c247a548\") " pod="kube-system/coredns-7db6d8ff4d-l2xcx" Dec 13 01:12:54.921787 kubelet[2603]: I1213 01:12:54.921793 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/980d502a-e7ed-4e86-98d8-f295c247a548-config-volume\") pod \"coredns-7db6d8ff4d-l2xcx\" (UID: \"980d502a-e7ed-4e86-98d8-f295c247a548\") " pod="kube-system/coredns-7db6d8ff4d-l2xcx" Dec 13 01:12:54.921987 kubelet[2603]: I1213 01:12:54.921816 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9897a452-6a22-4e4e-953c-91e23388f526-config-volume\") pod \"coredns-7db6d8ff4d-qshw8\" (UID: \"9897a452-6a22-4e4e-953c-91e23388f526\") " pod="kube-system/coredns-7db6d8ff4d-qshw8" Dec 13 01:12:54.921987 kubelet[2603]: I1213 01:12:54.921834 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzx42\" (UniqueName: \"kubernetes.io/projected/9897a452-6a22-4e4e-953c-91e23388f526-kube-api-access-zzx42\") pod \"coredns-7db6d8ff4d-qshw8\" (UID: \"9897a452-6a22-4e4e-953c-91e23388f526\") " pod="kube-system/coredns-7db6d8ff4d-qshw8" Dec 13 01:12:55.112970 kubelet[2603]: E1213 01:12:55.112920 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:55.113600 containerd[1459]: time="2024-12-13T01:12:55.113554108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2xcx,Uid:980d502a-e7ed-4e86-98d8-f295c247a548,Namespace:kube-system,Attempt:0,}" Dec 13 01:12:55.119042 kubelet[2603]: E1213 01:12:55.119010 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
01:12:55.119739 containerd[1459]: time="2024-12-13T01:12:55.119689690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qshw8,Uid:9897a452-6a22-4e4e-953c-91e23388f526,Namespace:kube-system,Attempt:0,}" Dec 13 01:12:55.483727 kubelet[2603]: E1213 01:12:55.483700 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:55.496829 kubelet[2603]: I1213 01:12:55.496588 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n627b" podStartSLOduration=6.194834881 podStartE2EDuration="21.496567947s" podCreationTimestamp="2024-12-13 01:12:34 +0000 UTC" firstStartedPulling="2024-12-13 01:12:34.751579419 +0000 UTC m=+15.444809370" lastFinishedPulling="2024-12-13 01:12:50.053312485 +0000 UTC m=+30.746542436" observedRunningTime="2024-12-13 01:12:55.496343696 +0000 UTC m=+36.189573647" watchObservedRunningTime="2024-12-13 01:12:55.496567947 +0000 UTC m=+36.189797898" Dec 13 01:12:56.485339 kubelet[2603]: E1213 01:12:56.485305 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:56.768234 systemd-networkd[1396]: cilium_host: Link UP Dec 13 01:12:56.768411 systemd-networkd[1396]: cilium_net: Link UP Dec 13 01:12:56.768608 systemd-networkd[1396]: cilium_net: Gained carrier Dec 13 01:12:56.768777 systemd-networkd[1396]: cilium_host: Gained carrier Dec 13 01:12:56.768913 systemd-networkd[1396]: cilium_net: Gained IPv6LL Dec 13 01:12:56.769099 systemd-networkd[1396]: cilium_host: Gained IPv6LL Dec 13 01:12:56.867460 systemd-networkd[1396]: cilium_vxlan: Link UP Dec 13 01:12:56.867471 systemd-networkd[1396]: cilium_vxlan: Gained carrier Dec 13 01:12:57.069118 kernel: NET: Registered PF_ALG protocol family Dec 13 01:12:57.486894 kubelet[2603]: E1213 01:12:57.486859 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:57.698270 systemd-networkd[1396]: lxc_health: Link UP Dec 13 01:12:57.707408 systemd-networkd[1396]: lxc_health: Gained carrier Dec 13 01:12:57.951246 systemd-networkd[1396]: cilium_vxlan: Gained IPv6LL Dec 13 01:12:58.225622 systemd-networkd[1396]: lxc00058e52ea18: Link UP Dec 13 01:12:58.236619 systemd-networkd[1396]: lxc95f839dd5d6f: Link UP Dec 13 01:12:58.244146 kernel: eth0: renamed from tmp7ba01 Dec 13 01:12:58.249124 kernel: eth0: renamed from tmpdc98a Dec 13 01:12:58.254040 systemd-networkd[1396]: lxc95f839dd5d6f: Gained carrier Dec 13 01:12:58.255748 systemd-networkd[1396]: lxc00058e52ea18: Gained carrier Dec 13 01:12:58.674731 kubelet[2603]: E1213 01:12:58.674701 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:59.487275 systemd-networkd[1396]: lxc_health: Gained IPv6LL Dec 13 01:12:59.489789 kubelet[2603]: E1213 01:12:59.489755 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:12:59.529422 systemd[1]: Started sshd@11-10.0.0.86:22-10.0.0.1:44714.service - OpenSSH per-connection server daemon (10.0.0.1:44714). 
Dec 13 01:12:59.571423 sshd[3856]: Accepted publickey for core from 10.0.0.1 port 44714 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:12:59.573273 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:12:59.577330 systemd-logind[1448]: New session 12 of user core. Dec 13 01:12:59.587316 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:12:59.701778 sshd[3856]: pam_unix(sshd:session): session closed for user core Dec 13 01:12:59.706076 systemd[1]: sshd@11-10.0.0.86:22-10.0.0.1:44714.service: Deactivated successfully. Dec 13 01:12:59.708190 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:12:59.708764 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:12:59.709646 systemd-logind[1448]: Removed session 12. Dec 13 01:12:59.743277 systemd-networkd[1396]: lxc95f839dd5d6f: Gained IPv6LL Dec 13 01:12:59.871396 systemd-networkd[1396]: lxc00058e52ea18: Gained IPv6LL Dec 13 01:13:01.861412 containerd[1459]: time="2024-12-13T01:13:01.861260674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:13:01.861825 containerd[1459]: time="2024-12-13T01:13:01.861404133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:13:01.861825 containerd[1459]: time="2024-12-13T01:13:01.861462663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:13:01.861825 containerd[1459]: time="2024-12-13T01:13:01.861588619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:13:01.862924 containerd[1459]: time="2024-12-13T01:13:01.862821632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:13:01.862924 containerd[1459]: time="2024-12-13T01:13:01.862893447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:13:01.862924 containerd[1459]: time="2024-12-13T01:13:01.862908345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:13:01.863121 containerd[1459]: time="2024-12-13T01:13:01.862998674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:13:01.889261 systemd[1]: Started cri-containerd-7ba010ff6f5a6d49a1f0ac3f47fb87fc55f21eff680fdf988817302711dab40f.scope - libcontainer container 7ba010ff6f5a6d49a1f0ac3f47fb87fc55f21eff680fdf988817302711dab40f. Dec 13 01:13:01.890973 systemd[1]: Started cri-containerd-dc98ad2ff7e9b6aaadcb9bdea8224ed5c9696a0050bf8d011253f8f6482a940a.scope - libcontainer container dc98ad2ff7e9b6aaadcb9bdea8224ed5c9696a0050bf8d011253f8f6482a940a. 
Dec 13 01:13:01.903862 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:13:01.905920 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:13:01.932620 containerd[1459]: time="2024-12-13T01:13:01.932581077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qshw8,Uid:9897a452-6a22-4e4e-953c-91e23388f526,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc98ad2ff7e9b6aaadcb9bdea8224ed5c9696a0050bf8d011253f8f6482a940a\"" Dec 13 01:13:01.937150 kubelet[2603]: E1213 01:13:01.935362 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:13:01.940934 containerd[1459]: time="2024-12-13T01:13:01.940891178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2xcx,Uid:980d502a-e7ed-4e86-98d8-f295c247a548,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ba010ff6f5a6d49a1f0ac3f47fb87fc55f21eff680fdf988817302711dab40f\"" Dec 13 01:13:01.942230 kubelet[2603]: E1213 01:13:01.942159 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:13:01.944181 containerd[1459]: time="2024-12-13T01:13:01.944153989Z" level=info msg="CreateContainer within sandbox \"dc98ad2ff7e9b6aaadcb9bdea8224ed5c9696a0050bf8d011253f8f6482a940a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:13:01.945839 containerd[1459]: time="2024-12-13T01:13:01.945780521Z" level=info msg="CreateContainer within sandbox \"7ba010ff6f5a6d49a1f0ac3f47fb87fc55f21eff680fdf988817302711dab40f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:13:01.974775 containerd[1459]: time="2024-12-13T01:13:01.974716988Z" level=info msg="CreateContainer within sandbox \"dc98ad2ff7e9b6aaadcb9bdea8224ed5c9696a0050bf8d011253f8f6482a940a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fb82f8ac16447dc6020fc39f4f9cf02a91916a50b71660773ed1e444587f7e84\"" Dec 13 01:13:01.975460 containerd[1459]: time="2024-12-13T01:13:01.975427190Z" level=info msg="StartContainer for \"fb82f8ac16447dc6020fc39f4f9cf02a91916a50b71660773ed1e444587f7e84\"" Dec 13 01:13:01.979085 containerd[1459]: time="2024-12-13T01:13:01.979037233Z" level=info msg="CreateContainer within sandbox \"7ba010ff6f5a6d49a1f0ac3f47fb87fc55f21eff680fdf988817302711dab40f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"431409865269a8d8303c4eab86f9db16af07149c8c30c7aca9ddb81dde253435\"" Dec 13 01:13:01.979709 containerd[1459]: time="2024-12-13T01:13:01.979653539Z" level=info msg="StartContainer for \"431409865269a8d8303c4eab86f9db16af07149c8c30c7aca9ddb81dde253435\"" Dec 13 01:13:02.005251 systemd[1]: Started cri-containerd-fb82f8ac16447dc6020fc39f4f9cf02a91916a50b71660773ed1e444587f7e84.scope - libcontainer container fb82f8ac16447dc6020fc39f4f9cf02a91916a50b71660773ed1e444587f7e84. Dec 13 01:13:02.008151 systemd[1]: Started cri-containerd-431409865269a8d8303c4eab86f9db16af07149c8c30c7aca9ddb81dde253435.scope - libcontainer container 431409865269a8d8303c4eab86f9db16af07149c8c30c7aca9ddb81dde253435. 
Dec 13 01:13:02.040433 containerd[1459]: time="2024-12-13T01:13:02.040361852Z" level=info msg="StartContainer for \"431409865269a8d8303c4eab86f9db16af07149c8c30c7aca9ddb81dde253435\" returns successfully" Dec 13 01:13:02.040594 containerd[1459]: time="2024-12-13T01:13:02.040373063Z" level=info msg="StartContainer for \"fb82f8ac16447dc6020fc39f4f9cf02a91916a50b71660773ed1e444587f7e84\" returns successfully" Dec 13 01:13:02.495839 kubelet[2603]: E1213 01:13:02.495798 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:13:02.498153 kubelet[2603]: E1213 01:13:02.497567 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:13:02.526273 kubelet[2603]: I1213 01:13:02.526210 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-l2xcx" podStartSLOduration=28.526193882 podStartE2EDuration="28.526193882s" podCreationTimestamp="2024-12-13 01:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:13:02.525409071 +0000 UTC m=+43.218639022" watchObservedRunningTime="2024-12-13 01:13:02.526193882 +0000 UTC m=+43.219423833" Dec 13 01:13:02.866731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2487347441.mount: Deactivated successfully. Dec 13 01:13:03.499513 kubelet[2603]: E1213 01:13:03.499476 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:13:03.500059 kubelet[2603]: E1213 01:13:03.499616 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:13:04.500877 kubelet[2603]: E1213 01:13:04.500839 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:13:04.501323 kubelet[2603]: E1213 01:13:04.500906 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:13:04.714848 systemd[1]: Started sshd@12-10.0.0.86:22-10.0.0.1:44720.service - OpenSSH per-connection server daemon (10.0.0.1:44720). Dec 13 01:13:04.754026 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 44720 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:04.755561 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:04.759598 systemd-logind[1448]: New session 13 of user core. Dec 13 01:13:04.774233 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:13:04.882355 sshd[4050]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:04.885812 systemd[1]: sshd@12-10.0.0.86:22-10.0.0.1:44720.service: Deactivated successfully. Dec 13 01:13:04.887874 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:13:04.888451 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:13:04.889451 systemd-logind[1448]: Removed session 13. 
Dec 13 01:13:09.900911 systemd[1]: Started sshd@13-10.0.0.86:22-10.0.0.1:37058.service - OpenSSH per-connection server daemon (10.0.0.1:37058). Dec 13 01:13:09.937138 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 37058 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:09.938652 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:09.942396 systemd-logind[1448]: New session 14 of user core. Dec 13 01:13:09.957216 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:13:10.063128 sshd[4069]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:10.072952 systemd[1]: sshd@13-10.0.0.86:22-10.0.0.1:37058.service: Deactivated successfully. Dec 13 01:13:10.074719 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:13:10.076493 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:13:10.087328 systemd[1]: Started sshd@14-10.0.0.86:22-10.0.0.1:37074.service - OpenSSH per-connection server daemon (10.0.0.1:37074). Dec 13 01:13:10.088454 systemd-logind[1448]: Removed session 14. Dec 13 01:13:10.120063 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 37074 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:10.121518 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:10.125712 systemd-logind[1448]: New session 15 of user core. Dec 13 01:13:10.136391 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:13:10.274075 sshd[4084]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:10.283000 systemd[1]: sshd@14-10.0.0.86:22-10.0.0.1:37074.service: Deactivated successfully. Dec 13 01:13:10.286204 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:13:10.287542 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:13:10.297606 systemd[1]: Started sshd@15-10.0.0.86:22-10.0.0.1:37078.service - OpenSSH per-connection server daemon (10.0.0.1:37078). Dec 13 01:13:10.298653 systemd-logind[1448]: Removed session 15. Dec 13 01:13:10.335373 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 37078 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:10.336905 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:10.340660 systemd-logind[1448]: New session 16 of user core. Dec 13 01:13:10.351207 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:13:10.458367 sshd[4097]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:10.462281 systemd[1]: sshd@15-10.0.0.86:22-10.0.0.1:37078.service: Deactivated successfully. Dec 13 01:13:10.464351 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:13:10.465044 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:13:10.465985 systemd-logind[1448]: Removed session 16. Dec 13 01:13:15.470240 systemd[1]: Started sshd@16-10.0.0.86:22-10.0.0.1:37084.service - OpenSSH per-connection server daemon (10.0.0.1:37084). Dec 13 01:13:15.506462 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 37084 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:15.507981 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:15.511917 systemd-logind[1448]: New session 17 of user core. 
Dec 13 01:13:15.519251 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:13:15.621071 sshd[4111]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:15.625084 systemd[1]: sshd@16-10.0.0.86:22-10.0.0.1:37084.service: Deactivated successfully. Dec 13 01:13:15.627183 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:13:15.627752 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:13:15.628767 systemd-logind[1448]: Removed session 17. Dec 13 01:13:20.636026 systemd[1]: Started sshd@17-10.0.0.86:22-10.0.0.1:41532.service - OpenSSH per-connection server daemon (10.0.0.1:41532). Dec 13 01:13:20.672432 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 41532 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:20.673770 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:20.677327 systemd-logind[1448]: New session 18 of user core. Dec 13 01:13:20.687211 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:13:20.791511 sshd[4127]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:20.802156 systemd[1]: sshd@17-10.0.0.86:22-10.0.0.1:41532.service: Deactivated successfully. Dec 13 01:13:20.804066 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:13:20.805686 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:13:20.810524 systemd[1]: Started sshd@18-10.0.0.86:22-10.0.0.1:41540.service - OpenSSH per-connection server daemon (10.0.0.1:41540). Dec 13 01:13:20.811476 systemd-logind[1448]: Removed session 18. Dec 13 01:13:20.844561 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 41540 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:20.846355 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:20.851025 systemd-logind[1448]: New session 19 of user core. Dec 13 01:13:20.860222 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:13:21.088907 sshd[4142]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:21.104154 systemd[1]: sshd@18-10.0.0.86:22-10.0.0.1:41540.service: Deactivated successfully. Dec 13 01:13:21.106158 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:13:21.107978 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:13:21.117394 systemd[1]: Started sshd@19-10.0.0.86:22-10.0.0.1:41554.service - OpenSSH per-connection server daemon (10.0.0.1:41554). Dec 13 01:13:21.118280 systemd-logind[1448]: Removed session 19. Dec 13 01:13:21.149458 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 41554 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:21.151122 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:21.155210 systemd-logind[1448]: New session 20 of user core. Dec 13 01:13:21.166237 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:13:22.439129 sshd[4154]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:22.449554 systemd[1]: sshd@19-10.0.0.86:22-10.0.0.1:41554.service: Deactivated successfully. Dec 13 01:13:22.452429 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:13:22.454002 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. 
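The SSH sessions above all follow the same lifecycle: sshd accepts a publickey, pam_unix opens the session, systemd-logind allocates session-N.scope, and a "New session" / "Removed session" pair brackets its lifetime. A small Go sketch that pairs those lines to compute session durations, assuming one journal entry per input line as journalctl emits them; the regexes are tailored to the exact format shown here:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches the journal timestamp prefix plus the systemd-logind session
// events as they appear in this log.
var (
	tsLayout = "Jan 2 15:04:05.000000"
	newRe    = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*New session (\d+) of user`)
	delRe    = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
)

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(tsLayout, m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := delRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(tsLayout, m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("session %s lasted %v\n", m[2], t.Sub(start))
				}
			}
		}
	}
}
```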
Dec 13 01:13:22.459371 systemd[1]: Started sshd@20-10.0.0.86:22-10.0.0.1:41570.service - OpenSSH per-connection server daemon (10.0.0.1:41570). Dec 13 01:13:22.460448 systemd-logind[1448]: Removed session 20. Dec 13 01:13:22.499469 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 41570 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:22.501121 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:22.504986 systemd-logind[1448]: New session 21 of user core. Dec 13 01:13:22.516209 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:13:22.726271 sshd[4176]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:22.736446 systemd[1]: sshd@20-10.0.0.86:22-10.0.0.1:41570.service: Deactivated successfully. Dec 13 01:13:22.738510 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:13:22.739857 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:13:22.749341 systemd[1]: Started sshd@21-10.0.0.86:22-10.0.0.1:41584.service - OpenSSH per-connection server daemon (10.0.0.1:41584). Dec 13 01:13:22.750142 systemd-logind[1448]: Removed session 21. Dec 13 01:13:22.782890 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 41584 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:22.784352 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:22.788537 systemd-logind[1448]: New session 22 of user core. Dec 13 01:13:22.797197 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:13:22.910227 sshd[4188]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:22.913802 systemd[1]: sshd@21-10.0.0.86:22-10.0.0.1:41584.service: Deactivated successfully. Dec 13 01:13:22.915870 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:13:22.916606 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:13:22.917508 systemd-logind[1448]: Removed session 22. Dec 13 01:13:27.921929 systemd[1]: Started sshd@22-10.0.0.86:22-10.0.0.1:41596.service - OpenSSH per-connection server daemon (10.0.0.1:41596). Dec 13 01:13:27.958655 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 41596 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:27.960128 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:27.963955 systemd-logind[1448]: New session 23 of user core. Dec 13 01:13:27.974224 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:13:28.075935 sshd[4203]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:28.080192 systemd[1]: sshd@22-10.0.0.86:22-10.0.0.1:41596.service: Deactivated successfully. Dec 13 01:13:28.082177 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:13:28.082829 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:13:28.083667 systemd-logind[1448]: Removed session 23. Dec 13 01:13:33.088158 systemd[1]: Started sshd@23-10.0.0.86:22-10.0.0.1:60192.service - OpenSSH per-connection server daemon (10.0.0.1:60192). Dec 13 01:13:33.125082 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 60192 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:33.126788 sshd[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:33.130807 systemd-logind[1448]: New session 24 of user core. 
Dec 13 01:13:33.137240 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:13:33.241971 sshd[4220]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:33.246187 systemd[1]: sshd@23-10.0.0.86:22-10.0.0.1:60192.service: Deactivated successfully. Dec 13 01:13:33.248286 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:13:33.249000 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:13:33.249945 systemd-logind[1448]: Removed session 24. Dec 13 01:13:38.254206 systemd[1]: Started sshd@24-10.0.0.86:22-10.0.0.1:50186.service - OpenSSH per-connection server daemon (10.0.0.1:50186). Dec 13 01:13:38.292857 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 50186 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:38.294720 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:38.298560 systemd-logind[1448]: New session 25 of user core. Dec 13 01:13:38.309216 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:13:38.411167 sshd[4237]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:38.415175 systemd[1]: sshd@24-10.0.0.86:22-10.0.0.1:50186.service: Deactivated successfully. Dec 13 01:13:38.417217 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:13:38.417768 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:13:38.418782 systemd-logind[1448]: Removed session 25. Dec 13 01:13:39.395380 kubelet[2603]: E1213 01:13:39.395334 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:13:43.395052 kubelet[2603]: E1213 01:13:43.394992 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:13:43.422064 systemd[1]: Started sshd@25-10.0.0.86:22-10.0.0.1:50202.service - OpenSSH per-connection server daemon (10.0.0.1:50202). Dec 13 01:13:43.459292 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 50202 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:43.460831 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:43.464791 systemd-logind[1448]: New session 26 of user core. Dec 13 01:13:43.478244 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:13:43.582014 sshd[4251]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:43.597957 systemd[1]: sshd@25-10.0.0.86:22-10.0.0.1:50202.service: Deactivated successfully. Dec 13 01:13:43.600024 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:13:43.601815 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:13:43.607331 systemd[1]: Started sshd@26-10.0.0.86:22-10.0.0.1:50208.service - OpenSSH per-connection server daemon (10.0.0.1:50208). Dec 13 01:13:43.608488 systemd-logind[1448]: Removed session 26. Dec 13 01:13:43.640661 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 50208 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:43.642153 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:43.646192 systemd-logind[1448]: New session 27 of user core. 
Dec 13 01:13:43.656223 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:13:44.982630 kubelet[2603]: I1213 01:13:44.981526 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qshw8" podStartSLOduration=70.981507317 podStartE2EDuration="1m10.981507317s" podCreationTimestamp="2024-12-13 01:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:13:02.664920299 +0000 UTC m=+43.358150250" watchObservedRunningTime="2024-12-13 01:13:44.981507317 +0000 UTC m=+85.674737268" Dec 13 01:13:44.991737 containerd[1459]: time="2024-12-13T01:13:44.991667908Z" level=info msg="StopContainer for \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\" with timeout 30 (s)" Dec 13 01:13:44.992289 containerd[1459]: time="2024-12-13T01:13:44.992074468Z" level=info msg="Stop container \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\" with signal terminated" Dec 13 01:13:45.007283 systemd[1]: cri-containerd-268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2.scope: Deactivated successfully. Dec 13 01:13:45.016957 containerd[1459]: time="2024-12-13T01:13:45.016892027Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:13:45.020069 containerd[1459]: time="2024-12-13T01:13:45.020010071Z" level=info msg="StopContainer for \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\" with timeout 2 (s)" Dec 13 01:13:45.020310 containerd[1459]: time="2024-12-13T01:13:45.020264632Z" level=info msg="Stop container \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\" with signal terminated" Dec 13 01:13:45.027517 systemd-networkd[1396]: lxc_health: Link DOWN Dec 13 01:13:45.027526 systemd-networkd[1396]: lxc_health: Lost carrier Dec 13 01:13:45.035177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2-rootfs.mount: Deactivated successfully. Dec 13 01:13:45.044019 containerd[1459]: time="2024-12-13T01:13:45.043956630Z" level=info msg="shim disconnected" id=268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2 namespace=k8s.io Dec 13 01:13:45.044301 containerd[1459]: time="2024-12-13T01:13:45.044269952Z" level=warning msg="cleaning up after shim disconnected" id=268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2 namespace=k8s.io Dec 13 01:13:45.044301 containerd[1459]: time="2024-12-13T01:13:45.044290671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:13:45.054583 systemd[1]: cri-containerd-b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e.scope: Deactivated successfully. Dec 13 01:13:45.054964 systemd[1]: cri-containerd-b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e.scope: Consumed 6.838s CPU time. 
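The StopContainer lines above show the standard stop escalation: the runtime signals the container (the "with signal terminated" messages) and enforces a stop timeout, 30 s for the operator container and 2 s for the agent, before killing whatever remains; the cri-...scope deactivation then reports the CPU time the container consumed. A generic Go sketch of that TERM-then-KILL pattern, not containerd's implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopGracefully mirrors the escalation behind "Stop container ...
// with signal terminated" / "with timeout 30 (s)": SIGTERM first,
// SIGKILL only if the process outlives the timeout.
func stopGracefully(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited on its own after SIGTERM
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // escalate, as the runtime would
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopGracefully(cmd, 2*time.Second))
}
```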
Dec 13 01:13:45.064703 containerd[1459]: time="2024-12-13T01:13:45.064656472Z" level=info msg="StopContainer for \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\" returns successfully" Dec 13 01:13:45.069049 containerd[1459]: time="2024-12-13T01:13:45.069002206Z" level=info msg="StopPodSandbox for \"d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48\"" Dec 13 01:13:45.069194 containerd[1459]: time="2024-12-13T01:13:45.069057691Z" level=info msg="Container to stop \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:13:45.072686 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48-shm.mount: Deactivated successfully. Dec 13 01:13:45.078371 systemd[1]: cri-containerd-d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48.scope: Deactivated successfully. Dec 13 01:13:45.081898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e-rootfs.mount: Deactivated successfully. Dec 13 01:13:45.091710 containerd[1459]: time="2024-12-13T01:13:45.091640022Z" level=info msg="shim disconnected" id=b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e namespace=k8s.io Dec 13 01:13:45.091710 containerd[1459]: time="2024-12-13T01:13:45.091689876Z" level=warning msg="cleaning up after shim disconnected" id=b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e namespace=k8s.io Dec 13 01:13:45.091710 containerd[1459]: time="2024-12-13T01:13:45.091698102Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:13:45.103730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48-rootfs.mount: Deactivated successfully. 
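The reload error in the previous block, triggered by the fs change event REMOVE "/etc/cni/net.d/05-cilium.conf", comes from a filesystem watch on the CNI config directory: deleting the only conf file leaves the runtime with no network config until the incoming Cilium pod writes one back. A sketch of such a watch using the fsnotify package; this is an assumption about the general mechanism, the reload itself is elided, and containerd's real watcher differs in detail:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI config directory; on any change, a real
	// implementation would re-scan *.conf/*.conflist and fail exactly
	// as logged when the directory ends up empty.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&fsnotify.Remove != 0 {
				log.Printf("fs change event(REMOVE %q): reload cni configuration", ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```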
Dec 13 01:13:45.105444 containerd[1459]: time="2024-12-13T01:13:45.105385356Z" level=info msg="shim disconnected" id=d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48 namespace=k8s.io Dec 13 01:13:45.105444 containerd[1459]: time="2024-12-13T01:13:45.105440520Z" level=warning msg="cleaning up after shim disconnected" id=d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48 namespace=k8s.io Dec 13 01:13:45.105590 containerd[1459]: time="2024-12-13T01:13:45.105449717Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:13:45.109339 containerd[1459]: time="2024-12-13T01:13:45.109272805Z" level=info msg="StopContainer for \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\" returns successfully" Dec 13 01:13:45.109822 containerd[1459]: time="2024-12-13T01:13:45.109793509Z" level=info msg="StopPodSandbox for \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\"" Dec 13 01:13:45.109894 containerd[1459]: time="2024-12-13T01:13:45.109835469Z" level=info msg="Container to stop \"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:13:45.109894 containerd[1459]: time="2024-12-13T01:13:45.109848273Z" level=info msg="Container to stop \"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:13:45.109894 containerd[1459]: time="2024-12-13T01:13:45.109857851Z" level=info msg="Container to stop \"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:13:45.109894 containerd[1459]: time="2024-12-13T01:13:45.109868050Z" level=info msg="Container to stop \"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:13:45.109894 containerd[1459]: time="2024-12-13T01:13:45.109876496Z" level=info msg="Container to stop \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:13:45.117359 systemd[1]: cri-containerd-5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09.scope: Deactivated successfully. 
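The repeated "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" messages above are StopPodSandbox checking each container's state before signalling it: the init containers and the just-stopped agent have already exited, so they are skipped. A sketch of that guard, using the CRI state strings quoted in the log:

```go
package main

import "fmt"

// State mirrors the CRI container states quoted in the log above.
type State string

const (
	Created State = "CONTAINER_CREATED"
	Running State = "CONTAINER_RUNNING"
	Exited  State = "CONTAINER_EXITED"
	Unknown State = "CONTAINER_UNKNOWN"
)

// needsStop reproduces the guard behind "must be in running or unknown
// state": already-exited containers are not signalled again during
// StopPodSandbox. Illustrative only.
func needsStop(s State) bool {
	return s == Running || s == Unknown
}

func main() {
	for _, s := range []State{Running, Exited} {
		fmt.Printf("%s -> stop needed: %v\n", s, needsStop(s))
	}
}
```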
Dec 13 01:13:45.133380 containerd[1459]: time="2024-12-13T01:13:45.133338178Z" level=info msg="TearDown network for sandbox \"d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48\" successfully" Dec 13 01:13:45.133510 containerd[1459]: time="2024-12-13T01:13:45.133491850Z" level=info msg="StopPodSandbox for \"d05a9aaa041f1e0ae320a3330ea3c907feda2247bf9a1796f5003b0103c1fc48\" returns successfully" Dec 13 01:13:45.146204 containerd[1459]: time="2024-12-13T01:13:45.145891781Z" level=info msg="shim disconnected" id=5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09 namespace=k8s.io Dec 13 01:13:45.146204 containerd[1459]: time="2024-12-13T01:13:45.145946815Z" level=warning msg="cleaning up after shim disconnected" id=5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09 namespace=k8s.io Dec 13 01:13:45.146204 containerd[1459]: time="2024-12-13T01:13:45.145955882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:13:45.161647 containerd[1459]: time="2024-12-13T01:13:45.161596188Z" level=info msg="TearDown network for sandbox \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" successfully" Dec 13 01:13:45.161647 containerd[1459]: time="2024-12-13T01:13:45.161643919Z" level=info msg="StopPodSandbox for \"5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09\" returns successfully" Dec 13 01:13:45.203558 kubelet[2603]: I1213 01:13:45.203510 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp494\" (UniqueName: \"kubernetes.io/projected/59351a91-146a-4b1b-8320-ffb2ac0f06f7-kube-api-access-hp494\") pod \"59351a91-146a-4b1b-8320-ffb2ac0f06f7\" (UID: \"59351a91-146a-4b1b-8320-ffb2ac0f06f7\") " Dec 13 01:13:45.203558 kubelet[2603]: I1213 01:13:45.203552 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59351a91-146a-4b1b-8320-ffb2ac0f06f7-cilium-config-path\") pod \"59351a91-146a-4b1b-8320-ffb2ac0f06f7\" (UID: \"59351a91-146a-4b1b-8320-ffb2ac0f06f7\") " Dec 13 01:13:45.206885 kubelet[2603]: I1213 01:13:45.206851 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59351a91-146a-4b1b-8320-ffb2ac0f06f7-kube-api-access-hp494" (OuterVolumeSpecName: "kube-api-access-hp494") pod "59351a91-146a-4b1b-8320-ffb2ac0f06f7" (UID: "59351a91-146a-4b1b-8320-ffb2ac0f06f7"). InnerVolumeSpecName "kube-api-access-hp494". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:13:45.207034 kubelet[2603]: I1213 01:13:45.207003 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59351a91-146a-4b1b-8320-ffb2ac0f06f7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "59351a91-146a-4b1b-8320-ffb2ac0f06f7" (UID: "59351a91-146a-4b1b-8320-ffb2ac0f06f7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:13:45.304388 kubelet[2603]: I1213 01:13:45.304258 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbccr\" (UniqueName: \"kubernetes.io/projected/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-kube-api-access-rbccr\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304388 kubelet[2603]: I1213 01:13:45.304303 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-clustermesh-secrets\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304388 kubelet[2603]: I1213 01:13:45.304322 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-hostproc\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304388 kubelet[2603]: I1213 01:13:45.304345 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-config-path\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304388 kubelet[2603]: I1213 01:13:45.304359 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-etc-cni-netd\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304388 kubelet[2603]: I1213 01:13:45.304372 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-host-proc-sys-kernel\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304651 kubelet[2603]: I1213 01:13:45.304389 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-hubble-tls\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304651 kubelet[2603]: I1213 01:13:45.304402 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-lib-modules\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304651 kubelet[2603]: I1213 01:13:45.304416 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-host-proc-sys-net\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304651 kubelet[2603]: I1213 01:13:45.304429 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-cgroup\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: 
\"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304651 kubelet[2603]: I1213 01:13:45.304457 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-run\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304651 kubelet[2603]: I1213 01:13:45.304472 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-xtables-lock\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304831 kubelet[2603]: I1213 01:13:45.304484 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-bpf-maps\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304831 kubelet[2603]: I1213 01:13:45.304496 2603 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cni-path\") pod \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\" (UID: \"3fbb975e-3cf2-4d15-9c37-b76802b6dcae\") " Dec 13 01:13:45.304831 kubelet[2603]: I1213 01:13:45.304527 2603 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hp494\" (UniqueName: \"kubernetes.io/projected/59351a91-146a-4b1b-8320-ffb2ac0f06f7-kube-api-access-hp494\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.304831 kubelet[2603]: I1213 01:13:45.304536 2603 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59351a91-146a-4b1b-8320-ffb2ac0f06f7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.304831 kubelet[2603]: I1213 01:13:45.304605 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cni-path" (OuterVolumeSpecName: "cni-path") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:13:45.304831 kubelet[2603]: I1213 01:13:45.304643 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:13:45.304984 kubelet[2603]: I1213 01:13:45.304661 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:13:45.304984 kubelet[2603]: I1213 01:13:45.304677 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:13:45.304984 kubelet[2603]: I1213 01:13:45.304693 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:13:45.304984 kubelet[2603]: I1213 01:13:45.304708 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:13:45.304984 kubelet[2603]: I1213 01:13:45.304726 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:13:45.305144 kubelet[2603]: I1213 01:13:45.304745 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:13:45.308733 kubelet[2603]: I1213 01:13:45.308666 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-kube-api-access-rbccr" (OuterVolumeSpecName: "kube-api-access-rbccr") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "kube-api-access-rbccr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:13:45.308733 kubelet[2603]: I1213 01:13:45.308719 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-hostproc" (OuterVolumeSpecName: "hostproc") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:13:45.308733 kubelet[2603]: I1213 01:13:45.308717 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:13:45.308955 kubelet[2603]: I1213 01:13:45.308744 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:13:45.308955 kubelet[2603]: I1213 01:13:45.308834 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:13:45.309542 kubelet[2603]: I1213 01:13:45.309489 2603 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3fbb975e-3cf2-4d15-9c37-b76802b6dcae" (UID: "3fbb975e-3cf2-4d15-9c37-b76802b6dcae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:13:45.402975 systemd[1]: Removed slice kubepods-burstable-pod3fbb975e_3cf2_4d15_9c37_b76802b6dcae.slice - libcontainer container kubepods-burstable-pod3fbb975e_3cf2_4d15_9c37_b76802b6dcae.slice. Dec 13 01:13:45.403066 systemd[1]: kubepods-burstable-pod3fbb975e_3cf2_4d15_9c37_b76802b6dcae.slice: Consumed 6.941s CPU time. Dec 13 01:13:45.404122 systemd[1]: Removed slice kubepods-besteffort-pod59351a91_146a_4b1b_8320_ffb2ac0f06f7.slice - libcontainer container kubepods-besteffort-pod59351a91_146a_4b1b_8320_ffb2ac0f06f7.slice. 
Dec 13 01:13:45.404677 kubelet[2603]: I1213 01:13:45.404643 2603 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rbccr\" (UniqueName: \"kubernetes.io/projected/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-kube-api-access-rbccr\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.404677 kubelet[2603]: I1213 01:13:45.404671 2603 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.404812 kubelet[2603]: I1213 01:13:45.404686 2603 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.404812 kubelet[2603]: I1213 01:13:45.404696 2603 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.404812 kubelet[2603]: I1213 01:13:45.404706 2603 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.404812 kubelet[2603]: I1213 01:13:45.404714 2603 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.404812 kubelet[2603]: I1213 01:13:45.404722 2603 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.404812 kubelet[2603]: I1213 01:13:45.404731 2603 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.404812 kubelet[2603]: I1213 01:13:45.404740 2603 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.404812 kubelet[2603]: I1213 01:13:45.404749 2603 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.405031 kubelet[2603]: I1213 01:13:45.404770 2603 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.405031 kubelet[2603]: I1213 01:13:45.404778 2603 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.405031 kubelet[2603]: I1213 01:13:45.404785 2603 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.405031 kubelet[2603]: 
I1213 01:13:45.404794 2603 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fbb975e-3cf2-4d15-9c37-b76802b6dcae-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 01:13:45.573899 kubelet[2603]: I1213 01:13:45.573709 2603 scope.go:117] "RemoveContainer" containerID="268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2" Dec 13 01:13:45.575956 containerd[1459]: time="2024-12-13T01:13:45.575764672Z" level=info msg="RemoveContainer for \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\"" Dec 13 01:13:45.587328 containerd[1459]: time="2024-12-13T01:13:45.587279490Z" level=info msg="RemoveContainer for \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\" returns successfully" Dec 13 01:13:45.587518 kubelet[2603]: I1213 01:13:45.587497 2603 scope.go:117] "RemoveContainer" containerID="268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2" Dec 13 01:13:45.591412 containerd[1459]: time="2024-12-13T01:13:45.591362709Z" level=error msg="ContainerStatus for \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\": not found" Dec 13 01:13:45.591595 kubelet[2603]: E1213 01:13:45.591553 2603 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\": not found" containerID="268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2" Dec 13 01:13:45.591746 kubelet[2603]: I1213 01:13:45.591589 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2"} err="failed to get container status \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\": rpc error: code = NotFound desc = an error occurred when try to find container \"268c8d63b91cea7a6a83d347d849a6addc1f18a6e016429873c6394ec2707ae2\": not found" Dec 13 01:13:45.591746 kubelet[2603]: I1213 01:13:45.591666 2603 scope.go:117] "RemoveContainer" containerID="b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e" Dec 13 01:13:45.592832 containerd[1459]: time="2024-12-13T01:13:45.592792973Z" level=info msg="RemoveContainer for \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\"" Dec 13 01:13:45.599469 containerd[1459]: time="2024-12-13T01:13:45.598801281Z" level=info msg="RemoveContainer for \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\" returns successfully" Dec 13 01:13:45.599580 kubelet[2603]: I1213 01:13:45.599237 2603 scope.go:117] "RemoveContainer" containerID="4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82" Dec 13 01:13:45.603750 containerd[1459]: time="2024-12-13T01:13:45.603492499Z" level=info msg="RemoveContainer for \"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82\"" Dec 13 01:13:45.615217 containerd[1459]: time="2024-12-13T01:13:45.615153283Z" level=info msg="RemoveContainer for \"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82\" returns successfully" Dec 13 01:13:45.615899 kubelet[2603]: I1213 01:13:45.615519 2603 scope.go:117] "RemoveContainer" containerID="c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84" Dec 13 01:13:45.617456 containerd[1459]: 
time="2024-12-13T01:13:45.617410260Z" level=info msg="RemoveContainer for \"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84\"" Dec 13 01:13:45.623464 containerd[1459]: time="2024-12-13T01:13:45.623367632Z" level=info msg="RemoveContainer for \"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84\" returns successfully" Dec 13 01:13:45.623719 kubelet[2603]: I1213 01:13:45.623630 2603 scope.go:117] "RemoveContainer" containerID="4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300" Dec 13 01:13:45.625700 containerd[1459]: time="2024-12-13T01:13:45.625657601Z" level=info msg="RemoveContainer for \"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300\"" Dec 13 01:13:45.629115 containerd[1459]: time="2024-12-13T01:13:45.629058470Z" level=info msg="RemoveContainer for \"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300\" returns successfully" Dec 13 01:13:45.630215 kubelet[2603]: I1213 01:13:45.630176 2603 scope.go:117] "RemoveContainer" containerID="8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28" Dec 13 01:13:45.634400 containerd[1459]: time="2024-12-13T01:13:45.634178388Z" level=info msg="RemoveContainer for \"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28\"" Dec 13 01:13:45.637468 containerd[1459]: time="2024-12-13T01:13:45.637425447Z" level=info msg="RemoveContainer for \"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28\" returns successfully" Dec 13 01:13:45.637641 kubelet[2603]: I1213 01:13:45.637557 2603 scope.go:117] "RemoveContainer" containerID="b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e" Dec 13 01:13:45.637722 containerd[1459]: time="2024-12-13T01:13:45.637693884Z" level=error msg="ContainerStatus for \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\": not found" Dec 13 01:13:45.637864 kubelet[2603]: E1213 01:13:45.637812 2603 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\": not found" containerID="b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e" Dec 13 01:13:45.637864 kubelet[2603]: I1213 01:13:45.637853 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e"} err="failed to get container status \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9d1494ebc1ab9117602d2f1637a8f6259902a3030748aa58510766450b8103e\": not found" Dec 13 01:13:45.637972 kubelet[2603]: I1213 01:13:45.637873 2603 scope.go:117] "RemoveContainer" containerID="4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82" Dec 13 01:13:45.638076 containerd[1459]: time="2024-12-13T01:13:45.638043516Z" level=error msg="ContainerStatus for \"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82\": not found" Dec 13 01:13:45.638266 kubelet[2603]: E1213 01:13:45.638215 2603 remote_runtime.go:432] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82\": not found" containerID="4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82" Dec 13 01:13:45.638318 kubelet[2603]: I1213 01:13:45.638255 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82"} err="failed to get container status \"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82\": rpc error: code = NotFound desc = an error occurred when try to find container \"4cee9b7b5a5d3b6e09ea1a0644934afacc5dd6ae8b9d11a7f8381b6b9de5ea82\": not found" Dec 13 01:13:45.638318 kubelet[2603]: I1213 01:13:45.638288 2603 scope.go:117] "RemoveContainer" containerID="c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84" Dec 13 01:13:45.638479 containerd[1459]: time="2024-12-13T01:13:45.638450916Z" level=error msg="ContainerStatus for \"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84\": not found" Dec 13 01:13:45.638633 kubelet[2603]: E1213 01:13:45.638581 2603 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84\": not found" containerID="c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84" Dec 13 01:13:45.638633 kubelet[2603]: I1213 01:13:45.638616 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84"} err="failed to get container status \"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84\": rpc error: code = NotFound desc = an error occurred when try to find container \"c055a28fadeb0c63a2296863cc87eeafcf45c8818ef796bc97a175d9e2887a84\": not found" Dec 13 01:13:45.638633 kubelet[2603]: I1213 01:13:45.638631 2603 scope.go:117] "RemoveContainer" containerID="4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300" Dec 13 01:13:45.638981 containerd[1459]: time="2024-12-13T01:13:45.638938397Z" level=error msg="ContainerStatus for \"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300\": not found" Dec 13 01:13:45.639135 kubelet[2603]: E1213 01:13:45.639082 2603 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300\": not found" containerID="4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300" Dec 13 01:13:45.639135 kubelet[2603]: I1213 01:13:45.639130 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300"} err="failed to get container status \"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"4b55ac04aee7077ad73175478c41b0f0eccbf5b6f46602bf052f53a51f0c1300\": not found" Dec 13 01:13:45.639238 kubelet[2603]: I1213 01:13:45.639142 2603 scope.go:117] "RemoveContainer" containerID="8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28" Dec 13 01:13:45.639489 containerd[1459]: time="2024-12-13T01:13:45.639353762Z" level=error msg="ContainerStatus for \"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28\": not found" Dec 13 01:13:45.639529 kubelet[2603]: E1213 01:13:45.639448 2603 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28\": not found" containerID="8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28" Dec 13 01:13:45.639529 kubelet[2603]: I1213 01:13:45.639466 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28"} err="failed to get container status \"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ec18875bfc622300ddb15709fae0b3728dceb2504632bc3f272b1a8ed0afa28\": not found" Dec 13 01:13:45.995107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09-rootfs.mount: Deactivated successfully. Dec 13 01:13:45.995240 systemd[1]: var-lib-kubelet-pods-59351a91\x2d146a\x2d4b1b\x2d8320\x2dffb2ac0f06f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhp494.mount: Deactivated successfully. Dec 13 01:13:45.995326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5de7e1f150c4b578e1dd9f6c2a196131f32a4957afb7b58b51aef9cd96634b09-shm.mount: Deactivated successfully. Dec 13 01:13:45.995402 systemd[1]: var-lib-kubelet-pods-3fbb975e\x2d3cf2\x2d4d15\x2d9c37\x2db76802b6dcae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drbccr.mount: Deactivated successfully. Dec 13 01:13:45.995481 systemd[1]: var-lib-kubelet-pods-3fbb975e\x2d3cf2\x2d4d15\x2d9c37\x2db76802b6dcae-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:13:45.995566 systemd[1]: var-lib-kubelet-pods-3fbb975e\x2d3cf2\x2d4d15\x2d9c37\x2db76802b6dcae-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:13:46.949576 sshd[4266]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:46.964192 systemd[1]: sshd@26-10.0.0.86:22-10.0.0.1:50208.service: Deactivated successfully. Dec 13 01:13:46.966017 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:13:46.967586 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:13:46.968816 systemd[1]: Started sshd@27-10.0.0.86:22-10.0.0.1:50220.service - OpenSSH per-connection server daemon (10.0.0.1:50220). Dec 13 01:13:46.969752 systemd-logind[1448]: Removed session 27. 
Dec 13 01:13:47.008024 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 50220 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:13:47.009611 sshd[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:13:47.013569 systemd-logind[1448]: New session 28 of user core. Dec 13 01:13:47.022235 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 01:13:47.396711 kubelet[2603]: I1213 01:13:47.396577 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fbb975e-3cf2-4d15-9c37-b76802b6dcae" path="/var/lib/kubelet/pods/3fbb975e-3cf2-4d15-9c37-b76802b6dcae/volumes" Dec 13 01:13:47.397619 kubelet[2603]: I1213 01:13:47.397400 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59351a91-146a-4b1b-8320-ffb2ac0f06f7" path="/var/lib/kubelet/pods/59351a91-146a-4b1b-8320-ffb2ac0f06f7/volumes" Dec 13 01:13:47.438404 sshd[4427]: pam_unix(sshd:session): session closed for user core Dec 13 01:13:47.445558 systemd[1]: sshd@27-10.0.0.86:22-10.0.0.1:50220.service: Deactivated successfully. Dec 13 01:13:47.447487 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 01:13:47.449832 systemd-logind[1448]: Session 28 logged out. Waiting for processes to exit. Dec 13 01:13:47.456389 kubelet[2603]: I1213 01:13:47.456341 2603 topology_manager.go:215] "Topology Admit Handler" podUID="e2e4884c-1bb4-4c47-8de9-b8d64482205a" podNamespace="kube-system" podName="cilium-9ttrc" Dec 13 01:13:47.457136 kubelet[2603]: E1213 01:13:47.457078 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fbb975e-3cf2-4d15-9c37-b76802b6dcae" containerName="mount-cgroup" Dec 13 01:13:47.457136 kubelet[2603]: E1213 01:13:47.457124 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fbb975e-3cf2-4d15-9c37-b76802b6dcae" containerName="mount-bpf-fs" Dec 13 01:13:47.457136 kubelet[2603]: E1213 01:13:47.457132 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fbb975e-3cf2-4d15-9c37-b76802b6dcae" containerName="clean-cilium-state" Dec 13 01:13:47.457136 kubelet[2603]: E1213 01:13:47.457139 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fbb975e-3cf2-4d15-9c37-b76802b6dcae" containerName="apply-sysctl-overwrites" Dec 13 01:13:47.457136 kubelet[2603]: E1213 01:13:47.457145 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59351a91-146a-4b1b-8320-ffb2ac0f06f7" containerName="cilium-operator" Dec 13 01:13:47.457455 kubelet[2603]: E1213 01:13:47.457152 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fbb975e-3cf2-4d15-9c37-b76802b6dcae" containerName="cilium-agent" Dec 13 01:13:47.457647 kubelet[2603]: I1213 01:13:47.457631 2603 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fbb975e-3cf2-4d15-9c37-b76802b6dcae" containerName="cilium-agent" Dec 13 01:13:47.457647 kubelet[2603]: I1213 01:13:47.457646 2603 memory_manager.go:354] "RemoveStaleState removing state" podUID="59351a91-146a-4b1b-8320-ffb2ac0f06f7" containerName="cilium-operator" Dec 13 01:13:47.462043 systemd[1]: Started sshd@28-10.0.0.86:22-10.0.0.1:50230.service - OpenSSH per-connection server daemon (10.0.0.1:50230). Dec 13 01:13:47.465560 systemd-logind[1448]: Removed session 28. Dec 13 01:13:47.473712 systemd[1]: Created slice kubepods-burstable-pode2e4884c_1bb4_4c47_8de9_b8d64482205a.slice - libcontainer container kubepods-burstable-pode2e4884c_1bb4_4c47_8de9_b8d64482205a.slice. 
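The "Cleaned up orphaned pod volumes dir" lines above are kubelet's housekeeping: once every volume under /var/lib/kubelet/pods/<uid>/volumes has been unmounted and detached, as traced through the teardown earlier, the now-empty directory can be deleted. A hypothetical sketch of that check; kubelet's real cleanup also walks plugin subdirectories:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedVolumesDir removes the pod's volumes directory only
// once nothing remains mounted beneath it, matching the log's
// "Cleaned up orphaned pod volumes dir" outcome. Illustrative helper.
func cleanupOrphanedVolumesDir(kubeletRoot, podUID string) error {
	dir := filepath.Join(kubeletRoot, "pods", podUID, "volumes")
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	if len(entries) > 0 {
		return fmt.Errorf("pod %s still has %d volume entries", podUID, len(entries))
	}
	return os.Remove(dir)
}

func main() {
	err := cleanupOrphanedVolumesDir("/var/lib/kubelet", "3fbb975e-3cf2-4d15-9c37-b76802b6dcae")
	fmt.Println(err)
}
```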
Dec 13 01:13:47.497239 sshd[4441]: Accepted publickey for core from 10.0.0.1 port 50230 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:13:47.499017 sshd[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:13:47.503760 systemd-logind[1448]: New session 29 of user core.
Dec 13 01:13:47.511206 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 01:13:47.515016 kubelet[2603]: I1213 01:13:47.514968 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2e4884c-1bb4-4c47-8de9-b8d64482205a-cilium-run\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515106 kubelet[2603]: I1213 01:13:47.515054 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2e4884c-1bb4-4c47-8de9-b8d64482205a-clustermesh-secrets\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515106 kubelet[2603]: I1213 01:13:47.515075 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2e4884c-1bb4-4c47-8de9-b8d64482205a-host-proc-sys-kernel\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515157 kubelet[2603]: I1213 01:13:47.515107 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2e4884c-1bb4-4c47-8de9-b8d64482205a-xtables-lock\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515157 kubelet[2603]: I1213 01:13:47.515123 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e2e4884c-1bb4-4c47-8de9-b8d64482205a-cilium-ipsec-secrets\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515157 kubelet[2603]: I1213 01:13:47.515138 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzh4c\" (UniqueName: \"kubernetes.io/projected/e2e4884c-1bb4-4c47-8de9-b8d64482205a-kube-api-access-jzh4c\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515157 kubelet[2603]: I1213 01:13:47.515157 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2e4884c-1bb4-4c47-8de9-b8d64482205a-bpf-maps\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515255 kubelet[2603]: I1213 01:13:47.515180 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2e4884c-1bb4-4c47-8de9-b8d64482205a-cilium-cgroup\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515255 kubelet[2603]: I1213 01:13:47.515194 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2e4884c-1bb4-4c47-8de9-b8d64482205a-cni-path\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515255 kubelet[2603]: I1213 01:13:47.515206 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2e4884c-1bb4-4c47-8de9-b8d64482205a-host-proc-sys-net\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515255 kubelet[2603]: I1213 01:13:47.515220 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2e4884c-1bb4-4c47-8de9-b8d64482205a-hostproc\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515255 kubelet[2603]: I1213 01:13:47.515234 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2e4884c-1bb4-4c47-8de9-b8d64482205a-cilium-config-path\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515255 kubelet[2603]: I1213 01:13:47.515249 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2e4884c-1bb4-4c47-8de9-b8d64482205a-hubble-tls\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515379 kubelet[2603]: I1213 01:13:47.515262 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2e4884c-1bb4-4c47-8de9-b8d64482205a-etc-cni-netd\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.515379 kubelet[2603]: I1213 01:13:47.515277 2603 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2e4884c-1bb4-4c47-8de9-b8d64482205a-lib-modules\") pod \"cilium-9ttrc\" (UID: \"e2e4884c-1bb4-4c47-8de9-b8d64482205a\") " pod="kube-system/cilium-9ttrc"
Dec 13 01:13:47.560325 sshd[4441]: pam_unix(sshd:session): session closed for user core
Dec 13 01:13:47.569799 systemd[1]: sshd@28-10.0.0.86:22-10.0.0.1:50230.service: Deactivated successfully.
Dec 13 01:13:47.571558 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 01:13:47.573238 systemd-logind[1448]: Session 29 logged out. Waiting for processes to exit.
Dec 13 01:13:47.584390 systemd[1]: Started sshd@29-10.0.0.86:22-10.0.0.1:50246.service - OpenSSH per-connection server daemon (10.0.0.1:50246).
Dec 13 01:13:47.585283 systemd-logind[1448]: Removed session 29.
Dec 13 01:13:47.615925 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 50246 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:13:47.619851 sshd[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:13:47.633574 systemd-logind[1448]: New session 30 of user core.
Dec 13 01:13:47.640233 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 01:13:47.777318 kubelet[2603]: E1213 01:13:47.777262 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:47.777962 containerd[1459]: time="2024-12-13T01:13:47.777915922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ttrc,Uid:e2e4884c-1bb4-4c47-8de9-b8d64482205a,Namespace:kube-system,Attempt:0,}"
Dec 13 01:13:47.798298 containerd[1459]: time="2024-12-13T01:13:47.797665534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:13:47.798298 containerd[1459]: time="2024-12-13T01:13:47.798221174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:13:47.798298 containerd[1459]: time="2024-12-13T01:13:47.798233968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:13:47.798519 containerd[1459]: time="2024-12-13T01:13:47.798412375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:13:47.822245 systemd[1]: Started cri-containerd-82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f.scope - libcontainer container 82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f.
Dec 13 01:13:47.846068 containerd[1459]: time="2024-12-13T01:13:47.846009080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ttrc,Uid:e2e4884c-1bb4-4c47-8de9-b8d64482205a,Namespace:kube-system,Attempt:0,} returns sandbox id \"82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f\""
Dec 13 01:13:47.847002 kubelet[2603]: E1213 01:13:47.846974 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:47.849785 containerd[1459]: time="2024-12-13T01:13:47.849738527Z" level=info msg="CreateContainer within sandbox \"82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:13:47.863876 containerd[1459]: time="2024-12-13T01:13:47.863821510Z" level=info msg="CreateContainer within sandbox \"82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"61b34e10a26e1871af98082d74df98eba8b778295a6d68d737544af8422f78a8\""
Dec 13 01:13:47.864587 containerd[1459]: time="2024-12-13T01:13:47.864279064Z" level=info msg="StartContainer for \"61b34e10a26e1871af98082d74df98eba8b778295a6d68d737544af8422f78a8\""
Dec 13 01:13:47.890281 systemd[1]: Started cri-containerd-61b34e10a26e1871af98082d74df98eba8b778295a6d68d737544af8422f78a8.scope - libcontainer container 61b34e10a26e1871af98082d74df98eba8b778295a6d68d737544af8422f78a8.
Dec 13 01:13:47.915896 containerd[1459]: time="2024-12-13T01:13:47.915852754Z" level=info msg="StartContainer for \"61b34e10a26e1871af98082d74df98eba8b778295a6d68d737544af8422f78a8\" returns successfully"
Dec 13 01:13:47.925353 systemd[1]: cri-containerd-61b34e10a26e1871af98082d74df98eba8b778295a6d68d737544af8422f78a8.scope: Deactivated successfully.
Dec 13 01:13:47.957543 containerd[1459]: time="2024-12-13T01:13:47.957472223Z" level=info msg="shim disconnected" id=61b34e10a26e1871af98082d74df98eba8b778295a6d68d737544af8422f78a8 namespace=k8s.io
Dec 13 01:13:47.957543 containerd[1459]: time="2024-12-13T01:13:47.957540993Z" level=warning msg="cleaning up after shim disconnected" id=61b34e10a26e1871af98082d74df98eba8b778295a6d68d737544af8422f78a8 namespace=k8s.io
Dec 13 01:13:47.957543 containerd[1459]: time="2024-12-13T01:13:47.957551452Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:13:48.595532 kubelet[2603]: E1213 01:13:48.595495 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:48.627831 containerd[1459]: time="2024-12-13T01:13:48.627790624Z" level=info msg="CreateContainer within sandbox \"82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:13:48.639390 containerd[1459]: time="2024-12-13T01:13:48.639337250Z" level=info msg="CreateContainer within sandbox \"82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ac82e90dc6bb3d612291b53af7b182adbef5017c83d14c51b60fdfd9a1d30ca5\""
Dec 13 01:13:48.639897 containerd[1459]: time="2024-12-13T01:13:48.639862163Z" level=info msg="StartContainer for \"ac82e90dc6bb3d612291b53af7b182adbef5017c83d14c51b60fdfd9a1d30ca5\""
Dec 13 01:13:48.672251 systemd[1]: Started cri-containerd-ac82e90dc6bb3d612291b53af7b182adbef5017c83d14c51b60fdfd9a1d30ca5.scope - libcontainer container ac82e90dc6bb3d612291b53af7b182adbef5017c83d14c51b60fdfd9a1d30ca5.
Dec 13 01:13:48.695693 containerd[1459]: time="2024-12-13T01:13:48.695657251Z" level=info msg="StartContainer for \"ac82e90dc6bb3d612291b53af7b182adbef5017c83d14c51b60fdfd9a1d30ca5\" returns successfully"
Dec 13 01:13:48.702439 systemd[1]: cri-containerd-ac82e90dc6bb3d612291b53af7b182adbef5017c83d14c51b60fdfd9a1d30ca5.scope: Deactivated successfully.
Dec 13 01:13:48.726955 containerd[1459]: time="2024-12-13T01:13:48.726892346Z" level=info msg="shim disconnected" id=ac82e90dc6bb3d612291b53af7b182adbef5017c83d14c51b60fdfd9a1d30ca5 namespace=k8s.io
Dec 13 01:13:48.726955 containerd[1459]: time="2024-12-13T01:13:48.726950455Z" level=warning msg="cleaning up after shim disconnected" id=ac82e90dc6bb3d612291b53af7b182adbef5017c83d14c51b60fdfd9a1d30ca5 namespace=k8s.io
Dec 13 01:13:48.726955 containerd[1459]: time="2024-12-13T01:13:48.726960144Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:13:49.459471 kubelet[2603]: E1213 01:13:49.459413 2603 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:13:49.600155 kubelet[2603]: E1213 01:13:49.600129 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:49.601960 containerd[1459]: time="2024-12-13T01:13:49.601909481Z" level=info msg="CreateContainer within sandbox \"82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:13:49.620077 containerd[1459]: time="2024-12-13T01:13:49.620016246Z" level=info msg="CreateContainer within sandbox \"82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8630ef4b0be9fbe6d7eaf1ba5e3de79029696ae05b96e101e85577d675eedcf4\""
Dec 13 01:13:49.621006 containerd[1459]: time="2024-12-13T01:13:49.620497023Z" level=info msg="StartContainer for \"8630ef4b0be9fbe6d7eaf1ba5e3de79029696ae05b96e101e85577d675eedcf4\""
Dec 13 01:13:49.623382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac82e90dc6bb3d612291b53af7b182adbef5017c83d14c51b60fdfd9a1d30ca5-rootfs.mount: Deactivated successfully.
Dec 13 01:13:49.652268 systemd[1]: Started cri-containerd-8630ef4b0be9fbe6d7eaf1ba5e3de79029696ae05b96e101e85577d675eedcf4.scope - libcontainer container 8630ef4b0be9fbe6d7eaf1ba5e3de79029696ae05b96e101e85577d675eedcf4.
Dec 13 01:13:49.681690 systemd[1]: cri-containerd-8630ef4b0be9fbe6d7eaf1ba5e3de79029696ae05b96e101e85577d675eedcf4.scope: Deactivated successfully.
Dec 13 01:13:49.682371 containerd[1459]: time="2024-12-13T01:13:49.682064257Z" level=info msg="StartContainer for \"8630ef4b0be9fbe6d7eaf1ba5e3de79029696ae05b96e101e85577d675eedcf4\" returns successfully"
Dec 13 01:13:49.688240 kubelet[2603]: E1213 01:13:49.687437 2603 cadvisor_stats_provider.go:500] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e4884c_1bb4_4c47_8de9_b8d64482205a.slice/cri-containerd-8630ef4b0be9fbe6d7eaf1ba5e3de79029696ae05b96e101e85577d675eedcf4.scope\": RecentStats: unable to find data in memory cache]"
Dec 13 01:13:49.701871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8630ef4b0be9fbe6d7eaf1ba5e3de79029696ae05b96e101e85577d675eedcf4-rootfs.mount: Deactivated successfully.
Dec 13 01:13:49.707546 containerd[1459]: time="2024-12-13T01:13:49.707485269Z" level=info msg="shim disconnected" id=8630ef4b0be9fbe6d7eaf1ba5e3de79029696ae05b96e101e85577d675eedcf4 namespace=k8s.io
Dec 13 01:13:49.707546 containerd[1459]: time="2024-12-13T01:13:49.707542758Z" level=warning msg="cleaning up after shim disconnected" id=8630ef4b0be9fbe6d7eaf1ba5e3de79029696ae05b96e101e85577d675eedcf4 namespace=k8s.io
Dec 13 01:13:49.707662 containerd[1459]: time="2024-12-13T01:13:49.707553909Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:13:50.603802 kubelet[2603]: E1213 01:13:50.603764 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:50.606914 containerd[1459]: time="2024-12-13T01:13:50.606104751Z" level=info msg="CreateContainer within sandbox \"82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:13:50.620002 containerd[1459]: time="2024-12-13T01:13:50.619894015Z" level=info msg="CreateContainer within sandbox \"82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c06676b8778f168e2f68b10832389eeb2b2ffeac954fe77da8679b976a52e199\""
Dec 13 01:13:50.620616 containerd[1459]: time="2024-12-13T01:13:50.620458621Z" level=info msg="StartContainer for \"c06676b8778f168e2f68b10832389eeb2b2ffeac954fe77da8679b976a52e199\""
Dec 13 01:13:50.653268 systemd[1]: Started cri-containerd-c06676b8778f168e2f68b10832389eeb2b2ffeac954fe77da8679b976a52e199.scope - libcontainer container c06676b8778f168e2f68b10832389eeb2b2ffeac954fe77da8679b976a52e199.
Dec 13 01:13:50.676545 systemd[1]: cri-containerd-c06676b8778f168e2f68b10832389eeb2b2ffeac954fe77da8679b976a52e199.scope: Deactivated successfully.
Dec 13 01:13:50.678840 containerd[1459]: time="2024-12-13T01:13:50.678802017Z" level=info msg="StartContainer for \"c06676b8778f168e2f68b10832389eeb2b2ffeac954fe77da8679b976a52e199\" returns successfully"
Dec 13 01:13:50.699664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c06676b8778f168e2f68b10832389eeb2b2ffeac954fe77da8679b976a52e199-rootfs.mount: Deactivated successfully.
Dec 13 01:13:50.702848 containerd[1459]: time="2024-12-13T01:13:50.702775987Z" level=info msg="shim disconnected" id=c06676b8778f168e2f68b10832389eeb2b2ffeac954fe77da8679b976a52e199 namespace=k8s.io
Dec 13 01:13:50.702848 containerd[1459]: time="2024-12-13T01:13:50.702847232Z" level=warning msg="cleaning up after shim disconnected" id=c06676b8778f168e2f68b10832389eeb2b2ffeac954fe77da8679b976a52e199 namespace=k8s.io
Dec 13 01:13:50.703008 containerd[1459]: time="2024-12-13T01:13:50.702856359Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:13:51.607539 kubelet[2603]: E1213 01:13:51.607493 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:51.609822 containerd[1459]: time="2024-12-13T01:13:51.609786739Z" level=info msg="CreateContainer within sandbox \"82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:13:51.624947 containerd[1459]: time="2024-12-13T01:13:51.624898667Z" level=info msg="CreateContainer within sandbox \"82601870fe9adf19c8ed4a59831632e618ba47ab3cc43db790b54d1ef8fb4c7f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b868166fd16473c6b3a40f84c8bad7924bdeafbfb4a15937052011116f21c878\""
Dec 13 01:13:51.625595 containerd[1459]: time="2024-12-13T01:13:51.625456049Z" level=info msg="StartContainer for \"b868166fd16473c6b3a40f84c8bad7924bdeafbfb4a15937052011116f21c878\""
Dec 13 01:13:51.658276 systemd[1]: Started cri-containerd-b868166fd16473c6b3a40f84c8bad7924bdeafbfb4a15937052011116f21c878.scope - libcontainer container b868166fd16473c6b3a40f84c8bad7924bdeafbfb4a15937052011116f21c878.
Dec 13 01:13:51.689652 containerd[1459]: time="2024-12-13T01:13:51.689617314Z" level=info msg="StartContainer for \"b868166fd16473c6b3a40f84c8bad7924bdeafbfb4a15937052011116f21c878\" returns successfully"
Dec 13 01:13:52.100132 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:13:52.217156 kubelet[2603]: I1213 01:13:52.217072 2603 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:13:52Z","lastTransitionTime":"2024-12-13T01:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:13:52.612422 kubelet[2603]: E1213 01:13:52.612377 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:52.624660 kubelet[2603]: I1213 01:13:52.624593 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9ttrc" podStartSLOduration=5.624569675 podStartE2EDuration="5.624569675s" podCreationTimestamp="2024-12-13 01:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:13:52.62420221 +0000 UTC m=+93.317432191" watchObservedRunningTime="2024-12-13 01:13:52.624569675 +0000 UTC m=+93.317799636"
Dec 13 01:13:53.394760 kubelet[2603]: E1213 01:13:53.394719 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:53.779420 kubelet[2603]: E1213 01:13:53.778711 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:53.901857 systemd[1]: run-containerd-runc-k8s.io-b868166fd16473c6b3a40f84c8bad7924bdeafbfb4a15937052011116f21c878-runc.3pvE4N.mount: Deactivated successfully.
Dec 13 01:13:55.106861 systemd-networkd[1396]: lxc_health: Link UP
Dec 13 01:13:55.118625 systemd-networkd[1396]: lxc_health: Gained carrier
Dec 13 01:13:55.398194 kubelet[2603]: E1213 01:13:55.395377 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:55.780313 kubelet[2603]: E1213 01:13:55.780012 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:56.619983 kubelet[2603]: E1213 01:13:56.619949 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:13:56.639333 systemd-networkd[1396]: lxc_health: Gained IPv6LL
Dec 13 01:13:57.620953 kubelet[2603]: E1213 01:13:57.620908 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:14:00.243418 sshd[4451]: pam_unix(sshd:session): session closed for user core
Dec 13 01:14:00.247432 systemd[1]: sshd@29-10.0.0.86:22-10.0.0.1:50246.service: Deactivated successfully.
Dec 13 01:14:00.249877 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 01:14:00.250591 systemd-logind[1448]: Session 30 logged out. Waiting for processes to exit.
Dec 13 01:14:00.251476 systemd-logind[1448]: Removed session 30.