Dec 13 01:16:25.867622 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:16:25.867642 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:16:25.867653 kernel: BIOS-provided physical RAM map:
Dec 13 01:16:25.867659 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:16:25.867666 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:16:25.867672 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:16:25.867679 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 01:16:25.867685 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 01:16:25.867691 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:16:25.867700 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:16:25.867706 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:16:25.867712 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:16:25.867718 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 01:16:25.867724 kernel: NX (Execute Disable) protection: active
Dec 13 01:16:25.867732 kernel: APIC: Static calls initialized
Dec 13 01:16:25.867741 kernel: SMBIOS 2.8 present.
Dec 13 01:16:25.867748 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 01:16:25.867754 kernel: Hypervisor detected: KVM
Dec 13 01:16:25.867761 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:16:25.867768 kernel: kvm-clock: using sched offset of 2221240790 cycles
Dec 13 01:16:25.867774 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:16:25.867782 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:16:25.867789 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:16:25.867796 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:16:25.867803 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 01:16:25.867812 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:16:25.867819 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:16:25.867826 kernel: Using GB pages for direct mapping
Dec 13 01:16:25.867832 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:16:25.867839 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 01:16:25.867846 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:25.867853 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:25.867860 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:25.867869 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 01:16:25.867876 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:25.867883 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:25.867890 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:25.867897 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:16:25.867903 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 01:16:25.867910 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 01:16:25.867921 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 01:16:25.867930 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 01:16:25.867937 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 01:16:25.867944 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 01:16:25.867951 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 01:16:25.867958 kernel: No NUMA configuration found
Dec 13 01:16:25.867965 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 01:16:25.867972 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 01:16:25.867989 kernel: Zone ranges:
Dec 13 01:16:25.867997 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:16:25.868004 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 01:16:25.868011 kernel: Normal empty
Dec 13 01:16:25.868018 kernel: Movable zone start for each node
Dec 13 01:16:25.868025 kernel: Early memory node ranges
Dec 13 01:16:25.868032 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:16:25.868039 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 01:16:25.868046 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 01:16:25.868056 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:16:25.868063 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:16:25.868070 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:16:25.868077 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:16:25.868084 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:16:25.868091 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:16:25.868098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:16:25.868106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:16:25.868113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:16:25.868122 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:16:25.868129 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:16:25.868136 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:16:25.868143 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:16:25.868166 kernel: TSC deadline timer available
Dec 13 01:16:25.868173 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:16:25.868180 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:16:25.868188 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:16:25.868195 kernel: kvm-guest: setup PV sched yield
Dec 13 01:16:25.868205 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:16:25.868212 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:16:25.868219 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:16:25.868226 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:16:25.868233 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:16:25.868241 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:16:25.868247 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:16:25.868254 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:16:25.868261 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:16:25.868270 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:16:25.868279 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:16:25.868287 kernel: random: crng init done
Dec 13 01:16:25.868294 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:16:25.868301 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:16:25.868308 kernel: Fallback order for Node 0: 0
Dec 13 01:16:25.868315 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 01:16:25.868322 kernel: Policy zone: DMA32
Dec 13 01:16:25.868329 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:16:25.868339 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Dec 13 01:16:25.868346 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:16:25.868353 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:16:25.868360 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:16:25.868367 kernel: Dynamic Preempt: voluntary
Dec 13 01:16:25.868375 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:16:25.868382 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:16:25.868390 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:16:25.868397 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:16:25.868407 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:16:25.868414 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:16:25.868421 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:16:25.868428 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:16:25.868435 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:16:25.868442 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:16:25.868449 kernel: Console: colour VGA+ 80x25
Dec 13 01:16:25.868457 kernel: printk: console [ttyS0] enabled
Dec 13 01:16:25.868464 kernel: ACPI: Core revision 20230628
Dec 13 01:16:25.868473 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:16:25.868481 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:16:25.868488 kernel: x2apic enabled
Dec 13 01:16:25.868495 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:16:25.868502 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:16:25.868509 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:16:25.868517 kernel: kvm-guest: setup PV IPIs
Dec 13 01:16:25.868533 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:16:25.868540 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:16:25.868548 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:16:25.868555 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:16:25.868562 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:16:25.868572 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:16:25.868579 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:16:25.868587 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:16:25.868594 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:16:25.868604 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:16:25.868611 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:16:25.868619 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:16:25.868626 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:16:25.868634 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:16:25.868641 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:16:25.868649 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:16:25.868657 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:16:25.868665 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:16:25.868674 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:16:25.868682 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:16:25.868689 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:16:25.868697 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:16:25.868704 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:16:25.868712 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:16:25.868719 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:16:25.868727 kernel: landlock: Up and running.
Dec 13 01:16:25.868734 kernel: SELinux: Initializing.
Dec 13 01:16:25.868744 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:16:25.868751 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:16:25.868759 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:16:25.868767 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:25.868774 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:25.868782 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:16:25.868789 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:16:25.868797 kernel: ... version: 0
Dec 13 01:16:25.868806 kernel: ... bit width: 48
Dec 13 01:16:25.868814 kernel: ... generic registers: 6
Dec 13 01:16:25.868822 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:16:25.868831 kernel: ... max period: 00007fffffffffff
Dec 13 01:16:25.868840 kernel: ... fixed-purpose events: 0
Dec 13 01:16:25.868849 kernel: ... event mask: 000000000000003f
Dec 13 01:16:25.868856 kernel: signal: max sigframe size: 1776
Dec 13 01:16:25.868863 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:16:25.868871 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:16:25.868878 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:16:25.868888 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:16:25.868895 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:16:25.868903 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:16:25.868910 kernel: smpboot: Max logical packages: 1
Dec 13 01:16:25.868917 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:16:25.868925 kernel: devtmpfs: initialized
Dec 13 01:16:25.868932 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:16:25.868940 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:16:25.868947 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:16:25.868957 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:16:25.868964 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:16:25.868972 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:16:25.868985 kernel: audit: type=2000 audit(1734052585.474:1): state=initialized audit_enabled=0 res=1
Dec 13 01:16:25.868992 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:16:25.868999 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:16:25.869007 kernel: cpuidle: using governor menu
Dec 13 01:16:25.869014 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:16:25.869022 kernel: dca service started, version 1.12.1
Dec 13 01:16:25.869032 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:16:25.869039 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:16:25.869047 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:16:25.869054 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:16:25.869062 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:16:25.869069 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:16:25.869077 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:16:25.869084 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:16:25.869092 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:16:25.869101 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:16:25.869109 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:16:25.869116 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:16:25.869123 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:16:25.869131 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:16:25.869138 kernel: ACPI: Interpreter enabled
Dec 13 01:16:25.869156 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:16:25.869164 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:16:25.869171 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:16:25.869182 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:16:25.869189 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:16:25.869196 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:16:25.869368 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:16:25.869498 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:16:25.869618 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:16:25.869628 kernel: PCI host bridge to bus 0000:00
Dec 13 01:16:25.869761 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:16:25.869874 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:16:25.869993 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:16:25.870105 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:16:25.870232 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:16:25.870343 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:16:25.870455 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:16:25.870596 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:16:25.870732 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:16:25.870855 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 01:16:25.870985 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 01:16:25.871109 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 01:16:25.871352 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:16:25.871487 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:16:25.871609 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 01:16:25.871730 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 01:16:25.871853 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 01:16:25.871993 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:16:25.872115 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 01:16:25.872257 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 01:16:25.872381 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 01:16:25.872511 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:16:25.872633 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 01:16:25.872752 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 01:16:25.872871 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 01:16:25.873000 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 01:16:25.873128 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:16:25.873281 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:16:25.873408 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:16:25.873526 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 01:16:25.873644 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 01:16:25.873768 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:16:25.873888 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:16:25.873898 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:16:25.873910 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:16:25.873917 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:16:25.873925 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:16:25.873932 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:16:25.873940 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:16:25.873947 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:16:25.873955 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:16:25.873962 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:16:25.873972 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:16:25.873990 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:16:25.873998 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:16:25.874005 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:16:25.874013 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:16:25.874021 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:16:25.874029 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:16:25.874036 kernel: iommu: Default domain type: Translated
Dec 13 01:16:25.874044 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:16:25.874051 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:16:25.874061 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:16:25.874069 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:16:25.874076 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 01:16:25.874229 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:16:25.874349 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:16:25.874466 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:16:25.874476 kernel: vgaarb: loaded
Dec 13 01:16:25.874484 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:16:25.874495 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:16:25.874503 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:16:25.874511 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:16:25.874518 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:16:25.874526 kernel: pnp: PnP ACPI init
Dec 13 01:16:25.874656 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:16:25.874667 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:16:25.874675 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:16:25.874686 kernel: NET: Registered PF_INET protocol family
Dec 13 01:16:25.874693 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:16:25.874701 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:16:25.874709 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:16:25.874717 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:16:25.874724 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:16:25.874732 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:16:25.874739 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:16:25.874747 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:16:25.874757 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:16:25.874764 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:16:25.874874 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:16:25.874992 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:16:25.875105 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:16:25.875291 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:16:25.875401 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:16:25.875508 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:16:25.875522 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:16:25.875530 kernel: Initialise system trusted keyrings
Dec 13 01:16:25.875538 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:16:25.875546 kernel: Key type asymmetric registered
Dec 13 01:16:25.875553 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:16:25.875561 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:16:25.875568 kernel: io scheduler mq-deadline registered
Dec 13 01:16:25.875576 kernel: io scheduler kyber registered
Dec 13 01:16:25.875583 kernel: io scheduler bfq registered
Dec 13 01:16:25.875593 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:16:25.875601 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:16:25.875608 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:16:25.875616 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:16:25.875623 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:16:25.875631 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:16:25.875639 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:16:25.875646 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:16:25.875654 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:16:25.875785 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:16:25.875796 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:16:25.875909 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:16:25.876031 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:16:25 UTC (1734052585)
Dec 13 01:16:25.876144 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:16:25.876167 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:16:25.876175 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:16:25.876182 kernel: Segment Routing with IPv6
Dec 13 01:16:25.876194 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:16:25.876202 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:16:25.876209 kernel: Key type dns_resolver registered
Dec 13 01:16:25.876216 kernel: IPI shorthand broadcast: enabled
Dec 13 01:16:25.876224 kernel: sched_clock: Marking stable (579002972, 106200506)->(699885205, -14681727)
Dec 13 01:16:25.876231 kernel: registered taskstats version 1
Dec 13 01:16:25.876239 kernel: Loading compiled-in X.509 certificates
Dec 13 01:16:25.876246 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:16:25.876254 kernel: Key type .fscrypt registered
Dec 13 01:16:25.876263 kernel: Key type fscrypt-provisioning registered
Dec 13 01:16:25.876271 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:16:25.876278 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:16:25.876286 kernel: ima: No architecture policies found
Dec 13 01:16:25.876293 kernel: clk: Disabling unused clocks
Dec 13 01:16:25.876300 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:16:25.876308 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:16:25.876315 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:16:25.876323 kernel: Run /init as init process
Dec 13 01:16:25.876332 kernel: with arguments:
Dec 13 01:16:25.876340 kernel: /init
Dec 13 01:16:25.876347 kernel: with environment:
Dec 13 01:16:25.876354 kernel: HOME=/
Dec 13 01:16:25.876362 kernel: TERM=linux
Dec 13 01:16:25.876369 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:16:25.876378 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:16:25.876388 systemd[1]: Detected virtualization kvm.
Dec 13 01:16:25.876398 systemd[1]: Detected architecture x86-64.
Dec 13 01:16:25.876406 systemd[1]: Running in initrd.
Dec 13 01:16:25.876414 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:16:25.876422 systemd[1]: Hostname set to .
Dec 13 01:16:25.876430 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:16:25.876438 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:16:25.876446 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:25.876454 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:25.876465 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:16:25.876483 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:16:25.876494 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:16:25.876502 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:16:25.876512 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:16:25.876522 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:16:25.876531 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:25.876539 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:16:25.876547 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:16:25.876555 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:16:25.876563 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:16:25.876571 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:16:25.876579 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:16:25.876590 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:16:25.876598 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:16:25.876606 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:16:25.876614 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:25.876623 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:25.876633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:25.876641 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:16:25.876650 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:16:25.876660 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:16:25.876668 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:16:25.876677 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:16:25.876685 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:16:25.876693 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:16:25.876701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:25.876709 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:16:25.876718 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:25.876726 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:16:25.876737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:16:25.876762 systemd-journald[191]: Collecting audit messages is disabled.
Dec 13 01:16:25.876783 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:16:25.876792 systemd-journald[191]: Journal started
Dec 13 01:16:25.876812 systemd-journald[191]: Runtime Journal (/run/log/journal/a8f4fe1e278f4730bfa80d8e07d823f1) is 6.0M, max 48.4M, 42.3M free.
Dec 13 01:16:25.882175 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:16:25.882644 systemd-modules-load[194]: Inserted module 'overlay'
Dec 13 01:16:25.911234 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:16:25.911249 kernel: Bridge firewalling registered
Dec 13 01:16:25.910949 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 13 01:16:25.920271 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:16:25.921489 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:25.922099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:25.940292 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:25.941247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:16:25.942253 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:16:25.942704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:25.957446 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:25.957907 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:25.965416 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:25.967841 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:25.971484 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:16:25.986845 dracut-cmdline[230]: dracut-dracut-053
Dec 13 01:16:25.989806 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:16:25.994859 systemd-resolved[226]: Positive Trust Anchors:
Dec 13 01:16:25.994880 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:16:25.994910 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:16:25.997369 systemd-resolved[226]: Defaulting to hostname 'linux'.
Dec 13 01:16:25.998414 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:26.004539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:26.077182 kernel: SCSI subsystem initialized
Dec 13 01:16:26.086178 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:16:26.097180 kernel: iscsi: registered transport (tcp)
Dec 13 01:16:26.118197 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:16:26.118267 kernel: QLogic iSCSI HBA Driver
Dec 13 01:16:26.168395 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:16:26.184270 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:16:26.209549 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:16:26.209584 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:16:26.210574 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:16:26.251173 kernel: raid6: avx2x4 gen() 30700 MB/s
Dec 13 01:16:26.268169 kernel: raid6: avx2x2 gen() 31306 MB/s
Dec 13 01:16:26.285242 kernel: raid6: avx2x1 gen() 26129 MB/s
Dec 13 01:16:26.285260 kernel: raid6: using algorithm avx2x2 gen() 31306 MB/s
Dec 13 01:16:26.303240 kernel: raid6: .... xor() 19980 MB/s, rmw enabled
Dec 13 01:16:26.303258 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:16:26.323172 kernel: xor: automatically using best checksumming function avx
Dec 13 01:16:26.477174 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:16:26.490783 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:16:26.505313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:26.516432 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Dec 13 01:16:26.520961 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:16:26.531323 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:16:26.546663 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Dec 13 01:16:26.578951 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:16:26.583340 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:16:26.643776 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:26.653323 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:16:26.664438 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:16:26.667767 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:16:26.670370 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:26.672893 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:16:26.679853 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 01:16:26.707430 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:16:26.707576 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:16:26.707588 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:16:26.707599 kernel: GPT:9289727 != 19775487
Dec 13 01:16:26.707609 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:16:26.707619 kernel: GPT:9289727 != 19775487
Dec 13 01:16:26.707629 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:16:26.707643 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:26.707654 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:16:26.707665 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:16:26.685726 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:16:26.701016 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:16:26.709223 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:16:26.709278 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:26.713237 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:26.715243 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:16:26.722612 kernel: libata version 3.00 loaded.
Dec 13 01:16:26.715346 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:26.718017 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:26.733596 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
Dec 13 01:16:26.733622 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:16:26.752635 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:16:26.752656 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:16:26.752813 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (475)
Dec 13 01:16:26.752825 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:16:26.752985 kernel: scsi host0: ahci
Dec 13 01:16:26.753141 kernel: scsi host1: ahci
Dec 13 01:16:26.753469 kernel: scsi host2: ahci
Dec 13 01:16:26.753629 kernel: scsi host3: ahci
Dec 13 01:16:26.753820 kernel: scsi host4: ahci
Dec 13 01:16:26.753984 kernel: scsi host5: ahci
Dec 13 01:16:26.754245 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Dec 13 01:16:26.754260 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Dec 13 01:16:26.754271 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Dec 13 01:16:26.754281 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Dec 13 01:16:26.754291 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Dec 13 01:16:26.754351 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Dec 13 01:16:26.730273 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:26.747434 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:16:26.760987 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:16:26.791508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:26.797485 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:16:26.800035 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:16:26.806286 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:16:26.818268 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:16:26.820076 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:26.827788 disk-uuid[557]: Primary Header is updated.
Dec 13 01:16:26.827788 disk-uuid[557]: Secondary Entries is updated.
Dec 13 01:16:26.827788 disk-uuid[557]: Secondary Header is updated.
Dec 13 01:16:26.831176 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:26.835186 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:26.842623 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:27.060414 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:16:27.060461 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:16:27.061216 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:16:27.061277 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:16:27.062178 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:16:27.063175 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:16:27.064181 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:16:27.064193 kernel: ata3.00: applying bridge limits
Dec 13 01:16:27.065183 kernel: ata3.00: configured for UDMA/100
Dec 13 01:16:27.066180 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:16:27.120682 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:16:27.132777 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:16:27.133213 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:16:27.836805 disk-uuid[558]: The operation has completed successfully.
Dec 13 01:16:27.838290 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:16:27.862627 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:16:27.862758 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:16:27.890303 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:16:27.893379 sh[591]: Success
Dec 13 01:16:27.906191 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:16:27.937224 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:16:27.950604 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:16:27.955521 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:16:27.964855 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:16:27.964886 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:27.964897 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:16:27.966609 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:16:27.966623 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:16:27.971177 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:16:27.973480 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:16:27.981282 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:16:27.983788 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:16:27.991445 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:27.991475 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:27.991488 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:27.994189 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:28.003280 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:16:28.004848 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:28.013891 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:16:28.020369 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:16:28.071583 ignition[683]: Ignition 2.19.0
Dec 13 01:16:28.071596 ignition[683]: Stage: fetch-offline
Dec 13 01:16:28.071631 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:28.071641 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:28.071729 ignition[683]: parsed url from cmdline: ""
Dec 13 01:16:28.071733 ignition[683]: no config URL provided
Dec 13 01:16:28.071739 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:16:28.071748 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:16:28.071774 ignition[683]: op(1): [started] loading QEMU firmware config module
Dec 13 01:16:28.071779 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:16:28.082316 ignition[683]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:16:28.103760 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:16:28.116311 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:16:28.125787 ignition[683]: parsing config with SHA512: 68d87e1cd2dfaa9de2fb67ab219f6b1c41ed6fcfdd1bfc4420c1b99139695468625623b7acf1fa6450f8ca0d5a962bf123d0d262a324b63617d2e9f269228979
Dec 13 01:16:28.129441 unknown[683]: fetched base config from "system"
Dec 13 01:16:28.129452 unknown[683]: fetched user config from "qemu"
Dec 13 01:16:28.129818 ignition[683]: fetch-offline: fetch-offline passed
Dec 13 01:16:28.132513 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:16:28.129877 ignition[683]: Ignition finished successfully
Dec 13 01:16:28.136316 systemd-networkd[780]: lo: Link UP
Dec 13 01:16:28.136320 systemd-networkd[780]: lo: Gained carrier
Dec 13 01:16:28.137836 systemd-networkd[780]: Enumeration completed
Dec 13 01:16:28.137906 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:16:28.138230 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:28.138233 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:16:28.138905 systemd-networkd[780]: eth0: Link UP
Dec 13 01:16:28.138908 systemd-networkd[780]: eth0: Gained carrier
Dec 13 01:16:28.138914 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:28.139949 systemd[1]: Reached target network.target - Network.
Dec 13 01:16:28.141525 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:16:28.153218 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.143/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:16:28.153294 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:16:28.164962 ignition[783]: Ignition 2.19.0
Dec 13 01:16:28.164977 ignition[783]: Stage: kargs
Dec 13 01:16:28.165137 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:28.165164 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:28.165934 ignition[783]: kargs: kargs passed
Dec 13 01:16:28.165991 ignition[783]: Ignition finished successfully
Dec 13 01:16:28.169078 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:16:28.180303 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:16:28.192373 ignition[792]: Ignition 2.19.0
Dec 13 01:16:28.192384 ignition[792]: Stage: disks
Dec 13 01:16:28.192534 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:28.192547 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:28.193327 ignition[792]: disks: disks passed
Dec 13 01:16:28.193372 ignition[792]: Ignition finished successfully
Dec 13 01:16:28.199207 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:16:28.201353 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:16:28.201801 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:16:28.202163 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:16:28.202658 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:16:28.202991 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:16:28.220271 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:16:28.232167 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:16:28.238636 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:16:28.248236 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:16:28.332169 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:16:28.332480 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:16:28.333987 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:16:28.349213 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:28.350867 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:16:28.352070 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:16:28.352107 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:16:28.361575 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811)
Dec 13 01:16:28.361592 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:28.361603 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:28.352127 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:16:28.364165 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:28.358906 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:16:28.364235 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:16:28.367876 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:28.368979 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:28.399373 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:16:28.404406 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:16:28.408180 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:16:28.412745 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:16:28.493962 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:16:28.501382 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:16:28.505309 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:16:28.509168 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:28.527689 ignition[924]: INFO : Ignition 2.19.0
Dec 13 01:16:28.527689 ignition[924]: INFO : Stage: mount
Dec 13 01:16:28.530551 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:28.530551 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:16:28.530551 ignition[924]: INFO : mount: mount passed
Dec 13 01:16:28.530551 ignition[924]: INFO : Ignition finished successfully
Dec 13 01:16:28.529255 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:16:28.530794 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:16:28.538373 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:16:28.964323 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:16:28.981281 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:28.987172 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939)
Dec 13 01:16:28.989326 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:28.989340 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:28.989351 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:16:28.992171 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:16:28.993634 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:29.012134 ignition[956]: INFO : Ignition 2.19.0 Dec 13 01:16:29.012134 ignition[956]: INFO : Stage: files Dec 13 01:16:29.014101 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:29.014101 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:16:29.014101 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:16:29.017704 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:16:29.017704 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:16:29.017704 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:16:29.017704 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:16:29.017704 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:16:29.017316 unknown[956]: wrote ssh authorized keys file for user: core Dec 13 01:16:29.025597 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:16:29.025597 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:16:29.057681 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:16:29.139904 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:16:29.139904 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:16:29.143812 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 01:16:29.605501 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:16:29.722213 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:16:29.724090 ignition[956]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:29.724090 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:16:29.797356 systemd-networkd[780]: eth0: Gained IPv6LL Dec 13 01:16:30.147311 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:16:30.503557 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:16:30.503557 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 01:16:30.507155 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:16:30.507155 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:16:30.507155 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 01:16:30.507155 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 01:16:30.507155 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:16:30.507155 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:16:30.507155 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 01:16:30.507155 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:16:30.528055 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:16:30.532339 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:16:30.533908 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:16:30.533908 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:16:30.533908 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:16:30.533908 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:16:30.533908 
ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:16:30.533908 ignition[956]: INFO : files: files passed Dec 13 01:16:30.533908 ignition[956]: INFO : Ignition finished successfully Dec 13 01:16:30.535136 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:16:30.549269 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:16:30.550887 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:16:30.552753 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:16:30.552875 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:16:30.560414 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:16:30.563024 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:30.563024 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:30.566055 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:16:30.569448 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:16:30.572433 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:16:30.584298 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:16:30.605808 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:16:30.605940 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:16:30.606778 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:16:30.609516 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:16:30.609876 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:16:30.610615 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:16:30.628551 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:16:30.635275 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:16:30.644947 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:16:30.646232 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:16:30.648435 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:16:30.650431 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:16:30.650542 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:16:30.652685 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:16:30.654446 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:16:30.656486 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:16:30.658501 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:16:30.660504 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:16:30.662659 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
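
The files stage above fetched helm v3.13.2, the cilium CLI, and a kubernetes sysext image, wrote several YAML manifests into /home/core, and flipped unit presets (prepare-helm.service enabled, coreos-metadata.service disabled). For orientation only, here is a sketch of the kind of Ignition v3 fragment that produces such operations; this is not the config this machine booted with, field names follow the Ignition v3 spec as published, and the version string is an assumption.

import json

# Hypothetical Ignition v3 fragment mirroring two of the operations logged
# above: op(3)'s helm download and the op(10)/op(12) unit presets.
config = {
    "ignition": {"version": "3.3.0"},
    "storage": {"files": [{
        "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
        "contents": {
            "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
    }]},
    "systemd": {"units": [
        {"name": "prepare-helm.service", "enabled": True},
        {"name": "coreos-metadata.service", "enabled": False},
    ]},
}
print(json.dumps(config, indent=2))
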
Dec 13 01:16:30.664767 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:16:30.667023 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:16:30.669011 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:16:30.671301 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:16:30.673081 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:16:30.673203 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:16:30.675447 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:30.676923 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:30.678939 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:16:30.679045 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:30.681109 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:16:30.681238 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:16:30.683360 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:16:30.683466 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:16:30.685444 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:16:30.687135 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:16:30.692199 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:30.693836 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:16:30.695725 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:16:30.697653 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:16:30.697744 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:16:30.700013 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:16:30.700109 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:16:30.701845 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:16:30.701984 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:16:30.703868 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:16:30.703983 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:16:30.717284 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:16:30.718776 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:16:30.719960 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:16:30.720073 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:30.722145 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:16:30.722347 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:16:30.727968 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:16:30.728072 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Dec 13 01:16:30.731102 ignition[1010]: INFO : Ignition 2.19.0 Dec 13 01:16:30.731102 ignition[1010]: INFO : Stage: umount Dec 13 01:16:30.731102 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:30.731102 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:16:30.731102 ignition[1010]: INFO : umount: umount passed Dec 13 01:16:30.731102 ignition[1010]: INFO : Ignition finished successfully Dec 13 01:16:30.732084 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:16:30.732220 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:16:30.732674 systemd[1]: Stopped target network.target - Network. Dec 13 01:16:30.736337 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:16:30.736388 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:16:30.736874 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:16:30.736923 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:16:30.739142 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:16:30.739199 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:16:30.739461 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:16:30.739503 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:16:30.742751 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:16:30.744541 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:16:30.748189 systemd-networkd[780]: eth0: DHCPv6 lease lost Dec 13 01:16:30.750923 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:16:30.751083 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:16:30.754545 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:16:30.754681 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:16:30.756956 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:16:30.757012 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:30.763251 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:16:30.764433 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:16:30.764484 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:16:30.767018 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:16:30.767068 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:30.769089 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:16:30.769134 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:30.771227 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:16:30.771273 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:30.773726 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:16:30.777572 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:16:30.786306 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:16:30.786427 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:16:30.790910 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Dec 13 01:16:30.791097 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:30.793300 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:16:30.793345 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:30.795374 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:16:30.795412 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:30.797331 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:16:30.797380 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:16:30.799633 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:16:30.799679 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:16:30.801608 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:16:30.801656 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:16:30.811304 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:16:30.811533 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:16:30.811585 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:30.814014 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:16:30.814060 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:16:30.814410 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:16:30.814453 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:30.818932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:16:30.818978 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:30.835467 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:16:30.835580 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:16:30.980293 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:16:30.980446 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:16:30.983085 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:16:30.984547 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:16:30.984618 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:16:31.002300 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:16:31.008677 systemd[1]: Switching root. Dec 13 01:16:31.037604 systemd-journald[191]: Journal stopped Dec 13 01:16:32.147981 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). 
Dec 13 01:16:32.148042 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:16:32.148063 kernel: SELinux: policy capability open_perms=1 Dec 13 01:16:32.148080 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:16:32.148091 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:16:32.148102 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:16:32.148116 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:16:32.148127 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:16:32.148138 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:16:32.148169 kernel: audit: type=1403 audit(1734052591.449:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:16:32.148187 systemd[1]: Successfully loaded SELinux policy in 41.188ms. Dec 13 01:16:32.148208 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.366ms. Dec 13 01:16:32.148221 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:16:32.148234 systemd[1]: Detected virtualization kvm. Dec 13 01:16:32.148246 systemd[1]: Detected architecture x86-64. Dec 13 01:16:32.148258 systemd[1]: Detected first boot. Dec 13 01:16:32.148269 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:16:32.148281 zram_generator::config[1056]: No configuration found. Dec 13 01:16:32.148294 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:16:32.148313 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:16:32.148325 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:16:32.148337 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:16:32.148354 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:16:32.148366 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:16:32.148379 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:16:32.148390 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:16:32.148402 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:16:32.148414 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:16:32.148429 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:16:32.148445 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:16:32.148457 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:32.148469 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:32.148482 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:16:32.148494 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:16:32.148507 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
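
The systemd 255 feature string logged above is a compact record of compile-time options, and is trivial to split if one ever needs it programmatically (list abbreviated here; the trailing default-hierarchy=unified token carries no +/- sign and is skipped):

# Illustrative: separate the "+FOO"/"-BAR" tokens of a systemd feature string
# into enabled and disabled option lists.
features = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -ACL +TPM2"
enabled = [f[1:] for f in features.split() if f[0] == "+"]
disabled = [f[1:] for f in features.split() if f[0] == "-"]
print(enabled)   # ['PAM', 'AUDIT', 'SELINUX', 'IMA', 'SMACK', 'SECCOMP', 'TPM2']
print(disabled)  # ['APPARMOR', 'ACL']
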
Dec 13 01:16:32.148519 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:16:32.148534 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:16:32.148546 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:32.148558 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:16:32.148570 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:16:32.148582 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:16:32.148594 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:16:32.148605 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:16:32.148617 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:16:32.148632 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:16:32.148643 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:16:32.148655 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:16:32.148667 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:16:32.148679 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:32.148691 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:32.148703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:32.148715 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:16:32.148727 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:16:32.148742 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:16:32.148753 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:16:32.148767 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:32.148779 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:16:32.148791 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:16:32.148802 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:16:32.148815 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:16:32.148827 systemd[1]: Reached target machines.target - Containers. Dec 13 01:16:32.148838 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:16:32.148852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:32.148864 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:16:32.148884 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:16:32.148897 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:32.148908 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:16:32.148920 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:32.148931 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Dec 13 01:16:32.148943 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:32.148957 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:16:32.148969 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:16:32.148981 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:16:32.148993 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:16:32.149005 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:16:32.149017 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:16:32.149030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:16:32.149041 kernel: fuse: init (API version 7.39) Dec 13 01:16:32.149053 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:16:32.149066 kernel: loop: module loaded Dec 13 01:16:32.149078 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:16:32.149090 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:16:32.149102 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:16:32.149131 systemd-journald[1126]: Collecting audit messages is disabled. Dec 13 01:16:32.149167 systemd[1]: Stopped verity-setup.service. Dec 13 01:16:32.149179 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:32.149194 systemd-journald[1126]: Journal started Dec 13 01:16:32.149214 systemd-journald[1126]: Runtime Journal (/run/log/journal/a8f4fe1e278f4730bfa80d8e07d823f1) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:16:31.935340 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:16:31.954218 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:16:31.954646 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:16:32.153166 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:16:32.154346 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:16:32.155496 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:16:32.156686 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:16:32.157769 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:16:32.158926 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:16:32.160108 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:16:32.161341 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:16:32.162737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:32.164259 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:16:32.164419 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:16:32.165851 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:32.166022 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:32.167430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
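
A quick sanity check on the journald sizing record above (Runtime Journal is 6.0M, max 48.4M, 42.3M free): the free figure is simply the max size minus current usage, with the 0.1 MiB mismatch being display rounding.

# Values in MiB, copied from the journald record above.
used, cap, free = 6.0, 48.4, 42.3
print(round(cap - used, 1), "vs logged", free)  # 42.4 vs logged 42.3
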
Dec 13 01:16:32.167599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:32.169066 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:16:32.169245 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:16:32.170566 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:32.170728 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:32.172063 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:32.173422 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:16:32.174899 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:16:32.182175 kernel: ACPI: bus type drm_connector registered Dec 13 01:16:32.182655 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:16:32.182838 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:16:32.189965 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:16:32.198227 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:16:32.200490 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:16:32.201635 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:16:32.201667 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:16:32.203647 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:16:32.205907 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:16:32.208017 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:16:32.209175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:32.214496 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:16:32.217383 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:16:32.218708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:16:32.222441 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:16:32.223722 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:16:32.226368 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:16:32.231566 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:16:32.237455 systemd-journald[1126]: Time spent on flushing to /var/log/journal/a8f4fe1e278f4730bfa80d8e07d823f1 is 14.399ms for 955 entries. Dec 13 01:16:32.237455 systemd-journald[1126]: System Journal (/var/log/journal/a8f4fe1e278f4730bfa80d8e07d823f1) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:16:32.261695 systemd-journald[1126]: Received client request to flush runtime journal. Dec 13 01:16:32.261726 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:16:32.236319 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
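
The flush statistics above also yield a per-entry cost: 14.399 ms over 955 entries is roughly 15 microseconds per journal entry.

# Arithmetic on the systemd-journald flush record above.
flush_ms, entries = 14.399, 955
print(f"{flush_ms / entries * 1000:.1f} us/entry")  # 15.1 us/entry
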
Dec 13 01:16:32.240647 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:32.244359 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:16:32.245807 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:16:32.247271 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:16:32.248759 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:16:32.255825 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:16:32.265703 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:16:32.272259 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:16:32.274053 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:16:32.276451 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:32.287202 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:16:32.288746 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Dec 13 01:16:32.288763 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Dec 13 01:16:32.290193 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:16:32.292306 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:16:32.292970 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:16:32.296474 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:16:32.307915 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:16:32.316206 kernel: loop1: detected capacity change from 0 to 140768 Dec 13 01:16:32.334609 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:16:32.342327 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:16:32.350170 kernel: loop2: detected capacity change from 0 to 211296 Dec 13 01:16:32.362181 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Dec 13 01:16:32.362202 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Dec 13 01:16:32.368087 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:32.380168 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:16:32.390178 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 01:16:32.398176 kernel: loop5: detected capacity change from 0 to 211296 Dec 13 01:16:32.405649 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:16:32.406250 (sd-merge)[1198]: Merged extensions into '/usr'. Dec 13 01:16:32.410081 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:16:32.410094 systemd[1]: Reloading... Dec 13 01:16:32.474603 zram_generator::config[1230]: No configuration found. Dec 13 01:16:32.539217 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
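
The (sd-merge) records above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is what triggers the daemon reload logged around them. Conceptually the merge behaves like a read-only overlayfs stack, as in the sketch below; the /run paths and the precedence order shown are assumptions for illustration, not the exact mechanics of systemd-sysext.

# Conceptual sketch only: stack extension /usr trees over the base /usr the
# way an overlayfs mount would (first lowerdir has highest precedence).
extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes"]
lowerdir = ":".join(
    [f"/run/extensions/{name}/usr" for name in reversed(extensions)]
    + ["/usr"])
print(f"mount -t overlay overlay -o lowerdir={lowerdir} /usr")
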
Dec 13 01:16:32.586437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:16:32.635125 systemd[1]: Reloading finished in 224 ms. Dec 13 01:16:32.682622 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:16:32.684128 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:16:32.697377 systemd[1]: Starting ensure-sysext.service... Dec 13 01:16:32.699781 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:16:32.706757 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:16:32.706782 systemd[1]: Reloading... Dec 13 01:16:32.721360 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:16:32.722008 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:16:32.723088 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:16:32.723455 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Dec 13 01:16:32.723603 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Dec 13 01:16:32.726949 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:16:32.727020 systemd-tmpfiles[1262]: Skipping /boot Dec 13 01:16:32.737528 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:16:32.737587 systemd-tmpfiles[1262]: Skipping /boot Dec 13 01:16:32.770167 zram_generator::config[1292]: No configuration found. Dec 13 01:16:32.871841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:16:32.920093 systemd[1]: Reloading finished in 212 ms. Dec 13 01:16:32.939620 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:16:32.951572 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:32.973383 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:32.975854 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:16:32.978192 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:16:32.983244 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:16:32.985999 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:16:32.991368 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:16:32.995372 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:32.995538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:32.999120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:33.002079 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
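
The docker.socket notice above is systemd transparently rewriting a legacy listen path: /var/run has long been a symlink to /run on systemd distributions, so the unit's ListenStream value is normalized rather than rejected. The same rewrite in miniature:

# Path normalization equivalent to the docker.socket warning above.
listen = "/var/run/docker.sock"
print(listen.replace("/var/run/", "/run/", 1))  # /run/docker.sock
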
Dec 13 01:16:33.005963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:33.007725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:33.011201 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:16:33.012330 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:33.013326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:33.014434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:33.016141 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:33.016464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:33.018560 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:33.028138 augenrules[1353]: No rules Dec 13 01:16:33.028360 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:33.030034 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:33.030956 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Dec 13 01:16:33.031629 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:16:33.044480 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:16:33.048131 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:33.048332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:33.061594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:33.064704 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:33.068459 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:33.069597 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:33.073063 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:16:33.074211 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:33.077832 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:33.079616 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:16:33.081312 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:16:33.083085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:33.083373 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:33.088446 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:33.088691 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:33.090411 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:33.090579 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:33.097217 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Dec 13 01:16:33.114418 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373) Dec 13 01:16:33.114496 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1373) Dec 13 01:16:33.113749 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:16:33.116410 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:33.116592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:33.125123 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:33.127877 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:16:33.132193 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1381) Dec 13 01:16:33.142330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:33.147338 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:33.148567 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:33.153337 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:16:33.154466 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:16:33.154500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:33.155232 systemd[1]: Finished ensure-sysext.service. Dec 13 01:16:33.156448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:33.156634 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:33.158138 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:16:33.158346 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:16:33.160470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:33.160651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:33.174741 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:33.174959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:33.184702 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:16:33.184766 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:16:33.190306 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:16:33.194181 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 01:16:33.194198 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Dec 13 01:16:33.199956 systemd-resolved[1338]: Positive Trust Anchors: Dec 13 01:16:33.203267 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:16:33.223340 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:16:33.223357 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:16:33.232237 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:16:33.236594 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 01:16:33.200193 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:16:33.200226 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:16:33.204025 systemd-resolved[1338]: Defaulting to hostname 'linux'. Dec 13 01:16:33.213337 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:16:33.214669 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:16:33.216018 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:16:33.229739 systemd-networkd[1403]: lo: Link UP Dec 13 01:16:33.229744 systemd-networkd[1403]: lo: Gained carrier Dec 13 01:16:33.232257 systemd-networkd[1403]: Enumeration completed Dec 13 01:16:33.232340 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:16:33.232651 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:33.232655 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:16:33.233881 systemd-networkd[1403]: eth0: Link UP Dec 13 01:16:33.233885 systemd-networkd[1403]: eth0: Gained carrier Dec 13 01:16:33.233897 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:33.234057 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:16:33.235581 systemd[1]: Reached target network.target - Network. Dec 13 01:16:33.247268 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:16:33.251061 systemd-networkd[1403]: eth0: DHCPv4 address 10.0.0.143/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:16:33.287953 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:16:33.289424 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:16:33.847408 systemd-resolved[1338]: Clock change detected. Flushing caches. Dec 13 01:16:33.847567 systemd-timesyncd[1413]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:16:33.848188 systemd-timesyncd[1413]: Initial clock synchronization to Fri 2024-12-13 01:16:33.847343 UTC. 
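
The "Clock change detected" record above marks systemd-timesyncd stepping the system clock, and the journal timestamps jump accordingly. Two adjacent stamps copied from this capture bound the size of the step (an upper bound, since some real time also elapsed between the two records):

from datetime import datetime

# Timestamps copied from the two adjacent records above: time-set reached,
# then the systemd-resolved cache flush after the clock step.
before = datetime.fromisoformat("2024-12-13 01:16:33.289424")
after = datetime.fromisoformat("2024-12-13 01:16:33.847408")
print(f"step <= {(after - before).total_seconds():.3f} s")  # ~0.558 s
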
Dec 13 01:16:33.877435 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:16:33.891723 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:33.903374 kernel: kvm_amd: TSC scaling supported Dec 13 01:16:33.903400 kernel: kvm_amd: Nested Virtualization enabled Dec 13 01:16:33.903413 kernel: kvm_amd: Nested Paging enabled Dec 13 01:16:33.904633 kernel: kvm_amd: LBR virtualization supported Dec 13 01:16:33.907228 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 01:16:33.907337 kernel: kvm_amd: Virtual GIF supported Dec 13 01:16:33.926199 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:16:33.959595 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:16:33.983798 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:33.995321 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:16:34.004381 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:16:34.037195 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:16:34.038783 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:34.039936 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:16:34.041160 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:16:34.042439 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:16:34.044018 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:16:34.045240 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:16:34.046495 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:16:34.047740 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:16:34.047766 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:16:34.048706 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:16:34.050492 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:16:34.053209 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:16:34.065510 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:16:34.067848 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:16:34.069364 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:16:34.070486 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:16:34.071427 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:16:34.072384 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:16:34.072412 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:16:34.073336 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:16:34.075353 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:16:34.080195 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 01:16:34.080620 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:16:34.083096 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:16:34.084171 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:16:34.086020 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:16:34.088162 jq[1440]: false Dec 13 01:16:34.088299 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:16:34.094339 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:16:34.097002 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:16:34.101375 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:16:34.104658 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:16:34.105090 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:16:34.105721 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:16:34.108916 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:16:34.110800 extend-filesystems[1441]: Found loop3 Dec 13 01:16:34.110800 extend-filesystems[1441]: Found loop4 Dec 13 01:16:34.110800 extend-filesystems[1441]: Found loop5 Dec 13 01:16:34.110800 extend-filesystems[1441]: Found sr0 Dec 13 01:16:34.110800 extend-filesystems[1441]: Found vda Dec 13 01:16:34.110800 extend-filesystems[1441]: Found vda1 Dec 13 01:16:34.110800 extend-filesystems[1441]: Found vda2 Dec 13 01:16:34.110800 extend-filesystems[1441]: Found vda3 Dec 13 01:16:34.110800 extend-filesystems[1441]: Found usr Dec 13 01:16:34.110800 extend-filesystems[1441]: Found vda4 Dec 13 01:16:34.110800 extend-filesystems[1441]: Found vda6 Dec 13 01:16:34.110800 extend-filesystems[1441]: Found vda7 Dec 13 01:16:34.110800 extend-filesystems[1441]: Found vda9 Dec 13 01:16:34.110800 extend-filesystems[1441]: Checking size of /dev/vda9 Dec 13 01:16:34.111251 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:16:34.115631 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:16:34.115852 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:16:34.118297 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:16:34.118487 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:16:34.132448 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:16:34.133042 dbus-daemon[1439]: [system] SELinux support is enabled Dec 13 01:16:34.133738 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:16:34.135870 jq[1452]: true Dec 13 01:16:34.135319 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 13 01:16:34.137239 extend-filesystems[1441]: Resized partition /dev/vda9 Dec 13 01:16:34.138276 update_engine[1451]: I20241213 01:16:34.137148 1451 main.cc:92] Flatcar Update Engine starting Dec 13 01:16:34.139616 update_engine[1451]: I20241213 01:16:34.139461 1451 update_check_scheduler.cc:74] Next update check in 3m32s Dec 13 01:16:34.143913 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:16:34.145552 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:16:34.145586 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:16:34.147231 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:16:34.150821 jq[1470]: true Dec 13 01:16:34.151741 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:16:34.151773 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:16:34.152943 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1378) Dec 13 01:16:34.160455 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:16:34.166742 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:16:34.170213 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:16:34.173635 tar[1457]: linux-amd64/helm Dec 13 01:16:34.177752 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:16:34.186204 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:16:34.208827 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:16:34.209311 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:16:34.210536 systemd-logind[1449]: New seat seat0. Dec 13 01:16:34.210916 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:16:34.210916 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:16:34.210916 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:16:34.228448 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Dec 13 01:16:34.233375 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:16:34.214229 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:16:34.219425 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:16:34.220920 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:16:34.245577 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:16:34.251321 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:16:34.252537 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:16:34.254533 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:16:34.265153 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Dec 13 01:16:34.270208 systemd[1]: Started sshd@0-10.0.0.143:22-10.0.0.1:46452.service - OpenSSH per-connection server daemon (10.0.0.1:46452). Dec 13 01:16:34.273618 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:16:34.277873 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:16:34.278756 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:16:34.291520 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:16:34.304782 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:16:34.319681 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:16:34.322514 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:16:34.324373 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:16:34.342754 sshd[1512]: Accepted publickey for core from 10.0.0.1 port 46452 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:34.344905 sshd[1512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:34.356564 systemd-logind[1449]: New session 1 of user core. Dec 13 01:16:34.358235 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:16:34.374410 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:16:34.387930 containerd[1473]: time="2024-12-13T01:16:34.387844507Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:16:34.389928 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:16:34.398414 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:16:34.403659 (systemd)[1530]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:16:34.410079 containerd[1473]: time="2024-12-13T01:16:34.410037286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.411760 containerd[1473]: time="2024-12-13T01:16:34.411715423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:34.411760 containerd[1473]: time="2024-12-13T01:16:34.411747012Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:16:34.411760 containerd[1473]: time="2024-12-13T01:16:34.411762261Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:16:34.411980 containerd[1473]: time="2024-12-13T01:16:34.411960202Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:16:34.412017 containerd[1473]: time="2024-12-13T01:16:34.411980550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:16:34.412065 containerd[1473]: time="2024-12-13T01:16:34.412047726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:34.412086 containerd[1473]: time="2024-12-13T01:16:34.412062865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.412282 containerd[1473]: time="2024-12-13T01:16:34.412260646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:34.412282 containerd[1473]: time="2024-12-13T01:16:34.412279671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.412327 containerd[1473]: time="2024-12-13T01:16:34.412293607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:34.412327 containerd[1473]: time="2024-12-13T01:16:34.412304207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.412417 containerd[1473]: time="2024-12-13T01:16:34.412399506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.412656 containerd[1473]: time="2024-12-13T01:16:34.412629898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:34.412776 containerd[1473]: time="2024-12-13T01:16:34.412756185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:34.412776 containerd[1473]: time="2024-12-13T01:16:34.412772516Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:16:34.412893 containerd[1473]: time="2024-12-13T01:16:34.412867373Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:16:34.412985 containerd[1473]: time="2024-12-13T01:16:34.412938146Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:16:34.420006 containerd[1473]: time="2024-12-13T01:16:34.419976726Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:16:34.420043 containerd[1473]: time="2024-12-13T01:16:34.420032040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:16:34.420064 containerd[1473]: time="2024-12-13T01:16:34.420047409Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:16:34.420159 containerd[1473]: time="2024-12-13T01:16:34.420135925Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:16:34.420159 containerd[1473]: time="2024-12-13T01:16:34.420156112Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:16:34.420351 containerd[1473]: time="2024-12-13T01:16:34.420324789Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:16:34.420656 containerd[1473]: time="2024-12-13T01:16:34.420623599Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.420840386Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.420861646Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.420884528Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.420901009Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.420915076Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.420929132Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.420944301Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.420960070Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.420974748Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.420989505Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.421003862Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.421030813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.421044839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421321 containerd[1473]: time="2024-12-13T01:16:34.421059366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421072000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421084033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421097207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421109290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421122074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421134808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421149746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421161398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421173941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421205771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421220228Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421246878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421259752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.421581 containerd[1473]: time="2024-12-13T01:16:34.421271414Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:16:34.421920 containerd[1473]: time="2024-12-13T01:16:34.421904181Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:16:34.422060 containerd[1473]: time="2024-12-13T01:16:34.422043442Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:16:34.422111 containerd[1473]: time="2024-12-13T01:16:34.422099858Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:16:34.422159 containerd[1473]: time="2024-12-13T01:16:34.422146124Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:16:34.422220 containerd[1473]: time="2024-12-13T01:16:34.422208181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.422274 containerd[1473]: time="2024-12-13T01:16:34.422262613Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:16:34.422319 containerd[1473]: time="2024-12-13T01:16:34.422308669Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:16:34.422366 containerd[1473]: time="2024-12-13T01:16:34.422354926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:16:34.422722 containerd[1473]: time="2024-12-13T01:16:34.422674896Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:16:34.422889 containerd[1473]: time="2024-12-13T01:16:34.422868679Z" level=info msg="Connect containerd service" Dec 13 01:16:34.422964 containerd[1473]: time="2024-12-13T01:16:34.422953008Z" level=info msg="using legacy CRI server" Dec 13 01:16:34.423008 containerd[1473]: time="2024-12-13T01:16:34.422996900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:16:34.423130 containerd[1473]: time="2024-12-13T01:16:34.423117266Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:16:34.423790 containerd[1473]: time="2024-12-13T01:16:34.423768928Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:16:34.423971 containerd[1473]: time="2024-12-13T01:16:34.423943906Z" level=info msg="Start subscribing containerd event" Dec 13 01:16:34.424228 containerd[1473]: time="2024-12-13T01:16:34.424214022Z" level=info msg="Start recovering state" Dec 13 01:16:34.424334 containerd[1473]: time="2024-12-13T01:16:34.424321414Z" level=info msg="Start event monitor" Dec 13 01:16:34.424517 containerd[1473]: time="2024-12-13T01:16:34.424386857Z" level=info msg="Start snapshots syncer" Dec 13 01:16:34.424517 containerd[1473]: time="2024-12-13T01:16:34.424399661Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:16:34.424517 containerd[1473]: time="2024-12-13T01:16:34.424409409Z" level=info msg="Start streaming server" Dec 13 01:16:34.424691 containerd[1473]: time="2024-12-13T01:16:34.424676911Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:16:34.424782 containerd[1473]: time="2024-12-13T01:16:34.424769805Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:16:34.424884 containerd[1473]: time="2024-12-13T01:16:34.424863931Z" level=info msg="containerd successfully booted in 0.041359s" Dec 13 01:16:34.424956 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:16:34.507663 systemd[1530]: Queued start job for default target default.target. Dec 13 01:16:34.524459 systemd[1530]: Created slice app.slice - User Application Slice. Dec 13 01:16:34.524485 systemd[1530]: Reached target paths.target - Paths. Dec 13 01:16:34.524498 systemd[1530]: Reached target timers.target - Timers. Dec 13 01:16:34.526000 systemd[1530]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:16:34.538211 systemd[1530]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:16:34.538328 systemd[1530]: Reached target sockets.target - Sockets. Dec 13 01:16:34.538341 systemd[1530]: Reached target basic.target - Basic System. Dec 13 01:16:34.538375 systemd[1530]: Reached target default.target - Main User Target. Dec 13 01:16:34.538404 systemd[1530]: Startup finished in 128ms. Dec 13 01:16:34.539011 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:16:34.541584 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:16:34.601933 systemd[1]: Started sshd@1-10.0.0.143:22-10.0.0.1:46454.service - OpenSSH per-connection server daemon (10.0.0.1:46454). Dec 13 01:16:34.612821 tar[1457]: linux-amd64/LICENSE Dec 13 01:16:34.612821 tar[1457]: linux-amd64/README.md Dec 13 01:16:34.625590 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:16:34.640109 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 46454 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:34.641616 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:34.645608 systemd-logind[1449]: New session 2 of user core. Dec 13 01:16:34.656287 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:16:34.711002 sshd[1543]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:34.717687 systemd[1]: sshd@1-10.0.0.143:22-10.0.0.1:46454.service: Deactivated successfully. Dec 13 01:16:34.719331 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:16:34.720557 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:16:34.729393 systemd[1]: Started sshd@2-10.0.0.143:22-10.0.0.1:46456.service - OpenSSH per-connection server daemon (10.0.0.1:46456).
Dec 13 01:16:34.731484 systemd-logind[1449]: Removed session 2. Dec 13 01:16:34.764295 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 46456 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:34.765904 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:34.769394 systemd-logind[1449]: New session 3 of user core. Dec 13 01:16:34.788293 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:16:34.843885 sshd[1553]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:34.848282 systemd[1]: sshd@2-10.0.0.143:22-10.0.0.1:46456.service: Deactivated successfully. Dec 13 01:16:34.849855 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:16:34.850471 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:16:34.851170 systemd-logind[1449]: Removed session 3. Dec 13 01:16:34.897305 systemd-networkd[1403]: eth0: Gained IPv6LL Dec 13 01:16:34.900201 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:16:34.902075 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:16:34.914394 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:16:34.917225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:34.919446 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:16:34.939382 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:16:34.939618 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:16:34.941506 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:16:34.947144 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:16:35.513045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:35.514625 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:16:35.518255 systemd[1]: Startup finished in 707ms (kernel) + 5.760s (initrd) + 3.552s (userspace) = 10.020s. Dec 13 01:16:35.537547 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:35.996995 kubelet[1581]: E1213 01:16:35.996740 1581 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:36.002765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:36.002988 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:16:44.854655 systemd[1]: Started sshd@3-10.0.0.143:22-10.0.0.1:50386.service - OpenSSH per-connection server daemon (10.0.0.1:50386). Dec 13 01:16:44.891848 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 50386 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:44.893479 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:44.897073 systemd-logind[1449]: New session 4 of user core. Dec 13 01:16:44.907284 systemd[1]: Started session-4.scope - Session 4 of User core. 
Dec 13 01:16:44.961801 sshd[1596]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:44.972429 systemd[1]: sshd@3-10.0.0.143:22-10.0.0.1:50386.service: Deactivated successfully. Dec 13 01:16:44.974103 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:16:44.975496 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:16:44.985417 systemd[1]: Started sshd@4-10.0.0.143:22-10.0.0.1:50402.service - OpenSSH per-connection server daemon (10.0.0.1:50402). Dec 13 01:16:44.986367 systemd-logind[1449]: Removed session 4. Dec 13 01:16:45.018957 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 50402 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:45.020426 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:45.024443 systemd-logind[1449]: New session 5 of user core. Dec 13 01:16:45.034290 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:16:45.084559 sshd[1603]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:45.094275 systemd[1]: sshd@4-10.0.0.143:22-10.0.0.1:50402.service: Deactivated successfully. Dec 13 01:16:45.095972 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:16:45.097605 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:16:45.103715 systemd[1]: Started sshd@5-10.0.0.143:22-10.0.0.1:50406.service - OpenSSH per-connection server daemon (10.0.0.1:50406). Dec 13 01:16:45.104819 systemd-logind[1449]: Removed session 5. Dec 13 01:16:45.135930 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 50406 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:45.137451 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:45.141459 systemd-logind[1449]: New session 6 of user core. Dec 13 01:16:45.152296 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:16:45.207300 sshd[1610]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:45.220901 systemd[1]: sshd@5-10.0.0.143:22-10.0.0.1:50406.service: Deactivated successfully. Dec 13 01:16:45.222658 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:16:45.224294 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:16:45.235413 systemd[1]: Started sshd@6-10.0.0.143:22-10.0.0.1:50418.service - OpenSSH per-connection server daemon (10.0.0.1:50418). Dec 13 01:16:45.236371 systemd-logind[1449]: Removed session 6. Dec 13 01:16:45.269035 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 50418 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:45.270485 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:45.274309 systemd-logind[1449]: New session 7 of user core. Dec 13 01:16:45.288301 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:16:45.345273 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:16:45.345626 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:45.371250 sudo[1620]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:45.373431 sshd[1617]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:45.382982 systemd[1]: sshd@6-10.0.0.143:22-10.0.0.1:50418.service: Deactivated successfully. 
Dec 13 01:16:45.384741 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:16:45.386374 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:16:45.393410 systemd[1]: Started sshd@7-10.0.0.143:22-10.0.0.1:50422.service - OpenSSH per-connection server daemon (10.0.0.1:50422). Dec 13 01:16:45.394175 systemd-logind[1449]: Removed session 7. Dec 13 01:16:45.427351 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 50422 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:45.428983 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:45.432788 systemd-logind[1449]: New session 8 of user core. Dec 13 01:16:45.443301 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:16:45.497637 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:16:45.497975 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:45.501332 sudo[1629]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:45.507501 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:16:45.507858 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:45.530410 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:45.531986 auditctl[1632]: No rules Dec 13 01:16:45.533234 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:16:45.533510 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:45.535295 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:45.565438 augenrules[1650]: No rules Dec 13 01:16:45.567301 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:45.568610 sudo[1628]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:45.570443 sshd[1625]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:45.579748 systemd[1]: sshd@7-10.0.0.143:22-10.0.0.1:50422.service: Deactivated successfully. Dec 13 01:16:45.581273 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:16:45.582876 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:16:45.588384 systemd[1]: Started sshd@8-10.0.0.143:22-10.0.0.1:50428.service - OpenSSH per-connection server daemon (10.0.0.1:50428). Dec 13 01:16:45.589154 systemd-logind[1449]: Removed session 8. Dec 13 01:16:45.621004 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 50428 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:16:45.622479 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:45.626133 systemd-logind[1449]: New session 9 of user core. Dec 13 01:16:45.636289 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:16:45.688953 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:16:45.689338 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:45.965387 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 13 01:16:45.965580 (dockerd)[1680]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:16:46.236797 dockerd[1680]: time="2024-12-13T01:16:46.236643534Z" level=info msg="Starting up" Dec 13 01:16:46.237834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:16:46.245642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:46.483661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:46.487883 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:46.575765 dockerd[1680]: time="2024-12-13T01:16:46.575715876Z" level=info msg="Loading containers: start." Dec 13 01:16:46.575952 kubelet[1712]: E1213 01:16:46.575745 1712 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:46.584162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:46.584380 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:16:46.685203 kernel: Initializing XFRM netlink socket Dec 13 01:16:46.763850 systemd-networkd[1403]: docker0: Link UP Dec 13 01:16:46.783793 dockerd[1680]: time="2024-12-13T01:16:46.783753270Z" level=info msg="Loading containers: done." Dec 13 01:16:46.796928 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2972301303-merged.mount: Deactivated successfully. Dec 13 01:16:46.799110 dockerd[1680]: time="2024-12-13T01:16:46.799070365Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:16:46.799209 dockerd[1680]: time="2024-12-13T01:16:46.799150926Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:16:46.799312 dockerd[1680]: time="2024-12-13T01:16:46.799284276Z" level=info msg="Daemon has completed initialization" Dec 13 01:16:46.835572 dockerd[1680]: time="2024-12-13T01:16:46.835011548Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:16:46.835773 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:16:47.609128 containerd[1473]: time="2024-12-13T01:16:47.609039126Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:16:48.264452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157603972.mount: Deactivated successfully. 
Dec 13 01:16:49.302874 containerd[1473]: time="2024-12-13T01:16:49.302814109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:49.303709 containerd[1473]: time="2024-12-13T01:16:49.303658983Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Dec 13 01:16:49.305120 containerd[1473]: time="2024-12-13T01:16:49.305072985Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:49.307893 containerd[1473]: time="2024-12-13T01:16:49.307843941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:49.309005 containerd[1473]: time="2024-12-13T01:16:49.308952751Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.69987373s" Dec 13 01:16:49.309067 containerd[1473]: time="2024-12-13T01:16:49.309004297Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:16:49.331433 containerd[1473]: time="2024-12-13T01:16:49.331392633Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:16:51.418150 containerd[1473]: time="2024-12-13T01:16:51.418065740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:51.419022 containerd[1473]: time="2024-12-13T01:16:51.418923168Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Dec 13 01:16:51.420289 containerd[1473]: time="2024-12-13T01:16:51.420257421Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:51.423007 containerd[1473]: time="2024-12-13T01:16:51.422958857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:51.424050 containerd[1473]: time="2024-12-13T01:16:51.423984831Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.092558895s" Dec 13 01:16:51.424050 containerd[1473]: time="2024-12-13T01:16:51.424036698Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:16:51.450261 containerd[1473]: time="2024-12-13T01:16:51.450209431Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:16:52.341667 containerd[1473]: time="2024-12-13T01:16:52.341592352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:52.342417 containerd[1473]: time="2024-12-13T01:16:52.342364931Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Dec 13 01:16:52.343678 containerd[1473]: time="2024-12-13T01:16:52.343636556Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:52.346401 containerd[1473]: time="2024-12-13T01:16:52.346361246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:52.349421 containerd[1473]: time="2024-12-13T01:16:52.347478201Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 897.213496ms" Dec 13 01:16:52.349421 containerd[1473]: time="2024-12-13T01:16:52.347510311Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:16:52.372061 containerd[1473]: time="2024-12-13T01:16:52.372009996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:16:53.407775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213693280.mount: Deactivated successfully.
Dec 13 01:16:54.064038 containerd[1473]: time="2024-12-13T01:16:54.063967841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:54.064986 containerd[1473]: time="2024-12-13T01:16:54.064947178Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 01:16:54.066587 containerd[1473]: time="2024-12-13T01:16:54.066514798Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:54.068786 containerd[1473]: time="2024-12-13T01:16:54.068736855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:54.069256 containerd[1473]: time="2024-12-13T01:16:54.069224189Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.697176662s" Dec 13 01:16:54.069291 containerd[1473]: time="2024-12-13T01:16:54.069256419Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:16:54.091534 containerd[1473]: time="2024-12-13T01:16:54.091495095Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:16:54.729783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount993954401.mount: Deactivated successfully. 
Dec 13 01:16:55.373163 containerd[1473]: time="2024-12-13T01:16:55.373099140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:55.373837 containerd[1473]: time="2024-12-13T01:16:55.373794784Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:16:55.374930 containerd[1473]: time="2024-12-13T01:16:55.374897562Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:55.377732 containerd[1473]: time="2024-12-13T01:16:55.377677155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:55.378803 containerd[1473]: time="2024-12-13T01:16:55.378764084Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.287225377s" Dec 13 01:16:55.378843 containerd[1473]: time="2024-12-13T01:16:55.378808827Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:16:55.401048 containerd[1473]: time="2024-12-13T01:16:55.401006125Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:16:55.900878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514591553.mount: Deactivated successfully. 
Dec 13 01:16:55.906676 containerd[1473]: time="2024-12-13T01:16:55.906613149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:55.907390 containerd[1473]: time="2024-12-13T01:16:55.907332308Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:16:55.908576 containerd[1473]: time="2024-12-13T01:16:55.908545042Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:55.910685 containerd[1473]: time="2024-12-13T01:16:55.910651853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:55.911351 containerd[1473]: time="2024-12-13T01:16:55.911320016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 510.266301ms" Dec 13 01:16:55.911391 containerd[1473]: time="2024-12-13T01:16:55.911352267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:16:55.934457 containerd[1473]: time="2024-12-13T01:16:55.934431659Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:16:56.460243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373270649.mount: Deactivated successfully. Dec 13 01:16:56.834621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:16:56.841413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:57.007366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:57.012050 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:57.364929 kubelet[2057]: E1213 01:16:57.364777 2057 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:57.369530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:57.369729 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:16:58.844538 containerd[1473]: time="2024-12-13T01:16:58.844474728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:58.845320 containerd[1473]: time="2024-12-13T01:16:58.845283665Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Dec 13 01:16:58.846536 containerd[1473]: time="2024-12-13T01:16:58.846511438Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:58.849462 containerd[1473]: time="2024-12-13T01:16:58.849419511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:58.850393 containerd[1473]: time="2024-12-13T01:16:58.850353984Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.915894533s" Dec 13 01:16:58.850393 containerd[1473]: time="2024-12-13T01:16:58.850385613Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:17:01.278598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:01.286363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:01.300914 systemd[1]: Reloading requested from client PID 2153 ('systemctl') (unit session-9.scope)... Dec 13 01:17:01.300928 systemd[1]: Reloading... Dec 13 01:17:01.374273 zram_generator::config[2195]: No configuration found. Dec 13 01:17:01.565806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:17:01.639909 systemd[1]: Reloading finished in 338 ms. Dec 13 01:17:01.689848 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:01.693977 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:17:01.694271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:01.706435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:01.845931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:01.851643 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:17:01.891306 kubelet[2243]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:01.891306 kubelet[2243]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:17:01.891306 kubelet[2243]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:01.891682 kubelet[2243]: I1213 01:17:01.891364 2243 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:17:02.118469 kubelet[2243]: I1213 01:17:02.118374 2243 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:17:02.118469 kubelet[2243]: I1213 01:17:02.118398 2243 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:17:02.118585 kubelet[2243]: I1213 01:17:02.118577 2243 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:17:02.135108 kubelet[2243]: E1213 01:17:02.135080 2243 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:02.135834 kubelet[2243]: I1213 01:17:02.135820 2243 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:02.145820 kubelet[2243]: I1213 01:17:02.145792 2243 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:17:02.146859 kubelet[2243]: I1213 01:17:02.146833 2243 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:17:02.146991 kubelet[2243]: I1213 01:17:02.146969 2243 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:17:02.147062 kubelet[2243]: I1213 01:17:02.146994 2243 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:17:02.147062 kubelet[2243]: I1213 01:17:02.147003 2243 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:17:02.147132 kubelet[2243]: I1213 01:17:02.147117 2243 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:02.147241 kubelet[2243]: I1213 01:17:02.147218 2243 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:17:02.147241 kubelet[2243]: I1213 01:17:02.147237 2243 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:17:02.147294 kubelet[2243]: I1213 01:17:02.147263 2243 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:17:02.147294 kubelet[2243]: I1213 01:17:02.147278 2243 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:17:02.148244 kubelet[2243]: I1213 01:17:02.148223 2243 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:17:02.149520 kubelet[2243]: W1213 01:17:02.149451 2243 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:02.149520 kubelet[2243]: E1213 01:17:02.149501 2243 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:02.149607 kubelet[2243]: W1213 01:17:02.149560 2243 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:02.149607 kubelet[2243]: E1213 01:17:02.149596 2243 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:02.154100 kubelet[2243]: I1213 01:17:02.154077 2243 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:17:02.154934 kubelet[2243]: W1213 01:17:02.154916 2243 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:17:02.155680 kubelet[2243]: I1213 01:17:02.155518 2243 server.go:1256] "Started kubelet" Dec 13 01:17:02.156409 kubelet[2243]: I1213 01:17:02.155907 2243 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:17:02.156409 kubelet[2243]: I1213 01:17:02.156257 2243 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:17:02.156409 kubelet[2243]: I1213 01:17:02.156303 2243 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:17:02.156945 kubelet[2243]: I1213 01:17:02.156682 2243 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:17:02.157695 kubelet[2243]: I1213 01:17:02.157162 2243 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:17:02.160206 kubelet[2243]: E1213 01:17:02.159790 2243 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:02.160206 kubelet[2243]: I1213 01:17:02.159920 2243 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:17:02.160206 kubelet[2243]: I1213 01:17:02.159990 2243 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:17:02.160206 kubelet[2243]: I1213 01:17:02.160035 2243 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:17:02.160356 kubelet[2243]: W1213 01:17:02.160310 2243 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:02.160356 kubelet[2243]: E1213 01:17:02.160353 2243 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:02.161281 kubelet[2243]: E1213 01:17:02.160556 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="200ms" Dec 13 01:17:02.161281 kubelet[2243]: I1213 01:17:02.161110 2243 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:17:02.161281 kubelet[2243]: I1213 01:17:02.161214 2243 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:17:02.161539 kubelet[2243]: E1213 01:17:02.161521 2243 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.143:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.143:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181097ac2569b4ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:17:02.155494572 +0000 UTC m=+0.299636626,LastTimestamp:2024-12-13 01:17:02.155494572 +0000 UTC m=+0.299636626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:17:02.161657 kubelet[2243]: E1213 01:17:02.161533 2243 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:17:02.162192 kubelet[2243]: I1213 01:17:02.162157 2243 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:17:02.174578 kubelet[2243]: I1213 01:17:02.174547 2243 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:17:02.175892 kubelet[2243]: I1213 01:17:02.175869 2243 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:17:02.175892 kubelet[2243]: I1213 01:17:02.175890 2243 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:17:02.175992 kubelet[2243]: I1213 01:17:02.175905 2243 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:02.176092 kubelet[2243]: I1213 01:17:02.176065 2243 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:17:02.176092 kubelet[2243]: I1213 01:17:02.176092 2243 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:17:02.176171 kubelet[2243]: I1213 01:17:02.176110 2243 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:17:02.176171 kubelet[2243]: E1213 01:17:02.176159 2243 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:17:02.177211 kubelet[2243]: W1213 01:17:02.177157 2243 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:02.177211 kubelet[2243]: E1213 01:17:02.177202 2243 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:02.261529 kubelet[2243]: I1213 01:17:02.261486 2243 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:02.261783 kubelet[2243]: E1213 01:17:02.261752 2243 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost" Dec 13 01:17:02.276886 kubelet[2243]: E1213 01:17:02.276856 2243 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:17:02.361304 kubelet[2243]: E1213 01:17:02.361281 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="400ms" Dec 13 01:17:02.454659 kubelet[2243]: I1213 01:17:02.454582 2243 policy_none.go:49] "None policy: Start" Dec 13 01:17:02.455219 kubelet[2243]: I1213 01:17:02.455054 2243 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:17:02.455219 kubelet[2243]: I1213 01:17:02.455092 2243 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:17:02.462605 kubelet[2243]: I1213 01:17:02.462567 2243 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:02.462779 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:17:02.463210 kubelet[2243]: E1213 01:17:02.462797 2243 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost" Dec 13 01:17:02.476248 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:17:02.476920 kubelet[2243]: E1213 01:17:02.476899 2243 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:17:02.479240 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:17:02.490950 kubelet[2243]: I1213 01:17:02.490920 2243 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:17:02.491271 kubelet[2243]: I1213 01:17:02.491206 2243 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:17:02.492048 kubelet[2243]: E1213 01:17:02.492031 2243 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:17:02.762116 kubelet[2243]: E1213 01:17:02.762042 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="800ms" Dec 13 01:17:02.864270 kubelet[2243]: I1213 01:17:02.864252 2243 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:02.864527 kubelet[2243]: E1213 01:17:02.864504 2243 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost" Dec 13 01:17:02.877657 kubelet[2243]: I1213 01:17:02.877624 2243 topology_manager.go:215] "Topology Admit Handler" podUID="09d0f1dc5b33d63f31e7283af1547342" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:17:02.878364 kubelet[2243]: I1213 01:17:02.878340 2243 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:17:02.879015 kubelet[2243]: I1213 01:17:02.878996 2243 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:17:02.884276 systemd[1]: Created slice kubepods-burstable-pod09d0f1dc5b33d63f31e7283af1547342.slice - libcontainer container kubepods-burstable-pod09d0f1dc5b33d63f31e7283af1547342.slice. Dec 13 01:17:02.910073 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 01:17:02.922000 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
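
The kubepods-*.slice units created above follow the systemd cgroup driver's naming scheme (CgroupDriver "systemd" in the nodeConfig dump): one slice per QoS class under kubepods.slice, plus one per-pod slice embedding the pod UID with dashes escaped to underscores, as the later kubepods-besteffort-pod861229fd_7522_... units show. A small sketch of that naming, under the assumption that UID dashes are the only characters needing escape in these names:

    # Sketch of the systemd cgroup driver's per-pod slice naming, as seen in
    # the kubepods-*.slice units above. Assumes dashes in the pod UID are the
    # only characters that need systemd-escaping here.
    def pod_slice(qos, pod_uid):
        base = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
        return f"{base}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice("burstable", "09d0f1dc5b33d63f31e7283af1547342"))
    # kubepods-burstable-pod09d0f1dc5b33d63f31e7283af1547342.slice
    print(pod_slice("besteffort", "861229fd-7522-4fb6-b500-4b11b21c7bbc"))
    # kubepods-besteffort-pod861229fd_7522_4fb6_b500_4b11b21c7bbc.slice
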
Dec 13 01:17:02.963689 kubelet[2243]: I1213 01:17:02.963651 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:02.963689 kubelet[2243]: I1213 01:17:02.963678 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:17:02.964045 kubelet[2243]: I1213 01:17:02.963697 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:02.964045 kubelet[2243]: I1213 01:17:02.963725 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:02.964045 kubelet[2243]: I1213 01:17:02.963783 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:02.964045 kubelet[2243]: I1213 01:17:02.963817 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d0f1dc5b33d63f31e7283af1547342-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"09d0f1dc5b33d63f31e7283af1547342\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:02.964045 kubelet[2243]: I1213 01:17:02.963843 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d0f1dc5b33d63f31e7283af1547342-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"09d0f1dc5b33d63f31e7283af1547342\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:02.964151 kubelet[2243]: I1213 01:17:02.963868 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d0f1dc5b33d63f31e7283af1547342-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"09d0f1dc5b33d63f31e7283af1547342\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:02.964151 kubelet[2243]: I1213 01:17:02.963892 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:03.206979 kubelet[2243]: E1213 01:17:03.206879 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:03.207434 containerd[1473]: time="2024-12-13T01:17:03.207388759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:09d0f1dc5b33d63f31e7283af1547342,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:03.220557 kubelet[2243]: E1213 01:17:03.220533 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:03.220828 containerd[1473]: time="2024-12-13T01:17:03.220800040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:03.224058 kubelet[2243]: E1213 01:17:03.224041 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:03.224387 containerd[1473]: time="2024-12-13T01:17:03.224360337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:03.231001 kubelet[2243]: W1213 01:17:03.230955 2243 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:03.231053 kubelet[2243]: E1213 01:17:03.231004 2243 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:03.502731 kubelet[2243]: W1213 01:17:03.502629 2243 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:03.502731 kubelet[2243]: E1213 01:17:03.502688 2243 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:03.520195 kubelet[2243]: W1213 01:17:03.520137 2243 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:03.520195 kubelet[2243]: E1213 01:17:03.520194 2243 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:03.562732 kubelet[2243]: E1213 01:17:03.562694 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.143:6443: connect: connection refused" interval="1.6s" Dec 13 01:17:03.666049 kubelet[2243]: I1213 01:17:03.666031 2243 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:03.666292 kubelet[2243]: E1213 01:17:03.666266 2243 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.143:6443/api/v1/nodes\": dial tcp 10.0.0.143:6443: connect: connection refused" node="localhost" Dec 13 01:17:03.717689 kubelet[2243]: W1213 01:17:03.717648 2243 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:03.717748 kubelet[2243]: E1213 01:17:03.717693 2243 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.143:6443: connect: connection refused Dec 13 01:17:03.738020 kubelet[2243]: E1213 01:17:03.737992 2243 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.143:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.143:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181097ac2569b4ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:17:02.155494572 +0000 UTC m=+0.299636626,LastTimestamp:2024-12-13 01:17:02.155494572 +0000 UTC m=+0.299636626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:17:03.771533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13367403.mount: Deactivated successfully. 
Dec 13 01:17:03.779092 containerd[1473]: time="2024-12-13T01:17:03.779063493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:03.781027 containerd[1473]: time="2024-12-13T01:17:03.780958928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:17:03.781979 containerd[1473]: time="2024-12-13T01:17:03.781947893Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:03.783058 containerd[1473]: time="2024-12-13T01:17:03.783032857Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:03.784095 containerd[1473]: time="2024-12-13T01:17:03.784056106Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:03.785055 containerd[1473]: time="2024-12-13T01:17:03.784998033Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:17:03.785975 containerd[1473]: time="2024-12-13T01:17:03.785945359Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:17:03.789283 containerd[1473]: time="2024-12-13T01:17:03.789243685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:17:03.789965 containerd[1473]: time="2024-12-13T01:17:03.789923069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.07074ms" Dec 13 01:17:03.790645 containerd[1473]: time="2024-12-13T01:17:03.790614596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 566.19106ms" Dec 13 01:17:03.792928 containerd[1473]: time="2024-12-13T01:17:03.792903218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 585.422726ms" Dec 13 01:17:03.938578 containerd[1473]: time="2024-12-13T01:17:03.938471019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:03.938578 containerd[1473]: time="2024-12-13T01:17:03.938521524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:03.938578 containerd[1473]: time="2024-12-13T01:17:03.938534699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:03.938872 containerd[1473]: time="2024-12-13T01:17:03.938614458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:03.940559 containerd[1473]: time="2024-12-13T01:17:03.939868500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:03.940559 containerd[1473]: time="2024-12-13T01:17:03.939916981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:03.940559 containerd[1473]: time="2024-12-13T01:17:03.939927180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:03.940559 containerd[1473]: time="2024-12-13T01:17:03.939981502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:03.941421 containerd[1473]: time="2024-12-13T01:17:03.941139734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:03.941421 containerd[1473]: time="2024-12-13T01:17:03.941396947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:03.941568 containerd[1473]: time="2024-12-13T01:17:03.941497525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:03.941672 containerd[1473]: time="2024-12-13T01:17:03.941610146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:03.962480 systemd[1]: Started cri-containerd-3b1b50e9db63ebc0660898ae646f3fc54404b912ee92a739c53e90766a2150b7.scope - libcontainer container 3b1b50e9db63ebc0660898ae646f3fc54404b912ee92a739c53e90766a2150b7. Dec 13 01:17:03.964458 systemd[1]: Started cri-containerd-3f2618dcdc4d67fc1ec83ef13a4f53dd4c993b1c19d0e35802cb6625b373634b.scope - libcontainer container 3f2618dcdc4d67fc1ec83ef13a4f53dd4c993b1c19d0e35802cb6625b373634b. Dec 13 01:17:03.968971 systemd[1]: Started cri-containerd-5ba4885b2f60cc6bd1f4dadc13efdfafd1a50d043e3d9bafadcddf84e2703ee4.scope - libcontainer container 5ba4885b2f60cc6bd1f4dadc13efdfafd1a50d043e3d9bafadcddf84e2703ee4. 
Dec 13 01:17:04.007467 containerd[1473]: time="2024-12-13T01:17:04.007392001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:09d0f1dc5b33d63f31e7283af1547342,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b1b50e9db63ebc0660898ae646f3fc54404b912ee92a739c53e90766a2150b7\"" Dec 13 01:17:04.009084 kubelet[2243]: E1213 01:17:04.009059 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:04.012640 containerd[1473]: time="2024-12-13T01:17:04.012335522Z" level=info msg="CreateContainer within sandbox \"3b1b50e9db63ebc0660898ae646f3fc54404b912ee92a739c53e90766a2150b7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:17:04.013301 containerd[1473]: time="2024-12-13T01:17:04.013275915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ba4885b2f60cc6bd1f4dadc13efdfafd1a50d043e3d9bafadcddf84e2703ee4\"" Dec 13 01:17:04.014333 containerd[1473]: time="2024-12-13T01:17:04.014256033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f2618dcdc4d67fc1ec83ef13a4f53dd4c993b1c19d0e35802cb6625b373634b\"" Dec 13 01:17:04.014568 kubelet[2243]: E1213 01:17:04.014549 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:04.014931 kubelet[2243]: E1213 01:17:04.014878 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:04.016683 containerd[1473]: time="2024-12-13T01:17:04.016648540Z" level=info msg="CreateContainer within sandbox \"3f2618dcdc4d67fc1ec83ef13a4f53dd4c993b1c19d0e35802cb6625b373634b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:17:04.017421 containerd[1473]: time="2024-12-13T01:17:04.017273883Z" level=info msg="CreateContainer within sandbox \"5ba4885b2f60cc6bd1f4dadc13efdfafd1a50d043e3d9bafadcddf84e2703ee4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:17:04.069816 containerd[1473]: time="2024-12-13T01:17:04.069719026Z" level=info msg="CreateContainer within sandbox \"5ba4885b2f60cc6bd1f4dadc13efdfafd1a50d043e3d9bafadcddf84e2703ee4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"51856766d509bdbded2606ac6dc0157e80bf47651903db430e235a9c18de36e8\"" Dec 13 01:17:04.070489 containerd[1473]: time="2024-12-13T01:17:04.070440229Z" level=info msg="StartContainer for \"51856766d509bdbded2606ac6dc0157e80bf47651903db430e235a9c18de36e8\"" Dec 13 01:17:04.077276 containerd[1473]: time="2024-12-13T01:17:04.077236715Z" level=info msg="CreateContainer within sandbox \"3b1b50e9db63ebc0660898ae646f3fc54404b912ee92a739c53e90766a2150b7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"be39d8125f275296550b1411b5bdd3b84b01cd775211a8eafdfce81cfc7283d0\"" Dec 13 01:17:04.077698 containerd[1473]: time="2024-12-13T01:17:04.077671250Z" level=info msg="StartContainer for \"be39d8125f275296550b1411b5bdd3b84b01cd775211a8eafdfce81cfc7283d0\"" Dec 13 01:17:04.083428 
containerd[1473]: time="2024-12-13T01:17:04.083390656Z" level=info msg="CreateContainer within sandbox \"3f2618dcdc4d67fc1ec83ef13a4f53dd4c993b1c19d0e35802cb6625b373634b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"200242035b454d44c8322088937e2295fa15b46e1a21c5024697584319f109ec\"" Dec 13 01:17:04.084041 containerd[1473]: time="2024-12-13T01:17:04.083983347Z" level=info msg="StartContainer for \"200242035b454d44c8322088937e2295fa15b46e1a21c5024697584319f109ec\"" Dec 13 01:17:04.098344 systemd[1]: Started cri-containerd-51856766d509bdbded2606ac6dc0157e80bf47651903db430e235a9c18de36e8.scope - libcontainer container 51856766d509bdbded2606ac6dc0157e80bf47651903db430e235a9c18de36e8. Dec 13 01:17:04.103671 systemd[1]: Started cri-containerd-be39d8125f275296550b1411b5bdd3b84b01cd775211a8eafdfce81cfc7283d0.scope - libcontainer container be39d8125f275296550b1411b5bdd3b84b01cd775211a8eafdfce81cfc7283d0. Dec 13 01:17:04.116328 systemd[1]: Started cri-containerd-200242035b454d44c8322088937e2295fa15b46e1a21c5024697584319f109ec.scope - libcontainer container 200242035b454d44c8322088937e2295fa15b46e1a21c5024697584319f109ec. Dec 13 01:17:04.156936 containerd[1473]: time="2024-12-13T01:17:04.156658389Z" level=info msg="StartContainer for \"51856766d509bdbded2606ac6dc0157e80bf47651903db430e235a9c18de36e8\" returns successfully" Dec 13 01:17:04.160641 containerd[1473]: time="2024-12-13T01:17:04.160391970Z" level=info msg="StartContainer for \"be39d8125f275296550b1411b5bdd3b84b01cd775211a8eafdfce81cfc7283d0\" returns successfully" Dec 13 01:17:04.164250 containerd[1473]: time="2024-12-13T01:17:04.164221212Z" level=info msg="StartContainer for \"200242035b454d44c8322088937e2295fa15b46e1a21c5024697584319f109ec\" returns successfully" Dec 13 01:17:04.187760 kubelet[2243]: E1213 01:17:04.187716 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:04.191288 kubelet[2243]: E1213 01:17:04.191261 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:04.193729 kubelet[2243]: E1213 01:17:04.193703 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:05.166291 kubelet[2243]: E1213 01:17:05.166254 2243 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:17:05.195573 kubelet[2243]: E1213 01:17:05.195549 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:05.268406 kubelet[2243]: I1213 01:17:05.268388 2243 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:05.275239 kubelet[2243]: I1213 01:17:05.275208 2243 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:17:05.281511 kubelet[2243]: E1213 01:17:05.281495 2243 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:05.382230 kubelet[2243]: E1213 01:17:05.382162 2243 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not 
found" Dec 13 01:17:05.482875 kubelet[2243]: E1213 01:17:05.482713 2243 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:05.583466 kubelet[2243]: E1213 01:17:05.583422 2243 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:05.684205 kubelet[2243]: E1213 01:17:05.684129 2243 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:05.784822 kubelet[2243]: E1213 01:17:05.784701 2243 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:05.885303 kubelet[2243]: E1213 01:17:05.885248 2243 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:05.985822 kubelet[2243]: E1213 01:17:05.985785 2243 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:17:06.151678 kubelet[2243]: I1213 01:17:06.151554 2243 apiserver.go:52] "Watching apiserver" Dec 13 01:17:06.160553 kubelet[2243]: I1213 01:17:06.160530 2243 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:17:07.968518 systemd[1]: Reloading requested from client PID 2525 ('systemctl') (unit session-9.scope)... Dec 13 01:17:07.968540 systemd[1]: Reloading... Dec 13 01:17:08.044332 zram_generator::config[2573]: No configuration found. Dec 13 01:17:08.142631 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:17:08.230163 systemd[1]: Reloading finished in 261 ms. Dec 13 01:17:08.275461 kubelet[2243]: I1213 01:17:08.275423 2243 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:08.275493 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:08.284795 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:17:08.285020 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:08.296416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:08.435097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:08.439904 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:17:08.486634 kubelet[2609]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:08.486634 kubelet[2609]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:17:08.486634 kubelet[2609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:17:08.486981 kubelet[2609]: I1213 01:17:08.486626 2609 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:17:08.491273 kubelet[2609]: I1213 01:17:08.491241 2609 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:17:08.491273 kubelet[2609]: I1213 01:17:08.491263 2609 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:17:08.491437 kubelet[2609]: I1213 01:17:08.491418 2609 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:17:08.493802 kubelet[2609]: I1213 01:17:08.492934 2609 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:17:08.496139 kubelet[2609]: I1213 01:17:08.496104 2609 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:08.505846 kubelet[2609]: I1213 01:17:08.505799 2609 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:17:08.506080 kubelet[2609]: I1213 01:17:08.506056 2609 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:17:08.506256 kubelet[2609]: I1213 01:17:08.506229 2609 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:17:08.506256 kubelet[2609]: I1213 01:17:08.506256 2609 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:17:08.506366 kubelet[2609]: I1213 01:17:08.506265 2609 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:17:08.506366 kubelet[2609]: I1213 01:17:08.506298 2609 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:08.506409 kubelet[2609]: I1213 01:17:08.506396 2609 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:17:08.506437 kubelet[2609]: I1213 01:17:08.506422 2609 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:17:08.506493 kubelet[2609]: I1213 01:17:08.506475 2609 kubelet.go:312] "Adding apiserver pod source" Dec 13 
01:17:08.507148 kubelet[2609]: I1213 01:17:08.507122 2609 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:17:08.508856 kubelet[2609]: I1213 01:17:08.507824 2609 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:17:08.508856 kubelet[2609]: I1213 01:17:08.508219 2609 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:17:08.508144 sudo[2624]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:17:08.508514 sudo[2624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:17:08.509669 kubelet[2609]: I1213 01:17:08.509645 2609 server.go:1256] "Started kubelet" Dec 13 01:17:08.512648 kubelet[2609]: I1213 01:17:08.512349 2609 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:17:08.513625 kubelet[2609]: I1213 01:17:08.512947 2609 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:17:08.513625 kubelet[2609]: I1213 01:17:08.512991 2609 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:17:08.513779 kubelet[2609]: I1213 01:17:08.513749 2609 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:17:08.519254 kubelet[2609]: I1213 01:17:08.516593 2609 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:17:08.521361 kubelet[2609]: I1213 01:17:08.521335 2609 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:17:08.521465 kubelet[2609]: I1213 01:17:08.521447 2609 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:17:08.521605 kubelet[2609]: I1213 01:17:08.521577 2609 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:17:08.524562 kubelet[2609]: E1213 01:17:08.524542 2609 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:17:08.525271 kubelet[2609]: I1213 01:17:08.525246 2609 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:17:08.528132 kubelet[2609]: I1213 01:17:08.527261 2609 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:17:08.528132 kubelet[2609]: I1213 01:17:08.527275 2609 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:17:08.534000 kubelet[2609]: I1213 01:17:08.533965 2609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:17:08.536283 kubelet[2609]: I1213 01:17:08.535915 2609 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:17:08.536283 kubelet[2609]: I1213 01:17:08.535940 2609 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:17:08.536283 kubelet[2609]: I1213 01:17:08.535955 2609 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:17:08.536283 kubelet[2609]: E1213 01:17:08.535999 2609 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:17:08.562943 kubelet[2609]: I1213 01:17:08.562897 2609 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:17:08.562943 kubelet[2609]: I1213 01:17:08.562918 2609 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:17:08.562943 kubelet[2609]: I1213 01:17:08.562934 2609 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:08.563117 kubelet[2609]: I1213 01:17:08.563072 2609 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:17:08.563117 kubelet[2609]: I1213 01:17:08.563090 2609 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:17:08.563117 kubelet[2609]: I1213 01:17:08.563096 2609 policy_none.go:49] "None policy: Start" Dec 13 01:17:08.563717 kubelet[2609]: I1213 01:17:08.563701 2609 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:17:08.563717 kubelet[2609]: I1213 01:17:08.563722 2609 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:17:08.563841 kubelet[2609]: I1213 01:17:08.563829 2609 state_mem.go:75] "Updated machine memory state" Dec 13 01:17:08.567712 kubelet[2609]: I1213 01:17:08.567694 2609 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:17:08.568148 kubelet[2609]: I1213 01:17:08.567914 2609 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:17:08.627098 kubelet[2609]: I1213 01:17:08.627064 2609 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:17:08.636411 kubelet[2609]: I1213 01:17:08.636368 2609 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:17:08.636525 kubelet[2609]: I1213 01:17:08.636481 2609 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:17:08.636525 kubelet[2609]: I1213 01:17:08.636514 2609 topology_manager.go:215] "Topology Admit Handler" podUID="09d0f1dc5b33d63f31e7283af1547342" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:17:08.640933 kubelet[2609]: I1213 01:17:08.639931 2609 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:17:08.641398 kubelet[2609]: I1213 01:17:08.641206 2609 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:17:08.823242 kubelet[2609]: I1213 01:17:08.823086 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:08.823242 kubelet[2609]: I1213 01:17:08.823121 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:08.823242 kubelet[2609]: I1213 01:17:08.823140 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:08.823242 kubelet[2609]: I1213 01:17:08.823172 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:17:08.823423 kubelet[2609]: I1213 01:17:08.823408 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d0f1dc5b33d63f31e7283af1547342-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"09d0f1dc5b33d63f31e7283af1547342\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:08.825451 kubelet[2609]: I1213 01:17:08.825413 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d0f1dc5b33d63f31e7283af1547342-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"09d0f1dc5b33d63f31e7283af1547342\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:08.825524 kubelet[2609]: I1213 01:17:08.825507 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:08.825564 kubelet[2609]: I1213 01:17:08.825549 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:17:08.825633 kubelet[2609]: I1213 01:17:08.825617 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d0f1dc5b33d63f31e7283af1547342-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"09d0f1dc5b33d63f31e7283af1547342\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:08.996805 kubelet[2609]: E1213 01:17:08.996595 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:08.996805 kubelet[2609]: E1213 01:17:08.996686 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:08.996805 kubelet[2609]: E1213 01:17:08.996686 
2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:09.020107 sudo[2624]: pam_unix(sudo:session): session closed for user root Dec 13 01:17:09.508049 kubelet[2609]: I1213 01:17:09.508006 2609 apiserver.go:52] "Watching apiserver" Dec 13 01:17:09.836475 kubelet[2609]: E1213 01:17:09.835969 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:09.836475 kubelet[2609]: E1213 01:17:09.836278 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:09.842586 kubelet[2609]: E1213 01:17:09.842521 2609 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:17:09.843093 kubelet[2609]: E1213 01:17:09.843067 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:09.859526 kubelet[2609]: I1213 01:17:09.859222 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.859133573 podStartE2EDuration="1.859133573s" podCreationTimestamp="2024-12-13 01:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:09.852812563 +0000 UTC m=+1.408281125" watchObservedRunningTime="2024-12-13 01:17:09.859133573 +0000 UTC m=+1.414602115" Dec 13 01:17:09.865793 kubelet[2609]: I1213 01:17:09.865745 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.865696765 podStartE2EDuration="1.865696765s" podCreationTimestamp="2024-12-13 01:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:09.859400522 +0000 UTC m=+1.414869064" watchObservedRunningTime="2024-12-13 01:17:09.865696765 +0000 UTC m=+1.421165307" Dec 13 01:17:09.875240 kubelet[2609]: I1213 01:17:09.874412 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8743492430000002 podStartE2EDuration="1.874349243s" podCreationTimestamp="2024-12-13 01:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:09.86590413 +0000 UTC m=+1.421372672" watchObservedRunningTime="2024-12-13 01:17:09.874349243 +0000 UTC m=+1.429817785" Dec 13 01:17:09.921994 kubelet[2609]: I1213 01:17:09.921936 2609 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:17:10.271541 sudo[1661]: pam_unix(sudo:session): session closed for user root Dec 13 01:17:10.273547 sshd[1658]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:10.277736 systemd[1]: sshd@8-10.0.0.143:22-10.0.0.1:50428.service: Deactivated successfully. Dec 13 01:17:10.279440 systemd[1]: session-9.scope: Deactivated successfully. 
Dec 13 01:17:10.279620 systemd[1]: session-9.scope: Consumed 4.488s CPU time, 191.0M memory peak, 0B memory swap peak. Dec 13 01:17:10.280023 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:17:10.280931 systemd-logind[1449]: Removed session 9. Dec 13 01:17:10.837027 kubelet[2609]: E1213 01:17:10.836987 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:13.175737 kubelet[2609]: E1213 01:17:13.175697 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:14.240888 kubelet[2609]: E1213 01:17:14.240850 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:14.843614 kubelet[2609]: E1213 01:17:14.843571 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:18.206391 kubelet[2609]: E1213 01:17:18.206319 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:18.849484 kubelet[2609]: E1213 01:17:18.849460 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:19.504749 update_engine[1451]: I20241213 01:17:19.504678 1451 update_attempter.cc:509] Updating boot flags... Dec 13 01:17:19.567249 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2694) Dec 13 01:17:19.599211 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2697) Dec 13 01:17:19.633526 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2697) Dec 13 01:17:22.893855 kubelet[2609]: I1213 01:17:22.893822 2609 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:17:22.894278 containerd[1473]: time="2024-12-13T01:17:22.894155687Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
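
The recurring dns.go:153 warnings reflect the classic glibc resolver limit of three nameserver entries: the node's resolv.conf evidently lists more than three, so the kubelet truncates the applied line to "1.1.1.1 1.0.0.1 8.8.8.8" and reports the rest as omitted. A minimal sketch of that truncation; the fourth server in the example is invented.

    # Sketch of the nameserver truncation behind the dns.go:153 warnings above.
    # glibc resolvers honor at most 3 nameservers; extra entries are dropped.
    # The fourth entry below is an invented example.
    MAX_NAMESERVERS = 3

    def applied_nameservers(entries):
        kept, omitted = entries[:MAX_NAMESERVERS], entries[MAX_NAMESERVERS:]
        if omitted:
            print(f"warning: nameserver limits exceeded, omitting {omitted}")
        return " ".join(kept)

    print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))
    # -> 1.1.1.1 1.0.0.1 8.8.8.8
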
Dec 13 01:17:22.894532 kubelet[2609]: I1213 01:17:22.894371 2609 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:17:23.180697 kubelet[2609]: E1213 01:17:23.180670 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:23.674070 kubelet[2609]: I1213 01:17:23.674021 2609 topology_manager.go:215] "Topology Admit Handler" podUID="861229fd-7522-4fb6-b500-4b11b21c7bbc" podNamespace="kube-system" podName="kube-proxy-xx6cg" Dec 13 01:17:23.679451 kubelet[2609]: I1213 01:17:23.679425 2609 topology_manager.go:215] "Topology Admit Handler" podUID="cd58a079-81e1-405b-a4eb-cd5045926aa5" podNamespace="kube-system" podName="cilium-khhz4" Dec 13 01:17:23.684429 systemd[1]: Created slice kubepods-besteffort-pod861229fd_7522_4fb6_b500_4b11b21c7bbc.slice - libcontainer container kubepods-besteffort-pod861229fd_7522_4fb6_b500_4b11b21c7bbc.slice. Dec 13 01:17:23.695968 systemd[1]: Created slice kubepods-burstable-podcd58a079_81e1_405b_a4eb_cd5045926aa5.slice - libcontainer container kubepods-burstable-podcd58a079_81e1_405b_a4eb_cd5045926aa5.slice. Dec 13 01:17:23.717448 kubelet[2609]: I1213 01:17:23.717413 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-etc-cni-netd\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717448 kubelet[2609]: I1213 01:17:23.717449 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cni-path\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717448 kubelet[2609]: I1213 01:17:23.717468 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-hostproc\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717639 kubelet[2609]: I1213 01:17:23.717487 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-cgroup\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717639 kubelet[2609]: I1213 01:17:23.717505 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/861229fd-7522-4fb6-b500-4b11b21c7bbc-kube-proxy\") pod \"kube-proxy-xx6cg\" (UID: \"861229fd-7522-4fb6-b500-4b11b21c7bbc\") " pod="kube-system/kube-proxy-xx6cg" Dec 13 01:17:23.717639 kubelet[2609]: I1213 01:17:23.717524 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtpvg\" (UniqueName: \"kubernetes.io/projected/cd58a079-81e1-405b-a4eb-cd5045926aa5-kube-api-access-rtpvg\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4"
Dec 13 01:17:23.717639 kubelet[2609]: I1213 01:17:23.717543 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-config-path\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717639 kubelet[2609]: I1213 01:17:23.717561 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd58a079-81e1-405b-a4eb-cd5045926aa5-hubble-tls\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717755 kubelet[2609]: I1213 01:17:23.717579 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vldns\" (UniqueName: \"kubernetes.io/projected/861229fd-7522-4fb6-b500-4b11b21c7bbc-kube-api-access-vldns\") pod \"kube-proxy-xx6cg\" (UID: \"861229fd-7522-4fb6-b500-4b11b21c7bbc\") " pod="kube-system/kube-proxy-xx6cg" Dec 13 01:17:23.717755 kubelet[2609]: I1213 01:17:23.717598 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd58a079-81e1-405b-a4eb-cd5045926aa5-clustermesh-secrets\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717755 kubelet[2609]: I1213 01:17:23.717617 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-lib-modules\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717755 kubelet[2609]: I1213 01:17:23.717637 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-host-proc-sys-net\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717755 kubelet[2609]: I1213 01:17:23.717655 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-host-proc-sys-kernel\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717882 kubelet[2609]: I1213 01:17:23.717673 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/861229fd-7522-4fb6-b500-4b11b21c7bbc-xtables-lock\") pod \"kube-proxy-xx6cg\" (UID: \"861229fd-7522-4fb6-b500-4b11b21c7bbc\") " pod="kube-system/kube-proxy-xx6cg" Dec 13 01:17:23.717882 kubelet[2609]: I1213 01:17:23.717692 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/861229fd-7522-4fb6-b500-4b11b21c7bbc-lib-modules\") pod \"kube-proxy-xx6cg\" (UID: \"861229fd-7522-4fb6-b500-4b11b21c7bbc\") " pod="kube-system/kube-proxy-xx6cg"
Dec 13 01:17:23.717882 kubelet[2609]: I1213 01:17:23.717711 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-run\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717882 kubelet[2609]: I1213 01:17:23.717728 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-bpf-maps\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.717882 kubelet[2609]: I1213 01:17:23.717745 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-xtables-lock\") pod \"cilium-khhz4\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") " pod="kube-system/cilium-khhz4" Dec 13 01:17:23.826731 kubelet[2609]: E1213 01:17:23.826172 2609 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:17:23.826731 kubelet[2609]: E1213 01:17:23.826226 2609 projected.go:200] Error preparing data for projected volume kube-api-access-rtpvg for pod kube-system/cilium-khhz4: configmap "kube-root-ca.crt" not found Dec 13 01:17:23.826731 kubelet[2609]: E1213 01:17:23.826295 2609 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd58a079-81e1-405b-a4eb-cd5045926aa5-kube-api-access-rtpvg podName:cd58a079-81e1-405b-a4eb-cd5045926aa5 nodeName:}" failed. No retries permitted until 2024-12-13 01:17:24.326262295 +0000 UTC m=+15.881730827 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rtpvg" (UniqueName: "kubernetes.io/projected/cd58a079-81e1-405b-a4eb-cd5045926aa5-kube-api-access-rtpvg") pod "cilium-khhz4" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5") : configmap "kube-root-ca.crt" not found Dec 13 01:17:23.826731 kubelet[2609]: E1213 01:17:23.826506 2609 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:17:23.826731 kubelet[2609]: E1213 01:17:23.826519 2609 projected.go:200] Error preparing data for projected volume kube-api-access-vldns for pod kube-system/kube-proxy-xx6cg: configmap "kube-root-ca.crt" not found Dec 13 01:17:23.826731 kubelet[2609]: E1213 01:17:23.826680 2609 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/861229fd-7522-4fb6-b500-4b11b21c7bbc-kube-api-access-vldns podName:861229fd-7522-4fb6-b500-4b11b21c7bbc nodeName:}" failed. No retries permitted until 2024-12-13 01:17:24.326543856 +0000 UTC m=+15.882012398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vldns" (UniqueName: "kubernetes.io/projected/861229fd-7522-4fb6-b500-4b11b21c7bbc-kube-api-access-vldns") pod "kube-proxy-xx6cg" (UID: "861229fd-7522-4fb6-b500-4b11b21c7bbc") : configmap "kube-root-ca.crt" not found Dec 13 01:17:23.921047 kubelet[2609]: I1213 01:17:23.920599 2609 topology_manager.go:215] "Topology Admit Handler" podUID="3361d41b-c539-427b-a4aa-97dc5822a1c1" podNamespace="kube-system" podName="cilium-operator-5cc964979-qgg9k" Dec 13 01:17:23.930505 systemd[1]: Created slice kubepods-besteffort-pod3361d41b_c539_427b_a4aa_97dc5822a1c1.slice - libcontainer container kubepods-besteffort-pod3361d41b_c539_427b_a4aa_97dc5822a1c1.slice. 
Dec 13 01:17:24.020051 kubelet[2609]: I1213 01:17:24.019987 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3361d41b-c539-427b-a4aa-97dc5822a1c1-cilium-config-path\") pod \"cilium-operator-5cc964979-qgg9k\" (UID: \"3361d41b-c539-427b-a4aa-97dc5822a1c1\") " pod="kube-system/cilium-operator-5cc964979-qgg9k" Dec 13 01:17:24.020175 kubelet[2609]: I1213 01:17:24.020080 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr7gq\" (UniqueName: \"kubernetes.io/projected/3361d41b-c539-427b-a4aa-97dc5822a1c1-kube-api-access-mr7gq\") pod \"cilium-operator-5cc964979-qgg9k\" (UID: \"3361d41b-c539-427b-a4aa-97dc5822a1c1\") " pod="kube-system/cilium-operator-5cc964979-qgg9k" Dec 13 01:17:24.233786 kubelet[2609]: E1213 01:17:24.233701 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:24.234420 containerd[1473]: time="2024-12-13T01:17:24.234084092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qgg9k,Uid:3361d41b-c539-427b-a4aa-97dc5822a1c1,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:24.271127 containerd[1473]: time="2024-12-13T01:17:24.271061681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:24.271793 containerd[1473]: time="2024-12-13T01:17:24.271196154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:24.271793 containerd[1473]: time="2024-12-13T01:17:24.271768273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:24.271970 containerd[1473]: time="2024-12-13T01:17:24.271868793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:24.300306 systemd[1]: Started cri-containerd-b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac.scope - libcontainer container b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac. 
Dec 13 01:17:24.336361 containerd[1473]: time="2024-12-13T01:17:24.336319385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qgg9k,Uid:3361d41b-c539-427b-a4aa-97dc5822a1c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac\"" Dec 13 01:17:24.336981 kubelet[2609]: E1213 01:17:24.336956 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:24.338125 containerd[1473]: time="2024-12-13T01:17:24.338098811Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:17:24.593829 kubelet[2609]: E1213 01:17:24.593723 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:24.594124 containerd[1473]: time="2024-12-13T01:17:24.594080905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xx6cg,Uid:861229fd-7522-4fb6-b500-4b11b21c7bbc,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:24.597997 kubelet[2609]: E1213 01:17:24.597979 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:24.598431 containerd[1473]: time="2024-12-13T01:17:24.598318527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khhz4,Uid:cd58a079-81e1-405b-a4eb-cd5045926aa5,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:24.625254 containerd[1473]: time="2024-12-13T01:17:24.623833719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:24.625254 containerd[1473]: time="2024-12-13T01:17:24.623893341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:24.625254 containerd[1473]: time="2024-12-13T01:17:24.623917217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:24.625254 containerd[1473]: time="2024-12-13T01:17:24.624068692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:24.626475 containerd[1473]: time="2024-12-13T01:17:24.626396332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:24.626475 containerd[1473]: time="2024-12-13T01:17:24.626447028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:24.626566 containerd[1473]: time="2024-12-13T01:17:24.626461024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:24.626566 containerd[1473]: time="2024-12-13T01:17:24.626542508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:24.646288 systemd[1]: Started cri-containerd-8813f2e9de1032d389e761fcf2807d6f1081de6a23180cdfa82c09ef3fc66322.scope - libcontainer container 8813f2e9de1032d389e761fcf2807d6f1081de6a23180cdfa82c09ef3fc66322. Dec 13 01:17:24.650298 systemd[1]: Started cri-containerd-7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0.scope - libcontainer container 7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0. Dec 13 01:17:24.670729 containerd[1473]: time="2024-12-13T01:17:24.670660363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xx6cg,Uid:861229fd-7522-4fb6-b500-4b11b21c7bbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8813f2e9de1032d389e761fcf2807d6f1081de6a23180cdfa82c09ef3fc66322\"" Dec 13 01:17:24.671279 kubelet[2609]: E1213 01:17:24.671248 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:24.674360 containerd[1473]: time="2024-12-13T01:17:24.674325926Z" level=info msg="CreateContainer within sandbox \"8813f2e9de1032d389e761fcf2807d6f1081de6a23180cdfa82c09ef3fc66322\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:17:24.676290 containerd[1473]: time="2024-12-13T01:17:24.676210360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khhz4,Uid:cd58a079-81e1-405b-a4eb-cd5045926aa5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\"" Dec 13 01:17:24.677001 kubelet[2609]: E1213 01:17:24.676977 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:24.695869 containerd[1473]: time="2024-12-13T01:17:24.695815474Z" level=info msg="CreateContainer within sandbox \"8813f2e9de1032d389e761fcf2807d6f1081de6a23180cdfa82c09ef3fc66322\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17134f3a37b7f7b10b19ddaad492973505908013a3aaac260c534647cad8344a\"" Dec 13 01:17:24.696510 containerd[1473]: time="2024-12-13T01:17:24.696458447Z" level=info msg="StartContainer for \"17134f3a37b7f7b10b19ddaad492973505908013a3aaac260c534647cad8344a\"" Dec 13 01:17:24.723309 systemd[1]: Started cri-containerd-17134f3a37b7f7b10b19ddaad492973505908013a3aaac260c534647cad8344a.scope - libcontainer container 17134f3a37b7f7b10b19ddaad492973505908013a3aaac260c534647cad8344a. 
Dec 13 01:17:24.751251 containerd[1473]: time="2024-12-13T01:17:24.751207211Z" level=info msg="StartContainer for \"17134f3a37b7f7b10b19ddaad492973505908013a3aaac260c534647cad8344a\" returns successfully" Dec 13 01:17:24.859344 kubelet[2609]: E1213 01:17:24.859251 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:24.866820 kubelet[2609]: I1213 01:17:24.866767 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xx6cg" podStartSLOduration=1.8667235720000002 podStartE2EDuration="1.866723572s" podCreationTimestamp="2024-12-13 01:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:24.866702072 +0000 UTC m=+16.422170614" watchObservedRunningTime="2024-12-13 01:17:24.866723572 +0000 UTC m=+16.422192114" Dec 13 01:17:25.570002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657096303.mount: Deactivated successfully. Dec 13 01:17:27.311554 containerd[1473]: time="2024-12-13T01:17:27.311507396Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:27.312233 containerd[1473]: time="2024-12-13T01:17:27.312195112Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907153" Dec 13 01:17:27.313333 containerd[1473]: time="2024-12-13T01:17:27.313307738Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:27.314595 containerd[1473]: time="2024-12-13T01:17:27.314556821Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.976427614s" Dec 13 01:17:27.314635 containerd[1473]: time="2024-12-13T01:17:27.314596417Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:17:27.315110 containerd[1473]: time="2024-12-13T01:17:27.315083855Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:17:27.316163 containerd[1473]: time="2024-12-13T01:17:27.316137440Z" level=info msg="CreateContainer within sandbox \"b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:17:27.330486 containerd[1473]: time="2024-12-13T01:17:27.330443654Z" level=info msg="CreateContainer within sandbox \"b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\"" 
Dec 13 01:17:27.330998 containerd[1473]: time="2024-12-13T01:17:27.330958644Z" level=info msg="StartContainer for \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\"" Dec 13 01:17:27.365331 systemd[1]: Started cri-containerd-6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75.scope - libcontainer container 6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75. Dec 13 01:17:27.390655 containerd[1473]: time="2024-12-13T01:17:27.390608962Z" level=info msg="StartContainer for \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\" returns successfully" Dec 13 01:17:27.880334 kubelet[2609]: E1213 01:17:27.880303 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:27.897559 kubelet[2609]: I1213 01:17:27.897507 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-qgg9k" podStartSLOduration=1.920243892 podStartE2EDuration="4.897465031s" podCreationTimestamp="2024-12-13 01:17:23 +0000 UTC" firstStartedPulling="2024-12-13 01:17:24.337684308 +0000 UTC m=+15.893152851" lastFinishedPulling="2024-12-13 01:17:27.314905448 +0000 UTC m=+18.870373990" observedRunningTime="2024-12-13 01:17:27.897378258 +0000 UTC m=+19.452846800" watchObservedRunningTime="2024-12-13 01:17:27.897465031 +0000 UTC m=+19.452933563" Dec 13 01:17:28.878224 kubelet[2609]: E1213 01:17:28.878194 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:34.160598 systemd[1]: Started sshd@9-10.0.0.143:22-10.0.0.1:35548.service - OpenSSH per-connection server daemon (10.0.0.1:35548). Dec 13 01:17:34.195240 sshd[3043]: Accepted publickey for core from 10.0.0.1 port 35548 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:34.196747 sshd[3043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:34.201626 systemd-logind[1449]: New session 10 of user core. Dec 13 01:17:34.217307 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:17:34.348587 sshd[3043]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:34.353112 systemd[1]: sshd@9-10.0.0.143:22-10.0.0.1:35548.service: Deactivated successfully. Dec 13 01:17:34.355886 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:17:34.356842 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:17:34.357895 systemd-logind[1449]: Removed session 10. Dec 13 01:17:39.237964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983136898.mount: Deactivated successfully. Dec 13 01:17:39.361260 systemd[1]: Started sshd@10-10.0.0.143:22-10.0.0.1:52556.service - OpenSSH per-connection server daemon (10.0.0.1:52556). Dec 13 01:17:39.609128 sshd[3064]: Accepted publickey for core from 10.0.0.1 port 52556 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:39.610788 sshd[3064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:39.614832 systemd-logind[1449]: New session 11 of user core. Dec 13 01:17:39.627322 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 13 01:17:39.748801 sshd[3064]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:39.753209 systemd[1]: sshd@10-10.0.0.143:22-10.0.0.1:52556.service: Deactivated successfully. Dec 13 01:17:39.755270 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:17:39.755884 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:17:39.756733 systemd-logind[1449]: Removed session 11. Dec 13 01:17:41.773520 containerd[1473]: time="2024-12-13T01:17:41.773464960Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:41.774210 containerd[1473]: time="2024-12-13T01:17:41.774117906Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734747" Dec 13 01:17:41.775475 containerd[1473]: time="2024-12-13T01:17:41.775447214Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:17:41.776903 containerd[1473]: time="2024-12-13T01:17:41.776866611Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.461755065s" Dec 13 01:17:41.776903 containerd[1473]: time="2024-12-13T01:17:41.776894273Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:17:41.780581 containerd[1473]: time="2024-12-13T01:17:41.780544832Z" level=info msg="CreateContainer within sandbox \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:17:41.794415 containerd[1473]: time="2024-12-13T01:17:41.794364929Z" level=info msg="CreateContainer within sandbox \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c\"" Dec 13 01:17:41.795057 containerd[1473]: time="2024-12-13T01:17:41.794856252Z" level=info msg="StartContainer for \"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c\"" Dec 13 01:17:41.827521 systemd[1]: Started cri-containerd-4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c.scope - libcontainer container 4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c. Dec 13 01:17:41.855679 containerd[1473]: time="2024-12-13T01:17:41.855634788Z" level=info msg="StartContainer for \"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c\" returns successfully" Dec 13 01:17:41.867342 systemd[1]: cri-containerd-4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c.scope: Deactivated successfully. 
Dec 13 01:17:41.899290 kubelet[2609]: E1213 01:17:41.899259 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:42.725231 containerd[1473]: time="2024-12-13T01:17:42.725104035Z" level=info msg="shim disconnected" id=4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c namespace=k8s.io Dec 13 01:17:42.725231 containerd[1473]: time="2024-12-13T01:17:42.725169177Z" level=warning msg="cleaning up after shim disconnected" id=4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c namespace=k8s.io Dec 13 01:17:42.725231 containerd[1473]: time="2024-12-13T01:17:42.725216225Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:17:42.791056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c-rootfs.mount: Deactivated successfully. Dec 13 01:17:42.902298 kubelet[2609]: E1213 01:17:42.902266 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:42.905020 containerd[1473]: time="2024-12-13T01:17:42.904982097Z" level=info msg="CreateContainer within sandbox \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:17:42.920573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3719636511.mount: Deactivated successfully. Dec 13 01:17:42.922273 containerd[1473]: time="2024-12-13T01:17:42.922234990Z" level=info msg="CreateContainer within sandbox \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd\"" Dec 13 01:17:42.923049 containerd[1473]: time="2024-12-13T01:17:42.922978357Z" level=info msg="StartContainer for \"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd\"" Dec 13 01:17:42.959421 systemd[1]: Started cri-containerd-02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd.scope - libcontainer container 02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd. Dec 13 01:17:42.984676 containerd[1473]: time="2024-12-13T01:17:42.984564317Z" level=info msg="StartContainer for \"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd\" returns successfully" Dec 13 01:17:42.995696 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:17:42.995935 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:17:42.996003 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:17:43.004481 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:17:43.004745 systemd[1]: cri-containerd-02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd.scope: Deactivated successfully. Dec 13 01:17:43.021243 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 01:17:43.025631 containerd[1473]: time="2024-12-13T01:17:43.025574066Z" level=info msg="shim disconnected" id=02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd namespace=k8s.io Dec 13 01:17:43.025631 containerd[1473]: time="2024-12-13T01:17:43.025623760Z" level=warning msg="cleaning up after shim disconnected" id=02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd namespace=k8s.io Dec 13 01:17:43.025631 containerd[1473]: time="2024-12-13T01:17:43.025634269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:17:43.791307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd-rootfs.mount: Deactivated successfully. Dec 13 01:17:43.904933 kubelet[2609]: E1213 01:17:43.904886 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:43.906640 containerd[1473]: time="2024-12-13T01:17:43.906602694Z" level=info msg="CreateContainer within sandbox \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:17:44.186912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount661915737.mount: Deactivated successfully. Dec 13 01:17:44.323441 containerd[1473]: time="2024-12-13T01:17:44.323394693Z" level=info msg="CreateContainer within sandbox \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0\"" Dec 13 01:17:44.323825 containerd[1473]: time="2024-12-13T01:17:44.323778574Z" level=info msg="StartContainer for \"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0\"" Dec 13 01:17:44.355317 systemd[1]: Started cri-containerd-464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0.scope - libcontainer container 464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0. Dec 13 01:17:44.382238 containerd[1473]: time="2024-12-13T01:17:44.382134423Z" level=info msg="StartContainer for \"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0\" returns successfully" Dec 13 01:17:44.383199 systemd[1]: cri-containerd-464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0.scope: Deactivated successfully. Dec 13 01:17:44.410297 containerd[1473]: time="2024-12-13T01:17:44.410216568Z" level=info msg="shim disconnected" id=464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0 namespace=k8s.io Dec 13 01:17:44.410297 containerd[1473]: time="2024-12-13T01:17:44.410278014Z" level=warning msg="cleaning up after shim disconnected" id=464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0 namespace=k8s.io Dec 13 01:17:44.410297 containerd[1473]: time="2024-12-13T01:17:44.410289946Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:17:44.760000 systemd[1]: Started sshd@11-10.0.0.143:22-10.0.0.1:52564.service - OpenSSH per-connection server daemon (10.0.0.1:52564). Dec 13 01:17:44.791524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0-rootfs.mount: Deactivated successfully. 
Dec 13 01:17:44.800783 sshd[3285]: Accepted publickey for core from 10.0.0.1 port 52564 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:44.802599 sshd[3285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:44.807234 systemd-logind[1449]: New session 12 of user core. Dec 13 01:17:44.818428 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:17:44.909679 kubelet[2609]: E1213 01:17:44.909646 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:44.912051 containerd[1473]: time="2024-12-13T01:17:44.912014825Z" level=info msg="CreateContainer within sandbox \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:17:44.941075 sshd[3285]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:44.945562 systemd[1]: sshd@11-10.0.0.143:22-10.0.0.1:52564.service: Deactivated successfully. Dec 13 01:17:44.947627 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:17:44.948565 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:17:44.949528 systemd-logind[1449]: Removed session 12. Dec 13 01:17:44.980393 containerd[1473]: time="2024-12-13T01:17:44.980333256Z" level=info msg="CreateContainer within sandbox \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b\"" Dec 13 01:17:44.981002 containerd[1473]: time="2024-12-13T01:17:44.980947470Z" level=info msg="StartContainer for \"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b\"" Dec 13 01:17:45.010331 systemd[1]: Started cri-containerd-58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b.scope - libcontainer container 58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b. Dec 13 01:17:45.035810 systemd[1]: cri-containerd-58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b.scope: Deactivated successfully. Dec 13 01:17:45.039599 containerd[1473]: time="2024-12-13T01:17:45.039556436Z" level=info msg="StartContainer for \"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b\" returns successfully" Dec 13 01:17:45.066284 containerd[1473]: time="2024-12-13T01:17:45.066166342Z" level=info msg="shim disconnected" id=58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b namespace=k8s.io Dec 13 01:17:45.066284 containerd[1473]: time="2024-12-13T01:17:45.066252084Z" level=warning msg="cleaning up after shim disconnected" id=58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b namespace=k8s.io Dec 13 01:17:45.066284 containerd[1473]: time="2024-12-13T01:17:45.066266240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:17:45.791831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b-rootfs.mount: Deactivated successfully. 
Dec 13 01:17:45.912981 kubelet[2609]: E1213 01:17:45.912947 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:45.915010 containerd[1473]: time="2024-12-13T01:17:45.914973161Z" level=info msg="CreateContainer within sandbox \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:17:45.932286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1261326997.mount: Deactivated successfully. Dec 13 01:17:45.933369 containerd[1473]: time="2024-12-13T01:17:45.933313978Z" level=info msg="CreateContainer within sandbox \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\"" Dec 13 01:17:45.933837 containerd[1473]: time="2024-12-13T01:17:45.933806112Z" level=info msg="StartContainer for \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\"" Dec 13 01:17:45.966336 systemd[1]: Started cri-containerd-18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2.scope - libcontainer container 18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2. Dec 13 01:17:45.996274 containerd[1473]: time="2024-12-13T01:17:45.996231202Z" level=info msg="StartContainer for \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\" returns successfully" Dec 13 01:17:46.107244 kubelet[2609]: I1213 01:17:46.105889 2609 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:17:46.123281 kubelet[2609]: I1213 01:17:46.123243 2609 topology_manager.go:215] "Topology Admit Handler" podUID="f8106feb-4377-4c97-9b9f-ba92737f1a17" podNamespace="kube-system" podName="coredns-76f75df574-w7m7l" Dec 13 01:17:46.125328 kubelet[2609]: I1213 01:17:46.124986 2609 topology_manager.go:215] "Topology Admit Handler" podUID="016f1be5-4975-4b25-a6f4-34ff36ac4a71" podNamespace="kube-system" podName="coredns-76f75df574-cntjh" Dec 13 01:17:46.132723 systemd[1]: Created slice kubepods-burstable-podf8106feb_4377_4c97_9b9f_ba92737f1a17.slice - libcontainer container kubepods-burstable-podf8106feb_4377_4c97_9b9f_ba92737f1a17.slice. Dec 13 01:17:46.140210 systemd[1]: Created slice kubepods-burstable-pod016f1be5_4975_4b25_a6f4_34ff36ac4a71.slice - libcontainer container kubepods-burstable-pod016f1be5_4975_4b25_a6f4_34ff36ac4a71.slice. 
Dec 13 01:17:46.175236 kubelet[2609]: I1213 01:17:46.175197 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8106feb-4377-4c97-9b9f-ba92737f1a17-config-volume\") pod \"coredns-76f75df574-w7m7l\" (UID: \"f8106feb-4377-4c97-9b9f-ba92737f1a17\") " pod="kube-system/coredns-76f75df574-w7m7l" Dec 13 01:17:46.175236 kubelet[2609]: I1213 01:17:46.175236 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqql9\" (UniqueName: \"kubernetes.io/projected/016f1be5-4975-4b25-a6f4-34ff36ac4a71-kube-api-access-rqql9\") pod \"coredns-76f75df574-cntjh\" (UID: \"016f1be5-4975-4b25-a6f4-34ff36ac4a71\") " pod="kube-system/coredns-76f75df574-cntjh" Dec 13 01:17:46.175236 kubelet[2609]: I1213 01:17:46.175257 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6bw6\" (UniqueName: \"kubernetes.io/projected/f8106feb-4377-4c97-9b9f-ba92737f1a17-kube-api-access-t6bw6\") pod \"coredns-76f75df574-w7m7l\" (UID: \"f8106feb-4377-4c97-9b9f-ba92737f1a17\") " pod="kube-system/coredns-76f75df574-w7m7l" Dec 13 01:17:46.175429 kubelet[2609]: I1213 01:17:46.175277 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/016f1be5-4975-4b25-a6f4-34ff36ac4a71-config-volume\") pod \"coredns-76f75df574-cntjh\" (UID: \"016f1be5-4975-4b25-a6f4-34ff36ac4a71\") " pod="kube-system/coredns-76f75df574-cntjh" Dec 13 01:17:46.436978 kubelet[2609]: E1213 01:17:46.436947 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:46.437632 containerd[1473]: time="2024-12-13T01:17:46.437591484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-w7m7l,Uid:f8106feb-4377-4c97-9b9f-ba92737f1a17,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:46.444088 kubelet[2609]: E1213 01:17:46.444066 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:46.444570 containerd[1473]: time="2024-12-13T01:17:46.444519724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cntjh,Uid:016f1be5-4975-4b25-a6f4-34ff36ac4a71,Namespace:kube-system,Attempt:0,}" Dec 13 01:17:46.795529 systemd[1]: run-containerd-runc-k8s.io-18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2-runc.ztEtni.mount: Deactivated successfully. 
Dec 13 01:17:46.917270 kubelet[2609]: E1213 01:17:46.917204 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:47.919116 kubelet[2609]: E1213 01:17:47.919074 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:48.167797 systemd-networkd[1403]: cilium_host: Link UP Dec 13 01:17:48.168748 systemd-networkd[1403]: cilium_net: Link UP Dec 13 01:17:48.168980 systemd-networkd[1403]: cilium_net: Gained carrier Dec 13 01:17:48.170295 systemd-networkd[1403]: cilium_host: Gained carrier Dec 13 01:17:48.265576 systemd-networkd[1403]: cilium_vxlan: Link UP Dec 13 01:17:48.265588 systemd-networkd[1403]: cilium_vxlan: Gained carrier Dec 13 01:17:48.339315 systemd-networkd[1403]: cilium_host: Gained IPv6LL Dec 13 01:17:48.488202 kernel: NET: Registered PF_ALG protocol family Dec 13 01:17:48.490371 systemd-networkd[1403]: cilium_net: Gained IPv6LL Dec 13 01:17:48.920989 kubelet[2609]: E1213 01:17:48.920954 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:49.133521 systemd-networkd[1403]: lxc_health: Link UP Dec 13 01:17:49.144493 systemd-networkd[1403]: lxc_health: Gained carrier Dec 13 01:17:49.526318 systemd-networkd[1403]: lxc8f053389d7dc: Link UP Dec 13 01:17:49.526534 systemd-networkd[1403]: lxc9f10e24a6d4e: Link UP Dec 13 01:17:49.536195 kernel: eth0: renamed from tmpa5c22 Dec 13 01:17:49.552595 systemd-networkd[1403]: lxc8f053389d7dc: Gained carrier Dec 13 01:17:49.554313 kernel: eth0: renamed from tmp5f14d Dec 13 01:17:49.564281 systemd-networkd[1403]: lxc9f10e24a6d4e: Gained carrier Dec 13 01:17:49.713324 systemd-networkd[1403]: cilium_vxlan: Gained IPv6LL Dec 13 01:17:49.955301 systemd[1]: Started sshd@12-10.0.0.143:22-10.0.0.1:55024.service - OpenSSH per-connection server daemon (10.0.0.1:55024). Dec 13 01:17:49.997804 sshd[3867]: Accepted publickey for core from 10.0.0.1 port 55024 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:49.999617 sshd[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:50.003825 systemd-logind[1449]: New session 13 of user core. Dec 13 01:17:50.010301 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:17:50.124967 sshd[3867]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:50.132979 systemd[1]: sshd@12-10.0.0.143:22-10.0.0.1:55024.service: Deactivated successfully. Dec 13 01:17:50.134862 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:17:50.135623 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:17:50.143519 systemd[1]: Started sshd@13-10.0.0.143:22-10.0.0.1:55032.service - OpenSSH per-connection server daemon (10.0.0.1:55032). Dec 13 01:17:50.144529 systemd-logind[1449]: Removed session 13. Dec 13 01:17:50.177444 sshd[3883]: Accepted publickey for core from 10.0.0.1 port 55032 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:50.178804 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:50.183265 systemd-logind[1449]: New session 14 of user core. 
Dec 13 01:17:50.190306 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:17:50.335451 sshd[3883]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:50.349623 systemd[1]: sshd@13-10.0.0.143:22-10.0.0.1:55032.service: Deactivated successfully. Dec 13 01:17:50.351616 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:17:50.354637 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:17:50.362473 systemd[1]: Started sshd@14-10.0.0.143:22-10.0.0.1:55038.service - OpenSSH per-connection server daemon (10.0.0.1:55038). Dec 13 01:17:50.364023 systemd-logind[1449]: Removed session 14. Dec 13 01:17:50.395573 sshd[3897]: Accepted publickey for core from 10.0.0.1 port 55038 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:50.397350 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:50.401596 systemd-logind[1449]: New session 15 of user core. Dec 13 01:17:50.414304 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:17:50.530797 sshd[3897]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:50.535659 systemd[1]: sshd@14-10.0.0.143:22-10.0.0.1:55038.service: Deactivated successfully. Dec 13 01:17:50.537750 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:17:50.539205 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:17:50.540501 systemd-logind[1449]: Removed session 15. Dec 13 01:17:50.600090 kubelet[2609]: E1213 01:17:50.599855 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:50.612235 kubelet[2609]: I1213 01:17:50.612202 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-khhz4" podStartSLOduration=10.512469244 podStartE2EDuration="27.612160584s" podCreationTimestamp="2024-12-13 01:17:23 +0000 UTC" firstStartedPulling="2024-12-13 01:17:24.677473822 +0000 UTC m=+16.232942364" lastFinishedPulling="2024-12-13 01:17:41.777165162 +0000 UTC m=+33.332633704" observedRunningTime="2024-12-13 01:17:46.928157198 +0000 UTC m=+38.483625740" watchObservedRunningTime="2024-12-13 01:17:50.612160584 +0000 UTC m=+42.167629126" Dec 13 01:17:50.804281 systemd-networkd[1403]: lxc_health: Gained IPv6LL Dec 13 01:17:50.865320 systemd-networkd[1403]: lxc9f10e24a6d4e: Gained IPv6LL Dec 13 01:17:50.924461 kubelet[2609]: E1213 01:17:50.924416 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:51.569365 systemd-networkd[1403]: lxc8f053389d7dc: Gained IPv6LL Dec 13 01:17:51.926251 kubelet[2609]: E1213 01:17:51.926212 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:52.836973 containerd[1473]: time="2024-12-13T01:17:52.836863882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:52.836973 containerd[1473]: time="2024-12-13T01:17:52.836944533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:52.836973 containerd[1473]: time="2024-12-13T01:17:52.836964110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:52.837624 containerd[1473]: time="2024-12-13T01:17:52.837065691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:52.837624 containerd[1473]: time="2024-12-13T01:17:52.837432289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:17:52.837624 containerd[1473]: time="2024-12-13T01:17:52.837504344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:17:52.838646 containerd[1473]: time="2024-12-13T01:17:52.837536815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:52.838851 containerd[1473]: time="2024-12-13T01:17:52.838770230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:17:52.865328 systemd[1]: Started cri-containerd-5f14d4c024099b0d785974abeef0e55253908a8abb0a2ff46a563957d7e2dacd.scope - libcontainer container 5f14d4c024099b0d785974abeef0e55253908a8abb0a2ff46a563957d7e2dacd. Dec 13 01:17:52.866954 systemd[1]: Started cri-containerd-a5c22730fc07c2034be5c097a1d6e14a07ccd01b717075004ef0d813da660a89.scope - libcontainer container a5c22730fc07c2034be5c097a1d6e14a07ccd01b717075004ef0d813da660a89. Dec 13 01:17:52.878839 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:17:52.880002 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:17:52.906169 containerd[1473]: time="2024-12-13T01:17:52.906111938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cntjh,Uid:016f1be5-4975-4b25-a6f4-34ff36ac4a71,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5c22730fc07c2034be5c097a1d6e14a07ccd01b717075004ef0d813da660a89\"" Dec 13 01:17:52.908162 kubelet[2609]: E1213 01:17:52.908092 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:52.910794 containerd[1473]: time="2024-12-13T01:17:52.910764681Z" level=info msg="CreateContainer within sandbox \"a5c22730fc07c2034be5c097a1d6e14a07ccd01b717075004ef0d813da660a89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:17:52.912660 containerd[1473]: time="2024-12-13T01:17:52.912626376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-w7m7l,Uid:f8106feb-4377-4c97-9b9f-ba92737f1a17,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f14d4c024099b0d785974abeef0e55253908a8abb0a2ff46a563957d7e2dacd\"" Dec 13 01:17:52.913203 kubelet[2609]: E1213 01:17:52.913167 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:17:52.915914 containerd[1473]: time="2024-12-13T01:17:52.915891234Z" level=info msg="CreateContainer within sandbox \"5f14d4c024099b0d785974abeef0e55253908a8abb0a2ff46a563957d7e2dacd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:17:52.943705 containerd[1473]: time="2024-12-13T01:17:52.943663605Z" level=info msg="CreateContainer within sandbox \"5f14d4c024099b0d785974abeef0e55253908a8abb0a2ff46a563957d7e2dacd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"474aeaf9899672e3479ae15beffa1f7c760e5385f29543c302705d44465f7fc0\"" Dec 13 01:17:52.944723 containerd[1473]: time="2024-12-13T01:17:52.944002612Z" level=info msg="StartContainer for \"474aeaf9899672e3479ae15beffa1f7c760e5385f29543c302705d44465f7fc0\"" Dec 13 01:17:52.945360 containerd[1473]: time="2024-12-13T01:17:52.945339571Z" level=info msg="CreateContainer within sandbox \"a5c22730fc07c2034be5c097a1d6e14a07ccd01b717075004ef0d813da660a89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4371ca7247a3d2fb1014b239f829c6874f7ee2809cc48e31e9de35b8471421e\"" Dec 13 01:17:52.946318 containerd[1473]: time="2024-12-13T01:17:52.945676084Z" level=info msg="StartContainer for \"e4371ca7247a3d2fb1014b239f829c6874f7ee2809cc48e31e9de35b8471421e\"" Dec 13 01:17:52.968343 systemd[1]: Started cri-containerd-474aeaf9899672e3479ae15beffa1f7c760e5385f29543c302705d44465f7fc0.scope - libcontainer container 474aeaf9899672e3479ae15beffa1f7c760e5385f29543c302705d44465f7fc0. Dec 13 01:17:52.971202 systemd[1]: Started cri-containerd-e4371ca7247a3d2fb1014b239f829c6874f7ee2809cc48e31e9de35b8471421e.scope - libcontainer container e4371ca7247a3d2fb1014b239f829c6874f7ee2809cc48e31e9de35b8471421e. Dec 13 01:17:52.997555 containerd[1473]: time="2024-12-13T01:17:52.997059976Z" level=info msg="StartContainer for \"474aeaf9899672e3479ae15beffa1f7c760e5385f29543c302705d44465f7fc0\" returns successfully" Dec 13 01:17:53.001447 containerd[1473]: time="2024-12-13T01:17:53.001408949Z" level=info msg="StartContainer for \"e4371ca7247a3d2fb1014b239f829c6874f7ee2809cc48e31e9de35b8471421e\" returns successfully" Dec 13 01:17:53.843344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3810352830.mount: Deactivated successfully. 
Dec 13 01:17:53.934712 kubelet[2609]: E1213 01:17:53.934448 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:53.936564 kubelet[2609]: E1213 01:17:53.936431 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:53.953027 kubelet[2609]: I1213 01:17:53.952933 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-w7m7l" podStartSLOduration=30.952878529 podStartE2EDuration="30.952878529s" podCreationTimestamp="2024-12-13 01:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:53.952437652 +0000 UTC m=+45.507906194" watchObservedRunningTime="2024-12-13 01:17:53.952878529 +0000 UTC m=+45.508347071" Dec 13 01:17:53.954229 kubelet[2609]: I1213 01:17:53.953115 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-cntjh" podStartSLOduration=30.952990018 podStartE2EDuration="30.952990018s" podCreationTimestamp="2024-12-13 01:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:53.945008036 +0000 UTC m=+45.500476578" watchObservedRunningTime="2024-12-13 01:17:53.952990018 +0000 UTC m=+45.508458560" Dec 13 01:17:54.937856 kubelet[2609]: E1213 01:17:54.937820 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:54.938291 kubelet[2609]: E1213 01:17:54.937921 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:55.543498 systemd[1]: Started sshd@15-10.0.0.143:22-10.0.0.1:55040.service - OpenSSH per-connection server daemon (10.0.0.1:55040). Dec 13 01:17:55.585469 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 55040 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:17:55.587247 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:55.591346 systemd-logind[1449]: New session 16 of user core. Dec 13 01:17:55.602317 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:17:55.713149 sshd[4088]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:55.717326 systemd[1]: sshd@15-10.0.0.143:22-10.0.0.1:55040.service: Deactivated successfully. Dec 13 01:17:55.719464 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:17:55.720066 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:17:55.720904 systemd-logind[1449]: Removed session 16. 
Dec 13 01:17:55.940142 kubelet[2609]: E1213 01:17:55.940107 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:17:55.940549 kubelet[2609]: E1213 01:17:55.940232 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:18:00.723863 systemd[1]: Started sshd@16-10.0.0.143:22-10.0.0.1:50486.service - OpenSSH per-connection server daemon (10.0.0.1:50486). Dec 13 01:18:00.761341 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 50486 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:00.762746 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:00.766477 systemd-logind[1449]: New session 17 of user core. Dec 13 01:18:00.786308 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:18:00.888959 sshd[4103]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:00.901986 systemd[1]: sshd@16-10.0.0.143:22-10.0.0.1:50486.service: Deactivated successfully. Dec 13 01:18:00.903787 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:18:00.905389 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:18:00.911512 systemd[1]: Started sshd@17-10.0.0.143:22-10.0.0.1:50488.service - OpenSSH per-connection server daemon (10.0.0.1:50488). Dec 13 01:18:00.912496 systemd-logind[1449]: Removed session 17. Dec 13 01:18:00.944386 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 50488 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:00.945894 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:00.949714 systemd-logind[1449]: New session 18 of user core. Dec 13 01:18:00.956304 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:18:01.134440 sshd[4117]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:01.146025 systemd[1]: sshd@17-10.0.0.143:22-10.0.0.1:50488.service: Deactivated successfully. Dec 13 01:18:01.147965 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:18:01.149685 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:18:01.151026 systemd[1]: Started sshd@18-10.0.0.143:22-10.0.0.1:50504.service - OpenSSH per-connection server daemon (10.0.0.1:50504). Dec 13 01:18:01.151979 systemd-logind[1449]: Removed session 18. Dec 13 01:18:01.192506 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 50504 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:18:01.193921 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:01.197819 systemd-logind[1449]: New session 19 of user core. Dec 13 01:18:01.207300 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:18:02.518250 sshd[4129]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:02.528439 systemd[1]: sshd@18-10.0.0.143:22-10.0.0.1:50504.service: Deactivated successfully. Dec 13 01:18:02.530918 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:18:02.533623 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. 
Dec 13 01:18:02.539602 systemd[1]: Started sshd@19-10.0.0.143:22-10.0.0.1:50506.service - OpenSSH per-connection server daemon (10.0.0.1:50506).
Dec 13 01:18:02.540859 systemd-logind[1449]: Removed session 19.
Dec 13 01:18:02.572899 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 50506 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:02.574484 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:02.578458 systemd-logind[1449]: New session 20 of user core.
Dec 13 01:18:02.585303 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:18:02.804724 sshd[4153]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:02.813304 systemd[1]: sshd@19-10.0.0.143:22-10.0.0.1:50506.service: Deactivated successfully.
Dec 13 01:18:02.815232 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:18:02.816931 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:18:02.821427 systemd[1]: Started sshd@20-10.0.0.143:22-10.0.0.1:50520.service - OpenSSH per-connection server daemon (10.0.0.1:50520).
Dec 13 01:18:02.822364 systemd-logind[1449]: Removed session 20.
Dec 13 01:18:02.855998 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 50520 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:02.857562 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:02.861711 systemd-logind[1449]: New session 21 of user core.
Dec 13 01:18:02.875322 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:18:02.986979 sshd[4165]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:02.990956 systemd[1]: sshd@20-10.0.0.143:22-10.0.0.1:50520.service: Deactivated successfully.
Dec 13 01:18:02.992914 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:18:02.993465 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:18:02.994244 systemd-logind[1449]: Removed session 21.
Dec 13 01:18:08.002860 systemd[1]: Started sshd@21-10.0.0.143:22-10.0.0.1:35118.service - OpenSSH per-connection server daemon (10.0.0.1:35118).
Dec 13 01:18:08.041485 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 35118 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:08.043254 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:08.046773 systemd-logind[1449]: New session 22 of user core.
Dec 13 01:18:08.063312 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:18:08.168897 sshd[4179]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:08.173840 systemd[1]: sshd@21-10.0.0.143:22-10.0.0.1:35118.service: Deactivated successfully.
Dec 13 01:18:08.175640 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:18:08.176291 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:18:08.177030 systemd-logind[1449]: Removed session 22.
Dec 13 01:18:13.180678 systemd[1]: Started sshd@22-10.0.0.143:22-10.0.0.1:35120.service - OpenSSH per-connection server daemon (10.0.0.1:35120).
Dec 13 01:18:13.217172 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 35120 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:13.218548 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:13.221948 systemd-logind[1449]: New session 23 of user core.
Dec 13 01:18:13.229291 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:18:13.327586 sshd[4198]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:13.330949 systemd[1]: sshd@22-10.0.0.143:22-10.0.0.1:35120.service: Deactivated successfully.
Dec 13 01:18:13.332735 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:18:13.333391 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:18:13.334240 systemd-logind[1449]: Removed session 23.
Dec 13 01:18:18.341703 systemd[1]: Started sshd@23-10.0.0.143:22-10.0.0.1:35600.service - OpenSSH per-connection server daemon (10.0.0.1:35600).
Dec 13 01:18:18.378344 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 35600 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:18.379644 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:18.383249 systemd-logind[1449]: New session 24 of user core.
Dec 13 01:18:18.394339 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:18:18.492836 sshd[4212]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:18.496363 systemd[1]: sshd@23-10.0.0.143:22-10.0.0.1:35600.service: Deactivated successfully.
Dec 13 01:18:18.498408 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:18:18.499026 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:18:18.499766 systemd-logind[1449]: Removed session 24.
Dec 13 01:18:20.537585 kubelet[2609]: E1213 01:18:20.537538 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:22.537658 kubelet[2609]: E1213 01:18:22.537613 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:23.504715 systemd[1]: Started sshd@24-10.0.0.143:22-10.0.0.1:35612.service - OpenSSH per-connection server daemon (10.0.0.1:35612).
Dec 13 01:18:23.542229 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 35612 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:23.543662 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:23.547454 systemd-logind[1449]: New session 25 of user core.
Dec 13 01:18:23.556308 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:18:23.660909 sshd[4226]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:23.670395 systemd[1]: sshd@24-10.0.0.143:22-10.0.0.1:35612.service: Deactivated successfully.
Dec 13 01:18:23.672552 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:18:23.674358 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:18:23.680440 systemd[1]: Started sshd@25-10.0.0.143:22-10.0.0.1:35614.service - OpenSSH per-connection server daemon (10.0.0.1:35614).
Dec 13 01:18:23.681394 systemd-logind[1449]: Removed session 25.
Dec 13 01:18:23.712940 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 35614 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:23.714425 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:23.719070 systemd-logind[1449]: New session 26 of user core.
Dec 13 01:18:23.730293 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:18:25.050800 containerd[1473]: time="2024-12-13T01:18:25.050681488Z" level=info msg="StopContainer for \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\" with timeout 30 (s)"
Dec 13 01:18:25.051615 containerd[1473]: time="2024-12-13T01:18:25.051480969Z" level=info msg="Stop container \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\" with signal terminated"
Dec 13 01:18:25.078987 systemd[1]: cri-containerd-6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75.scope: Deactivated successfully.
Dec 13 01:18:25.098339 containerd[1473]: time="2024-12-13T01:18:25.098112067Z" level=info msg="StopContainer for \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\" with timeout 2 (s)"
Dec 13 01:18:25.098339 containerd[1473]: time="2024-12-13T01:18:25.098305715Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:18:25.098470 containerd[1473]: time="2024-12-13T01:18:25.098354488Z" level=info msg="Stop container \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\" with signal terminated"
Dec 13 01:18:25.099095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75-rootfs.mount: Deactivated successfully.
Dec 13 01:18:25.104469 systemd-networkd[1403]: lxc_health: Link DOWN
Dec 13 01:18:25.104477 systemd-networkd[1403]: lxc_health: Lost carrier
Dec 13 01:18:25.119636 containerd[1473]: time="2024-12-13T01:18:25.119580661Z" level=info msg="shim disconnected" id=6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75 namespace=k8s.io
Dec 13 01:18:25.119636 containerd[1473]: time="2024-12-13T01:18:25.119634603Z" level=warning msg="cleaning up after shim disconnected" id=6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75 namespace=k8s.io
Dec 13 01:18:25.119783 containerd[1473]: time="2024-12-13T01:18:25.119644983Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:25.134545 systemd[1]: cri-containerd-18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2.scope: Deactivated successfully.
Dec 13 01:18:25.134896 systemd[1]: cri-containerd-18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2.scope: Consumed 6.495s CPU time.
Dec 13 01:18:25.142247 containerd[1473]: time="2024-12-13T01:18:25.142197611Z" level=info msg="StopContainer for \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\" returns successfully"
Dec 13 01:18:25.142862 containerd[1473]: time="2024-12-13T01:18:25.142830254Z" level=info msg="StopPodSandbox for \"b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac\""
Dec 13 01:18:25.142904 containerd[1473]: time="2024-12-13T01:18:25.142875029Z" level=info msg="Container to stop \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:25.144989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac-shm.mount: Deactivated successfully.
Dec 13 01:18:25.150743 systemd[1]: cri-containerd-b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac.scope: Deactivated successfully.
Dec 13 01:18:25.155614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2-rootfs.mount: Deactivated successfully.
Dec 13 01:18:25.162715 containerd[1473]: time="2024-12-13T01:18:25.162628670Z" level=info msg="shim disconnected" id=18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2 namespace=k8s.io
Dec 13 01:18:25.162715 containerd[1473]: time="2024-12-13T01:18:25.162683844Z" level=warning msg="cleaning up after shim disconnected" id=18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2 namespace=k8s.io
Dec 13 01:18:25.162715 containerd[1473]: time="2024-12-13T01:18:25.162695457Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:25.170386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac-rootfs.mount: Deactivated successfully.
Dec 13 01:18:25.172795 containerd[1473]: time="2024-12-13T01:18:25.172699876Z" level=info msg="shim disconnected" id=b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac namespace=k8s.io
Dec 13 01:18:25.172795 containerd[1473]: time="2024-12-13T01:18:25.172751053Z" level=warning msg="cleaning up after shim disconnected" id=b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac namespace=k8s.io
Dec 13 01:18:25.172795 containerd[1473]: time="2024-12-13T01:18:25.172760490Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:25.181243 containerd[1473]: time="2024-12-13T01:18:25.181215099Z" level=info msg="StopContainer for \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\" returns successfully"
Dec 13 01:18:25.181998 containerd[1473]: time="2024-12-13T01:18:25.181921744Z" level=info msg="StopPodSandbox for \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\""
Dec 13 01:18:25.182049 containerd[1473]: time="2024-12-13T01:18:25.182006555Z" level=info msg="Container to stop \"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:25.182049 containerd[1473]: time="2024-12-13T01:18:25.182020622Z" level=info msg="Container to stop \"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:25.182049 containerd[1473]: time="2024-12-13T01:18:25.182029860Z" level=info msg="Container to stop \"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:25.182121 containerd[1473]: time="2024-12-13T01:18:25.182038927Z" level=info msg="Container to stop \"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:25.182121 containerd[1473]: time="2024-12-13T01:18:25.182066139Z" level=info msg="Container to stop \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:18:25.187020 containerd[1473]: time="2024-12-13T01:18:25.185557643Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:18:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:18:25.187804 systemd[1]: cri-containerd-7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0.scope: Deactivated successfully.
Dec 13 01:18:25.198172 containerd[1473]: time="2024-12-13T01:18:25.198135438Z" level=info msg="TearDown network for sandbox \"b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac\" successfully"
Dec 13 01:18:25.198172 containerd[1473]: time="2024-12-13T01:18:25.198161007Z" level=info msg="StopPodSandbox for \"b615dc920adb4fa7d9b769abf50b0f0d05b739eb2df5fad2d9777c0a15143bac\" returns successfully"
Dec 13 01:18:25.214687 containerd[1473]: time="2024-12-13T01:18:25.214616381Z" level=info msg="shim disconnected" id=7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0 namespace=k8s.io
Dec 13 01:18:25.214687 containerd[1473]: time="2024-12-13T01:18:25.214665585Z" level=warning msg="cleaning up after shim disconnected" id=7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0 namespace=k8s.io
Dec 13 01:18:25.214687 containerd[1473]: time="2024-12-13T01:18:25.214674141Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:25.228528 containerd[1473]: time="2024-12-13T01:18:25.228479042Z" level=info msg="TearDown network for sandbox \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" successfully"
Dec 13 01:18:25.228528 containerd[1473]: time="2024-12-13T01:18:25.228513237Z" level=info msg="StopPodSandbox for \"7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0\" returns successfully"
Dec 13 01:18:25.286251 kubelet[2609]: I1213 01:18:25.286171 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtpvg\" (UniqueName: \"kubernetes.io/projected/cd58a079-81e1-405b-a4eb-cd5045926aa5-kube-api-access-rtpvg\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.286709 kubelet[2609]: I1213 01:18:25.286268 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-bpf-maps\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.286709 kubelet[2609]: I1213 01:18:25.286292 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-hostproc\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.286709 kubelet[2609]: I1213 01:18:25.286314 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-host-proc-sys-kernel\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.286709 kubelet[2609]: I1213 01:18:25.286332 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-host-proc-sys-net\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.286709 kubelet[2609]: I1213 01:18:25.286341 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:25.286709 kubelet[2609]: I1213 01:18:25.286355 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr7gq\" (UniqueName: \"kubernetes.io/projected/3361d41b-c539-427b-a4aa-97dc5822a1c1-kube-api-access-mr7gq\") pod \"3361d41b-c539-427b-a4aa-97dc5822a1c1\" (UID: \"3361d41b-c539-427b-a4aa-97dc5822a1c1\") "
Dec 13 01:18:25.286863 kubelet[2609]: I1213 01:18:25.286430 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-cgroup\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.286863 kubelet[2609]: I1213 01:18:25.286451 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-lib-modules\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.286863 kubelet[2609]: I1213 01:18:25.286469 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-etc-cni-netd\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.286863 kubelet[2609]: I1213 01:18:25.286486 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cni-path\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.286863 kubelet[2609]: I1213 01:18:25.286509 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-config-path\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.286863 kubelet[2609]: I1213 01:18:25.286528 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd58a079-81e1-405b-a4eb-cd5045926aa5-hubble-tls\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.287012 kubelet[2609]: I1213 01:18:25.286545 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-xtables-lock\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.287012 kubelet[2609]: I1213 01:18:25.286562 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3361d41b-c539-427b-a4aa-97dc5822a1c1-cilium-config-path\") pod \"3361d41b-c539-427b-a4aa-97dc5822a1c1\" (UID: \"3361d41b-c539-427b-a4aa-97dc5822a1c1\") "
Dec 13 01:18:25.287012 kubelet[2609]: I1213 01:18:25.286581 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd58a079-81e1-405b-a4eb-cd5045926aa5-clustermesh-secrets\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.287012 kubelet[2609]: I1213 01:18:25.286598 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-run\") pod \"cd58a079-81e1-405b-a4eb-cd5045926aa5\" (UID: \"cd58a079-81e1-405b-a4eb-cd5045926aa5\") "
Dec 13 01:18:25.287012 kubelet[2609]: I1213 01:18:25.286612 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:25.287012 kubelet[2609]: I1213 01:18:25.286638 2609 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.287153 kubelet[2609]: I1213 01:18:25.286651 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-hostproc" (OuterVolumeSpecName: "hostproc") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:25.287153 kubelet[2609]: I1213 01:18:25.286659 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:25.287153 kubelet[2609]: I1213 01:18:25.286670 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:25.287153 kubelet[2609]: I1213 01:18:25.286681 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:25.288174 kubelet[2609]: I1213 01:18:25.288145 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:25.290500 kubelet[2609]: I1213 01:18:25.290227 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd58a079-81e1-405b-a4eb-cd5045926aa5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:18:25.290500 kubelet[2609]: I1213 01:18:25.290292 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:25.290500 kubelet[2609]: I1213 01:18:25.290311 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:25.290500 kubelet[2609]: I1213 01:18:25.290355 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cni-path" (OuterVolumeSpecName: "cni-path") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:18:25.291049 kubelet[2609]: I1213 01:18:25.291018 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd58a079-81e1-405b-a4eb-cd5045926aa5-kube-api-access-rtpvg" (OuterVolumeSpecName: "kube-api-access-rtpvg") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "kube-api-access-rtpvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:18:25.291483 kubelet[2609]: I1213 01:18:25.291448 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3361d41b-c539-427b-a4aa-97dc5822a1c1-kube-api-access-mr7gq" (OuterVolumeSpecName: "kube-api-access-mr7gq") pod "3361d41b-c539-427b-a4aa-97dc5822a1c1" (UID: "3361d41b-c539-427b-a4aa-97dc5822a1c1"). InnerVolumeSpecName "kube-api-access-mr7gq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:18:25.292253 kubelet[2609]: I1213 01:18:25.292230 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd58a079-81e1-405b-a4eb-cd5045926aa5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:18:25.292353 kubelet[2609]: I1213 01:18:25.292330 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cd58a079-81e1-405b-a4eb-cd5045926aa5" (UID: "cd58a079-81e1-405b-a4eb-cd5045926aa5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:18:25.292484 kubelet[2609]: I1213 01:18:25.292463 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3361d41b-c539-427b-a4aa-97dc5822a1c1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3361d41b-c539-427b-a4aa-97dc5822a1c1" (UID: "3361d41b-c539-427b-a4aa-97dc5822a1c1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:18:25.387735 kubelet[2609]: I1213 01:18:25.387599 2609 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd58a079-81e1-405b-a4eb-cd5045926aa5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.387735 kubelet[2609]: I1213 01:18:25.387641 2609 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.387735 kubelet[2609]: I1213 01:18:25.387655 2609 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rtpvg\" (UniqueName: \"kubernetes.io/projected/cd58a079-81e1-405b-a4eb-cd5045926aa5-kube-api-access-rtpvg\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.387735 kubelet[2609]: I1213 01:18:25.387667 2609 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.387735 kubelet[2609]: I1213 01:18:25.387676 2609 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.387735 kubelet[2609]: I1213 01:18:25.387685 2609 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.387735 kubelet[2609]: I1213 01:18:25.387694 2609 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mr7gq\" (UniqueName: \"kubernetes.io/projected/3361d41b-c539-427b-a4aa-97dc5822a1c1-kube-api-access-mr7gq\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.387735 kubelet[2609]: I1213 01:18:25.387703 2609 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.388290 kubelet[2609]: I1213 01:18:25.387712 2609 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.388290 kubelet[2609]: I1213 01:18:25.387721 2609 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.388290 kubelet[2609]: I1213 01:18:25.387730 2609 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.388290 kubelet[2609]: I1213 01:18:25.387739 2609 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd58a079-81e1-405b-a4eb-cd5045926aa5-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.388290 kubelet[2609]: I1213 01:18:25.387748 2609 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd58a079-81e1-405b-a4eb-cd5045926aa5-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.388290 kubelet[2609]: I1213 01:18:25.387758 2609 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd58a079-81e1-405b-a4eb-cd5045926aa5-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.388290 kubelet[2609]: I1213 01:18:25.387767 2609 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3361d41b-c539-427b-a4aa-97dc5822a1c1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:18:25.989653 kubelet[2609]: I1213 01:18:25.989556 2609 scope.go:117] "RemoveContainer" containerID="18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2"
Dec 13 01:18:25.994295 containerd[1473]: time="2024-12-13T01:18:25.994218142Z" level=info msg="RemoveContainer for \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\""
Dec 13 01:18:25.995805 systemd[1]: Removed slice kubepods-burstable-podcd58a079_81e1_405b_a4eb_cd5045926aa5.slice - libcontainer container kubepods-burstable-podcd58a079_81e1_405b_a4eb_cd5045926aa5.slice.
Dec 13 01:18:25.996075 systemd[1]: kubepods-burstable-podcd58a079_81e1_405b_a4eb_cd5045926aa5.slice: Consumed 6.596s CPU time.
Dec 13 01:18:25.997575 systemd[1]: Removed slice kubepods-besteffort-pod3361d41b_c539_427b_a4aa_97dc5822a1c1.slice - libcontainer container kubepods-besteffort-pod3361d41b_c539_427b_a4aa_97dc5822a1c1.slice.
Dec 13 01:18:26.045935 containerd[1473]: time="2024-12-13T01:18:26.045860122Z" level=info msg="RemoveContainer for \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\" returns successfully"
Dec 13 01:18:26.046591 kubelet[2609]: I1213 01:18:26.046566 2609 scope.go:117] "RemoveContainer" containerID="58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b"
Dec 13 01:18:26.047751 containerd[1473]: time="2024-12-13T01:18:26.047712386Z" level=info msg="RemoveContainer for \"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b\""
Dec 13 01:18:26.051072 containerd[1473]: time="2024-12-13T01:18:26.051034336Z" level=info msg="RemoveContainer for \"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b\" returns successfully"
Dec 13 01:18:26.051391 kubelet[2609]: I1213 01:18:26.051208 2609 scope.go:117] "RemoveContainer" containerID="464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0"
Dec 13 01:18:26.052091 containerd[1473]: time="2024-12-13T01:18:26.052045880Z" level=info msg="RemoveContainer for \"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0\""
Dec 13 01:18:26.055047 containerd[1473]: time="2024-12-13T01:18:26.055009598Z" level=info msg="RemoveContainer for \"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0\" returns successfully"
Dec 13 01:18:26.055148 kubelet[2609]: I1213 01:18:26.055130 2609 scope.go:117] "RemoveContainer" containerID="02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd"
Dec 13 01:18:26.056107 containerd[1473]: time="2024-12-13T01:18:26.056056479Z" level=info msg="RemoveContainer for \"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd\""
Dec 13 01:18:26.059055 containerd[1473]: time="2024-12-13T01:18:26.059023122Z" level=info msg="RemoveContainer for \"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd\" returns successfully"
Dec 13 01:18:26.059200 kubelet[2609]: I1213 01:18:26.059153 2609 scope.go:117] "RemoveContainer" containerID="4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c"
Dec 13 01:18:26.059938 containerd[1473]: time="2024-12-13T01:18:26.059906132Z" level=info msg="RemoveContainer for \"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c\""
Dec 13 01:18:26.062994 containerd[1473]: time="2024-12-13T01:18:26.062959079Z" level=info msg="RemoveContainer for \"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c\" returns successfully"
Dec 13 01:18:26.063189 kubelet[2609]: I1213 01:18:26.063147 2609 scope.go:117] "RemoveContainer" containerID="18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2"
Dec 13 01:18:26.066229 containerd[1473]: time="2024-12-13T01:18:26.066162943Z" level=error msg="ContainerStatus for \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\": not found"
Dec 13 01:18:26.066366 kubelet[2609]: E1213 01:18:26.066338 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\": not found" containerID="18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2"
Dec 13 01:18:26.066457 kubelet[2609]: I1213 01:18:26.066432 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2"} err="failed to get container status \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"18ae650b631d38152ae09317d0298f23e3465c38356cd84644ab6d48ddb4b9f2\": not found"
Dec 13 01:18:26.066457 kubelet[2609]: I1213 01:18:26.066448 2609 scope.go:117] "RemoveContainer" containerID="58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b"
Dec 13 01:18:26.066707 containerd[1473]: time="2024-12-13T01:18:26.066662134Z" level=error msg="ContainerStatus for \"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b\": not found"
Dec 13 01:18:26.066846 kubelet[2609]: E1213 01:18:26.066816 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b\": not found" containerID="58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b"
Dec 13 01:18:26.066879 kubelet[2609]: I1213 01:18:26.066866 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b"} err="failed to get container status \"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b\": rpc error: code = NotFound desc = an error occurred when try to find container \"58cc01737b3072255fd9eff845215fcd6051d74e27fdb1e01a4d56c5b51f282b\": not found"
Dec 13 01:18:26.066902 kubelet[2609]: I1213 01:18:26.066881 2609 scope.go:117] "RemoveContainer" containerID="464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0"
Dec 13 01:18:26.067089 containerd[1473]: time="2024-12-13T01:18:26.067053878Z" level=error msg="ContainerStatus for \"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0\": not found"
Dec 13 01:18:26.067199 kubelet[2609]: E1213 01:18:26.067168 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0\": not found" containerID="464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0"
Dec 13 01:18:26.067245 kubelet[2609]: I1213 01:18:26.067212 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0"} err="failed to get container status \"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"464158acb148ec3734c11bd11fd8e00cd4cb6ef7a80fce58163f4cd11e9a03d0\": not found"
Dec 13 01:18:26.067245 kubelet[2609]: I1213 01:18:26.067223 2609 scope.go:117] "RemoveContainer" containerID="02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd"
Dec 13 01:18:26.067372 containerd[1473]: time="2024-12-13T01:18:26.067342508Z" level=error msg="ContainerStatus for \"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd\": not found"
Dec 13 01:18:26.067466 kubelet[2609]: E1213 01:18:26.067441 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd\": not found" containerID="02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd"
Dec 13 01:18:26.067525 kubelet[2609]: I1213 01:18:26.067486 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd"} err="failed to get container status \"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"02fee6f90a8fdd1b29f33783dfc0f603893ae1cc3f540a848d97916e28ec01dd\": not found"
Dec 13 01:18:26.067525 kubelet[2609]: I1213 01:18:26.067498 2609 scope.go:117] "RemoveContainer" containerID="4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c"
Dec 13 01:18:26.067662 containerd[1473]: time="2024-12-13T01:18:26.067631668Z" level=error msg="ContainerStatus for \"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c\": not found"
Dec 13 01:18:26.067807 kubelet[2609]: E1213 01:18:26.067785 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c\": not found" containerID="4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c"
Dec 13 01:18:26.067847 kubelet[2609]: I1213 01:18:26.067829 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c"} err="failed to get container status \"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4eb545fa97d27aaea16087a3cdd31cc3e38163d65431894906dd02a323af6a8c\": not found"
Dec 13 01:18:26.067847 kubelet[2609]: I1213 01:18:26.067845 2609 scope.go:117] "RemoveContainer" containerID="6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75"
Dec 13 01:18:26.068791 containerd[1473]: time="2024-12-13T01:18:26.068753201Z" level=info msg="RemoveContainer for \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\""
Dec 13 01:18:26.071820 containerd[1473]: time="2024-12-13T01:18:26.071789828Z" level=info msg="RemoveContainer for \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\" returns successfully"
Dec 13 01:18:26.071958 kubelet[2609]: I1213 01:18:26.071929 2609 scope.go:117] "RemoveContainer" containerID="6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75"
Dec 13 01:18:26.072140 containerd[1473]: time="2024-12-13T01:18:26.072108584Z" level=error msg="ContainerStatus for \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\": not found"
Dec 13 01:18:26.072258 kubelet[2609]: E1213 01:18:26.072240 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\": not found" containerID="6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75"
Dec 13 01:18:26.072307 kubelet[2609]: I1213 01:18:26.072269 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75"} err="failed to get container status \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\": rpc error: code = NotFound desc = an error occurred when try to find container \"6991f4d1c13ba9c17e1fde54774f2dd233430bde9cc18978a2586fe50427cb75\": not found"
Dec 13 01:18:26.076752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0-rootfs.mount: Deactivated successfully.
Dec 13 01:18:26.076871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b5ec78f8251ab1df05cd67813135d828c69f7be2be4e10a9cd8f393e0a258c0-shm.mount: Deactivated successfully.
Dec 13 01:18:26.076955 systemd[1]: var-lib-kubelet-pods-cd58a079\x2d81e1\x2d405b\x2da4eb\x2dcd5045926aa5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drtpvg.mount: Deactivated successfully.
Dec 13 01:18:26.077049 systemd[1]: var-lib-kubelet-pods-3361d41b\x2dc539\x2d427b\x2da4aa\x2d97dc5822a1c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmr7gq.mount: Deactivated successfully.
Dec 13 01:18:26.077123 systemd[1]: var-lib-kubelet-pods-cd58a079\x2d81e1\x2d405b\x2da4eb\x2dcd5045926aa5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:18:26.077219 systemd[1]: var-lib-kubelet-pods-cd58a079\x2d81e1\x2d405b\x2da4eb\x2dcd5045926aa5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:18:26.539081 kubelet[2609]: I1213 01:18:26.539039 2609 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3361d41b-c539-427b-a4aa-97dc5822a1c1" path="/var/lib/kubelet/pods/3361d41b-c539-427b-a4aa-97dc5822a1c1/volumes"
Dec 13 01:18:26.539685 kubelet[2609]: I1213 01:18:26.539660 2609 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cd58a079-81e1-405b-a4eb-cd5045926aa5" path="/var/lib/kubelet/pods/cd58a079-81e1-405b-a4eb-cd5045926aa5/volumes"
Dec 13 01:18:27.021602 sshd[4240]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:27.029043 systemd[1]: sshd@25-10.0.0.143:22-10.0.0.1:35614.service: Deactivated successfully.
Dec 13 01:18:27.030877 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:18:27.032586 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:18:27.043436 systemd[1]: Started sshd@26-10.0.0.143:22-10.0.0.1:35616.service - OpenSSH per-connection server daemon (10.0.0.1:35616).
Dec 13 01:18:27.044292 systemd-logind[1449]: Removed session 26.
Dec 13 01:18:27.080611 sshd[4410]: Accepted publickey for core from 10.0.0.1 port 35616 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:27.082163 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:27.086024 systemd-logind[1449]: New session 27 of user core.
Dec 13 01:18:27.098299 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 01:18:27.779926 sshd[4410]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:27.795700 kubelet[2609]: I1213 01:18:27.793923 2609 topology_manager.go:215] "Topology Admit Handler" podUID="3b76b7e4-52c2-4604-b16c-e6e3003068b7" podNamespace="kube-system" podName="cilium-955dx"
Dec 13 01:18:27.795700 kubelet[2609]: E1213 01:18:27.793991 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd58a079-81e1-405b-a4eb-cd5045926aa5" containerName="cilium-agent"
Dec 13 01:18:27.795700 kubelet[2609]: E1213 01:18:27.794013 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd58a079-81e1-405b-a4eb-cd5045926aa5" containerName="mount-cgroup"
Dec 13 01:18:27.795700 kubelet[2609]: E1213 01:18:27.794021 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd58a079-81e1-405b-a4eb-cd5045926aa5" containerName="apply-sysctl-overwrites"
Dec 13 01:18:27.795700 kubelet[2609]: E1213 01:18:27.794027 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd58a079-81e1-405b-a4eb-cd5045926aa5" containerName="mount-bpf-fs"
Dec 13 01:18:27.795700 kubelet[2609]: E1213 01:18:27.794034 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3361d41b-c539-427b-a4aa-97dc5822a1c1" containerName="cilium-operator"
Dec 13 01:18:27.795700 kubelet[2609]: E1213 01:18:27.794041 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd58a079-81e1-405b-a4eb-cd5045926aa5" containerName="clean-cilium-state"
Dec 13 01:18:27.795700 kubelet[2609]: I1213 01:18:27.794063 2609 memory_manager.go:354] "RemoveStaleState removing state" podUID="3361d41b-c539-427b-a4aa-97dc5822a1c1" containerName="cilium-operator"
Dec 13 01:18:27.795700 kubelet[2609]: I1213 01:18:27.794069 2609 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd58a079-81e1-405b-a4eb-cd5045926aa5" containerName="cilium-agent"
Dec 13 01:18:27.799155 systemd[1]: sshd@26-10.0.0.143:22-10.0.0.1:35616.service: Deactivated successfully.
Dec 13 01:18:27.802192 kubelet[2609]: W1213 01:18:27.801213 2609 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:18:27.802192 kubelet[2609]: E1213 01:18:27.801251 2609 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Dec 13 01:18:27.802443 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:18:27.808255 systemd-logind[1449]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:18:27.814532 systemd[1]: Started sshd@27-10.0.0.143:22-10.0.0.1:35626.service - OpenSSH per-connection server daemon (10.0.0.1:35626).
Dec 13 01:18:27.818576 systemd-logind[1449]: Removed session 27.
Dec 13 01:18:27.825108 systemd[1]: Created slice kubepods-burstable-pod3b76b7e4_52c2_4604_b16c_e6e3003068b7.slice - libcontainer container kubepods-burstable-pod3b76b7e4_52c2_4604_b16c_e6e3003068b7.slice.
Dec 13 01:18:27.848539 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 35626 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:27.850063 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:27.853975 systemd-logind[1449]: New session 28 of user core.
Dec 13 01:18:27.864328 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:18:27.902321 kubelet[2609]: I1213 01:18:27.902291 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b76b7e4-52c2-4604-b16c-e6e3003068b7-xtables-lock\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902402 kubelet[2609]: I1213 01:18:27.902330 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b76b7e4-52c2-4604-b16c-e6e3003068b7-cilium-config-path\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902402 kubelet[2609]: I1213 01:18:27.902364 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b76b7e4-52c2-4604-b16c-e6e3003068b7-cilium-ipsec-secrets\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902402 kubelet[2609]: I1213 01:18:27.902389 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b76b7e4-52c2-4604-b16c-e6e3003068b7-cilium-run\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902479 kubelet[2609]: I1213 01:18:27.902427 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b76b7e4-52c2-4604-b16c-e6e3003068b7-hostproc\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902479 kubelet[2609]: I1213 01:18:27.902456 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b76b7e4-52c2-4604-b16c-e6e3003068b7-cilium-cgroup\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902522 kubelet[2609]: I1213 01:18:27.902513 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b76b7e4-52c2-4604-b16c-e6e3003068b7-host-proc-sys-net\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902550 kubelet[2609]: I1213 01:18:27.902533 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b76b7e4-52c2-4604-b16c-e6e3003068b7-etc-cni-netd\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902596 kubelet[2609]: I1213 01:18:27.902581 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b76b7e4-52c2-4604-b16c-e6e3003068b7-lib-modules\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902677 kubelet[2609]: I1213 01:18:27.902645 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b76b7e4-52c2-4604-b16c-e6e3003068b7-clustermesh-secrets\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902740 kubelet[2609]: I1213 01:18:27.902686 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcdcb\" (UniqueName: \"kubernetes.io/projected/3b76b7e4-52c2-4604-b16c-e6e3003068b7-kube-api-access-kcdcb\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902740 kubelet[2609]: I1213 01:18:27.902713 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b76b7e4-52c2-4604-b16c-e6e3003068b7-cni-path\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902740 kubelet[2609]: I1213 01:18:27.902732 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b76b7e4-52c2-4604-b16c-e6e3003068b7-hubble-tls\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902844 kubelet[2609]: I1213 01:18:27.902749 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b76b7e4-52c2-4604-b16c-e6e3003068b7-bpf-maps\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.902844 kubelet[2609]: I1213 01:18:27.902802 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b76b7e4-52c2-4604-b16c-e6e3003068b7-host-proc-sys-kernel\") pod \"cilium-955dx\" (UID: \"3b76b7e4-52c2-4604-b16c-e6e3003068b7\") " pod="kube-system/cilium-955dx"
Dec 13 01:18:27.914264 sshd[4423]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:27.925866 systemd[1]: sshd@27-10.0.0.143:22-10.0.0.1:35626.service: Deactivated successfully.
Dec 13 01:18:27.927666 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:18:27.929900 systemd-logind[1449]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:18:27.942489 systemd[1]: Started sshd@28-10.0.0.143:22-10.0.0.1:35636.service - OpenSSH per-connection server daemon (10.0.0.1:35636).
Dec 13 01:18:27.943506 systemd-logind[1449]: Removed session 28.
Dec 13 01:18:27.975288 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 35636 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:18:27.976758 sshd[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:27.980635 systemd-logind[1449]: New session 29 of user core.
Dec 13 01:18:27.996305 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 01:18:28.584419 kubelet[2609]: E1213 01:18:28.584377 2609 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:18:29.004457 kubelet[2609]: E1213 01:18:29.004421 2609 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Dec 13 01:18:29.004457 kubelet[2609]: E1213 01:18:29.004445 2609 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-955dx: failed to sync secret cache: timed out waiting for the condition
Dec 13 01:18:29.004879 kubelet[2609]: E1213 01:18:29.004513 2609 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b76b7e4-52c2-4604-b16c-e6e3003068b7-hubble-tls podName:3b76b7e4-52c2-4604-b16c-e6e3003068b7 nodeName:}" failed. No retries permitted until 2024-12-13 01:18:29.50449168 +0000 UTC m=+81.059960272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/3b76b7e4-52c2-4604-b16c-e6e3003068b7-hubble-tls") pod "cilium-955dx" (UID: "3b76b7e4-52c2-4604-b16c-e6e3003068b7") : failed to sync secret cache: timed out waiting for the condition
Dec 13 01:18:29.536658 kubelet[2609]: E1213 01:18:29.536608 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:29.627913 kubelet[2609]: E1213 01:18:29.627860 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:29.628453 containerd[1473]: time="2024-12-13T01:18:29.628411854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-955dx,Uid:3b76b7e4-52c2-4604-b16c-e6e3003068b7,Namespace:kube-system,Attempt:0,}"
Dec 13 01:18:29.647730 containerd[1473]: time="2024-12-13T01:18:29.647047676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:18:29.647730 containerd[1473]: time="2024-12-13T01:18:29.647699705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:18:29.647903 containerd[1473]: time="2024-12-13T01:18:29.647715044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:29.647903 containerd[1473]: time="2024-12-13T01:18:29.647796720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:29.663712 systemd[1]: run-containerd-runc-k8s.io-63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d-runc.xgfkD8.mount: Deactivated successfully.
Dec 13 01:18:29.675321 systemd[1]: Started cri-containerd-63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d.scope - libcontainer container 63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d.
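The nestedpendingoperations.go:348 entry shows the kubelet's retry discipline for a failed volume mount: the hubble-tls setup failed because the secret cache had not synced yet, so the next attempt is gated by durationBeforeRetry (500ms here). A sketch of that retry shape is below; the 500ms initial delay comes from the log, while the doubling factor and the cap are assumptions based on common exponential-backoff defaults, not a transcription of kubelet source.

    // Illustrative retry loop, not kubelet code: a failed mount is retried
    // no sooner than a delay that grows on each failure up to a cap.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func mountWithBackoff(mount func() error, maxDelay time.Duration) error {
        delay := 500 * time.Millisecond // durationBeforeRetry seen in the log
        for {
            err := mount()
            if err == nil {
                return nil
            }
            fmt.Printf("mount failed: %v; no retries permitted until %s\n",
                err, time.Now().Add(delay).Format(time.RFC3339))
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay { // assumed growth factor and cap
                delay = maxDelay
            }
        }
    }

    func main() {
        attempts := 0
        _ = mountWithBackoff(func() error {
            if attempts++; attempts < 3 {
                return errors.New("failed to sync secret cache: timed out waiting for the condition")
            }
            return nil
        }, 2*time.Minute)
    }

In the log the race resolves itself the same way: once the secret cache syncs, the retried MountVolume.SetUp succeeds and the sandbox is created below.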
Dec 13 01:18:29.697369 containerd[1473]: time="2024-12-13T01:18:29.697326337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-955dx,Uid:3b76b7e4-52c2-4604-b16c-e6e3003068b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d\""
Dec 13 01:18:29.697957 kubelet[2609]: E1213 01:18:29.697939 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:29.700223 containerd[1473]: time="2024-12-13T01:18:29.700167886Z" level=info msg="CreateContainer within sandbox \"63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:18:29.753000 containerd[1473]: time="2024-12-13T01:18:29.752952729Z" level=info msg="CreateContainer within sandbox \"63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"845a8cbb6cb29d56dc57fe360d1cc58e3933e7d8c615ecc0cd72e5d8f6ce3c8d\""
Dec 13 01:18:29.753405 containerd[1473]: time="2024-12-13T01:18:29.753383638Z" level=info msg="StartContainer for \"845a8cbb6cb29d56dc57fe360d1cc58e3933e7d8c615ecc0cd72e5d8f6ce3c8d\""
Dec 13 01:18:29.782317 systemd[1]: Started cri-containerd-845a8cbb6cb29d56dc57fe360d1cc58e3933e7d8c615ecc0cd72e5d8f6ce3c8d.scope - libcontainer container 845a8cbb6cb29d56dc57fe360d1cc58e3933e7d8c615ecc0cd72e5d8f6ce3c8d.
Dec 13 01:18:29.808746 containerd[1473]: time="2024-12-13T01:18:29.808243152Z" level=info msg="StartContainer for \"845a8cbb6cb29d56dc57fe360d1cc58e3933e7d8c615ecc0cd72e5d8f6ce3c8d\" returns successfully"
Dec 13 01:18:29.815806 systemd[1]: cri-containerd-845a8cbb6cb29d56dc57fe360d1cc58e3933e7d8c615ecc0cd72e5d8f6ce3c8d.scope: Deactivated successfully.
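This is the CRI round trip in miniature: the kubelet asks containerd to RunPodSandbox, gets back the sandbox id (63c915c6d88c08...), then drives CreateContainer/StartContainer for each container in turn. A hedged sketch of the client side of that exchange, using the published CRI protobuf API rather than kubelet internals; the socket path is the conventional containerd default and is an assumption here:

    // Illustrative CRI client, not kubelet source code.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed default socket
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Mirrors the RunPodSandbox entry above; metadata values come from the log.
        sb, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "cilium-955dx",
                    Uid:       "3b76b7e4-52c2-4604-b16c-e6e3003068b7",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        // CreateContainer/StartContainer calls would follow, one per container.
        log.Printf("sandbox id: %s", sb.PodSandboxId)
    }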
Dec 13 01:18:29.844476 containerd[1473]: time="2024-12-13T01:18:29.844419295Z" level=info msg="shim disconnected" id=845a8cbb6cb29d56dc57fe360d1cc58e3933e7d8c615ecc0cd72e5d8f6ce3c8d namespace=k8s.io
Dec 13 01:18:29.844476 containerd[1473]: time="2024-12-13T01:18:29.844467567Z" level=warning msg="cleaning up after shim disconnected" id=845a8cbb6cb29d56dc57fe360d1cc58e3933e7d8c615ecc0cd72e5d8f6ce3c8d namespace=k8s.io
Dec 13 01:18:29.844476 containerd[1473]: time="2024-12-13T01:18:29.844476424Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:29.999788 kubelet[2609]: E1213 01:18:29.999761 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:30.001510 containerd[1473]: time="2024-12-13T01:18:30.001249737Z" level=info msg="CreateContainer within sandbox \"63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:18:30.014215 containerd[1473]: time="2024-12-13T01:18:30.014154681Z" level=info msg="CreateContainer within sandbox \"63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e124ac7f2b288d1ff7d0c145b8a8eadb080d58fdba9eaccf87a39bf3b42d3acb\""
Dec 13 01:18:30.014846 containerd[1473]: time="2024-12-13T01:18:30.014812230Z" level=info msg="StartContainer for \"e124ac7f2b288d1ff7d0c145b8a8eadb080d58fdba9eaccf87a39bf3b42d3acb\""
Dec 13 01:18:30.041308 systemd[1]: Started cri-containerd-e124ac7f2b288d1ff7d0c145b8a8eadb080d58fdba9eaccf87a39bf3b42d3acb.scope - libcontainer container e124ac7f2b288d1ff7d0c145b8a8eadb080d58fdba9eaccf87a39bf3b42d3acb.
Dec 13 01:18:30.071544 systemd[1]: cri-containerd-e124ac7f2b288d1ff7d0c145b8a8eadb080d58fdba9eaccf87a39bf3b42d3acb.scope: Deactivated successfully.
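The dns.go:153 warning that recurs throughout this capture is the kubelet capping a pod's resolv.conf at the classic libc limit of three nameservers; anything beyond the first three is dropped, and the applied line (1.1.1.1 1.0.0.1 8.8.8.8) is logged. A sketch of that truncation follows; the three-server limit is real, but the function and the hypothetical fourth server (8.8.4.4) are illustrative, since the log does not say which server was omitted:

    // Illustrative truncation, not kubelet source code.
    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // classic libc MAXNS limit the kubelet enforces

    func applyNameserverLimit(nameservers []string) (kept, omitted []string) {
        if len(nameservers) <= maxNameservers {
            return nameservers, nil
        }
        return nameservers[:maxNameservers], nameservers[maxNameservers:]
    }

    func main() {
        // Four upstream servers; the fourth is a hypothetical stand-in.
        kept, omitted := applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
        fmt.Printf("the applied nameserver line is: %s (omitted: %s)\n",
            strings.Join(kept, " "), strings.Join(omitted, " "))
    }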
Dec 13 01:18:30.079386 containerd[1473]: time="2024-12-13T01:18:30.079345405Z" level=info msg="StartContainer for \"e124ac7f2b288d1ff7d0c145b8a8eadb080d58fdba9eaccf87a39bf3b42d3acb\" returns successfully"
Dec 13 01:18:30.155635 containerd[1473]: time="2024-12-13T01:18:30.153094338Z" level=info msg="shim disconnected" id=e124ac7f2b288d1ff7d0c145b8a8eadb080d58fdba9eaccf87a39bf3b42d3acb namespace=k8s.io
Dec 13 01:18:30.155635 containerd[1473]: time="2024-12-13T01:18:30.153155424Z" level=warning msg="cleaning up after shim disconnected" id=e124ac7f2b288d1ff7d0c145b8a8eadb080d58fdba9eaccf87a39bf3b42d3acb namespace=k8s.io
Dec 13 01:18:30.155635 containerd[1473]: time="2024-12-13T01:18:30.153165814Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:30.194587 kubelet[2609]: I1213 01:18:30.194545 2609 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:18:30Z","lastTransitionTime":"2024-12-13T01:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:18:31.002773 kubelet[2609]: E1213 01:18:31.002745 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:31.005172 containerd[1473]: time="2024-12-13T01:18:31.005125749Z" level=info msg="CreateContainer within sandbox \"63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:18:31.131271 containerd[1473]: time="2024-12-13T01:18:31.131218324Z" level=info msg="CreateContainer within sandbox \"63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac31cc25890819199656f3777e0e29edc554be98d3b87831c923d1a230ef6fb6\""
Dec 13 01:18:31.131755 containerd[1473]: time="2024-12-13T01:18:31.131727190Z" level=info msg="StartContainer for \"ac31cc25890819199656f3777e0e29edc554be98d3b87831c923d1a230ef6fb6\""
Dec 13 01:18:31.178310 systemd[1]: Started cri-containerd-ac31cc25890819199656f3777e0e29edc554be98d3b87831c923d1a230ef6fb6.scope - libcontainer container ac31cc25890819199656f3777e0e29edc554be98d3b87831c923d1a230ef6fb6.
Dec 13 01:18:31.205309 containerd[1473]: time="2024-12-13T01:18:31.205260808Z" level=info msg="StartContainer for \"ac31cc25890819199656f3777e0e29edc554be98d3b87831c923d1a230ef6fb6\" returns successfully"
Dec 13 01:18:31.205496 systemd[1]: cri-containerd-ac31cc25890819199656f3777e0e29edc554be98d3b87831c923d1a230ef6fb6.scope: Deactivated successfully.
Dec 13 01:18:31.228552 containerd[1473]: time="2024-12-13T01:18:31.228479079Z" level=info msg="shim disconnected" id=ac31cc25890819199656f3777e0e29edc554be98d3b87831c923d1a230ef6fb6 namespace=k8s.io
Dec 13 01:18:31.228552 containerd[1473]: time="2024-12-13T01:18:31.228526629Z" level=warning msg="cleaning up after shim disconnected" id=ac31cc25890819199656f3777e0e29edc554be98d3b87831c923d1a230ef6fb6 namespace=k8s.io
Dec 13 01:18:31.228552 containerd[1473]: time="2024-12-13T01:18:31.228534394Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:31.518148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac31cc25890819199656f3777e0e29edc554be98d3b87831c923d1a230ef6fb6-rootfs.mount: Deactivated successfully.
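Each init container here runs to completion and exits, which is why every StartContainer success is immediately followed by the scope deactivating and the shim disconnecting: mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs, each a short-lived setup step. The mount-bpf-fs step conventionally ensures the BPF pseudo-filesystem is mounted so Cilium can pin eBPF maps; a minimal Go equivalent of that one action, assuming the standard /sys/fs/bpf mountpoint (root required):

    // Sketch of the mount-bpf-fs init step, not Cilium's actual entrypoint.
    package main

    import (
        "log"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Equivalent to: mount -t bpf bpf /sys/fs/bpf
        if err := unix.Mount("bpf", "/sys/fs/bpf", "bpf", 0, ""); err != nil && err != unix.EBUSY {
            log.Fatalf("mounting bpffs: %v", err) // EBUSY typically means it is already mounted
        }
    }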
Dec 13 01:18:32.006654 kubelet[2609]: E1213 01:18:32.006597 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:32.008656 containerd[1473]: time="2024-12-13T01:18:32.008618822Z" level=info msg="CreateContainer within sandbox \"63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:18:32.023097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1121595636.mount: Deactivated successfully.
Dec 13 01:18:32.024902 containerd[1473]: time="2024-12-13T01:18:32.024868941Z" level=info msg="CreateContainer within sandbox \"63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1e98401abbf306335c56c4e4abc6ca12034e1affb44f304e171763a6411e5528\""
Dec 13 01:18:32.025445 containerd[1473]: time="2024-12-13T01:18:32.025414517Z" level=info msg="StartContainer for \"1e98401abbf306335c56c4e4abc6ca12034e1affb44f304e171763a6411e5528\""
Dec 13 01:18:32.058312 systemd[1]: Started cri-containerd-1e98401abbf306335c56c4e4abc6ca12034e1affb44f304e171763a6411e5528.scope - libcontainer container 1e98401abbf306335c56c4e4abc6ca12034e1affb44f304e171763a6411e5528.
Dec 13 01:18:32.081709 systemd[1]: cri-containerd-1e98401abbf306335c56c4e4abc6ca12034e1affb44f304e171763a6411e5528.scope: Deactivated successfully.
Dec 13 01:18:32.084154 containerd[1473]: time="2024-12-13T01:18:32.084098871Z" level=info msg="StartContainer for \"1e98401abbf306335c56c4e4abc6ca12034e1affb44f304e171763a6411e5528\" returns successfully"
Dec 13 01:18:32.106094 containerd[1473]: time="2024-12-13T01:18:32.106022250Z" level=info msg="shim disconnected" id=1e98401abbf306335c56c4e4abc6ca12034e1affb44f304e171763a6411e5528 namespace=k8s.io
Dec 13 01:18:32.106094 containerd[1473]: time="2024-12-13T01:18:32.106089086Z" level=warning msg="cleaning up after shim disconnected" id=1e98401abbf306335c56c4e4abc6ca12034e1affb44f304e171763a6411e5528 namespace=k8s.io
Dec 13 01:18:32.106328 containerd[1473]: time="2024-12-13T01:18:32.106100819Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:32.517659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e98401abbf306335c56c4e4abc6ca12034e1affb44f304e171763a6411e5528-rootfs.mount: Deactivated successfully.
Dec 13 01:18:33.009782 kubelet[2609]: E1213 01:18:33.009756 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:33.012321 containerd[1473]: time="2024-12-13T01:18:33.012275565Z" level=info msg="CreateContainer within sandbox \"63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:18:33.028081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount100449250.mount: Deactivated successfully.
Dec 13 01:18:33.031678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount803822170.mount: Deactivated successfully.
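The odd-looking unit names above (var-lib-containerd-tmpmounts-containerd\x2dmount1121595636.mount) are systemd's path escaping for mount units: '/' is encoded as '-', and a literal '-' in the path as \x2d. A tiny decoder for that convention, as a sketch rather than the systemd implementation:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnitPath reverses systemd's mount-unit name escaping:
    // '-' stands for '/', and \xNN encodes a literal byte (e.g. \x2d is '-').
    func unescapeUnitPath(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        var b strings.Builder
        b.WriteByte('/') // unit names drop the leading slash
        for i := 0; i < len(name); i++ {
            switch {
            case name[i] == '-':
                b.WriteByte('/')
            case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
                v, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
                b.WriteByte(byte(v))
                i += 3
            default:
                b.WriteByte(name[i])
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(unescapeUnitPath(`var-lib-containerd-tmpmounts-containerd\x2dmount1121595636.mount`))
        // /var/lib/containerd/tmpmounts/containerd-mount1121595636
    }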
Dec 13 01:18:33.035020 containerd[1473]: time="2024-12-13T01:18:33.034976938Z" level=info msg="CreateContainer within sandbox \"63c915c6d88c087bca50e402d27b75d5671a98610593d3322d53d8dc49ca953d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4b51015c32ff52722da6638e8be1e0010c1067d98b2164b86c254607496aa3e2\""
Dec 13 01:18:33.035574 containerd[1473]: time="2024-12-13T01:18:33.035455085Z" level=info msg="StartContainer for \"4b51015c32ff52722da6638e8be1e0010c1067d98b2164b86c254607496aa3e2\""
Dec 13 01:18:33.063328 systemd[1]: Started cri-containerd-4b51015c32ff52722da6638e8be1e0010c1067d98b2164b86c254607496aa3e2.scope - libcontainer container 4b51015c32ff52722da6638e8be1e0010c1067d98b2164b86c254607496aa3e2.
Dec 13 01:18:33.092852 containerd[1473]: time="2024-12-13T01:18:33.092804863Z" level=info msg="StartContainer for \"4b51015c32ff52722da6638e8be1e0010c1067d98b2164b86c254607496aa3e2\" returns successfully"
Dec 13 01:18:33.502218 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:18:34.015024 kubelet[2609]: E1213 01:18:34.014988 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:35.628892 kubelet[2609]: E1213 01:18:35.628844 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:36.473803 systemd-networkd[1403]: lxc_health: Link UP
Dec 13 01:18:36.482126 systemd-networkd[1403]: lxc_health: Gained carrier
Dec 13 01:18:36.537681 kubelet[2609]: E1213 01:18:36.537632 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:37.630273 kubelet[2609]: E1213 01:18:37.630240 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:37.642992 kubelet[2609]: I1213 01:18:37.642960 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-955dx" podStartSLOduration=10.642927342 podStartE2EDuration="10.642927342s" podCreationTimestamp="2024-12-13 01:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:34.026569968 +0000 UTC m=+85.582038540" watchObservedRunningTime="2024-12-13 01:18:37.642927342 +0000 UTC m=+89.198395874"
Dec 13 01:18:38.022603 kubelet[2609]: E1213 01:18:38.022577 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:18:38.289330 systemd-networkd[1403]: lxc_health: Gained IPv6LL
Dec 13 01:18:38.496025 systemd[1]: run-containerd-runc-k8s.io-4b51015c32ff52722da6638e8be1e0010c1067d98b2164b86c254607496aa3e2-runc.Gog0n4.mount: Deactivated successfully.
Dec 13 01:18:42.666990 systemd[1]: run-containerd-runc-k8s.io-4b51015c32ff52722da6638e8be1e0010c1067d98b2164b86c254607496aa3e2-runc.Y1WlqC.mount: Deactivated successfully.
Dec 13 01:18:42.715263 sshd[4431]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:42.718859 systemd[1]: sshd@28-10.0.0.143:22-10.0.0.1:35636.service: Deactivated successfully.
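The pod_startup_latency_tracker entry reports podStartSLOduration=10.642927342. Reading the fields as printed, that figure is watchObservedRunningTime minus podCreationTimestamp (the pull timestamps are zero-valued here, so no image-pull time is excluded); a quick check of the arithmetic under that reading:

    // Verifies the duration in the tracker entry from its own timestamps.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2024-12-13 01:18:27 +0000 UTC")
        observed, _ := time.Parse(layout, "2024-12-13 01:18:37.642927342 +0000 UTC")
        fmt.Println(observed.Sub(created).Seconds()) // 10.642927342
    }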
Dec 13 01:18:42.720746 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 01:18:42.721319 systemd-logind[1449]: Session 29 logged out. Waiting for processes to exit.
Dec 13 01:18:42.722082 systemd-logind[1449]: Removed session 29.