Dec 13 13:20:02.082390 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:20:02.082418 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:20:02.082433 kernel: BIOS-provided physical RAM map:
Dec 13 13:20:02.082442 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 13:20:02.082450 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 13:20:02.082459 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 13:20:02.082469 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 13:20:02.082478 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 13:20:02.082487 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 13:20:02.082499 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 13:20:02.082507 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:20:02.082516 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 13:20:02.082525 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 13:20:02.082534 kernel: NX (Execute Disable) protection: active
Dec 13 13:20:02.082545 kernel: APIC: Static calls initialized
Dec 13 13:20:02.082557 kernel: SMBIOS 2.8 present.
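
The e820 map above is the firmware's view of physical RAM. A minimal Python sketch (not part of the log; the two "usable" lines are copied from it, everything else is illustration) that totals the usable ranges:

    import re

    # The two "usable" entries from the BIOS-e820 map printed above.
    DMESG = """\
    BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
    """
    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    total = sum(int(end, 16) - int(start, 16) + 1      # ranges are inclusive
                for start, end, kind in E820.findall(DMESG)
                if kind == "usable")
    print(total, f"bytes = {total / 2**30:.2f} GiB")   # 2633481216 bytes = 2.45 GiB

The total (~2.45 GiB) is consistent with the "Memory: 2432544K/2571752K available" line later in the boot.
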
Dec 13 13:20:02.082567 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 13:20:02.082576 kernel: Hypervisor detected: KVM
Dec 13 13:20:02.082586 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:20:02.082596 kernel: kvm-clock: using sched offset of 2526230005 cycles
Dec 13 13:20:02.082605 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:20:02.082616 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 13:20:02.082626 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:20:02.082636 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:20:02.082646 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 13:20:02.082658 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 13:20:02.082668 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:20:02.082678 kernel: Using GB pages for direct mapping
Dec 13 13:20:02.082688 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:20:02.082698 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 13:20:02.082707 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:20:02.082717 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:20:02.082727 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:20:02.082737 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 13:20:02.082749 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:20:02.082759 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:20:02.082769 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:20:02.082778 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:20:02.082788 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 13:20:02.082798 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 13:20:02.082812 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 13:20:02.082825 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 13:20:02.082835 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 13:20:02.082853 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 13:20:02.082864 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 13:20:02.082874 kernel: No NUMA configuration found
Dec 13 13:20:02.082884 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 13:20:02.082894 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 13:20:02.082907 kernel: Zone ranges:
Dec 13 13:20:02.082917 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:20:02.082927 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 13:20:02.082937 kernel: Normal empty
Dec 13 13:20:02.082948 kernel: Movable zone start for each node
Dec 13 13:20:02.082958 kernel: Early memory node ranges
Dec 13 13:20:02.082968 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 13:20:02.082978 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 13:20:02.082988 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 13:20:02.083001 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:20:02.083011 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 13:20:02.083021 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 13:20:02.083031 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 13:20:02.083041 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:20:02.083052 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:20:02.083062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 13:20:02.083072 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:20:02.083082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:20:02.083092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:20:02.083105 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:20:02.083115 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:20:02.083125 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 13:20:02.083136 kernel: TSC deadline timer available
Dec 13 13:20:02.083146 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 13:20:02.083156 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 13:20:02.083166 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 13:20:02.083176 kernel: kvm-guest: setup PV sched yield
Dec 13 13:20:02.083186 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 13:20:02.083199 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:20:02.083209 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:20:02.083220 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 13:20:02.083230 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 13:20:02.083240 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 13:20:02.083250 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 13:20:02.083260 kernel: kvm-guest: PV spinlocks enabled
Dec 13 13:20:02.083270 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:20:02.083281 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:20:02.083295 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:20:02.083305 kernel: random: crng init done
Dec 13 13:20:02.083315 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:20:02.083325 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:20:02.083336 kernel: Fallback order for Node 0: 0
Dec 13 13:20:02.083346 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 13:20:02.083367 kernel: Policy zone: DMA32
Dec 13 13:20:02.083377 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:20:02.083391 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 138948K reserved, 0K cma-reserved)
Dec 13 13:20:02.083401 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:20:02.083411 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:20:02.083421 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:20:02.083431 kernel: Dynamic Preempt: voluntary
Dec 13 13:20:02.083441 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:20:02.083453 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:20:02.083463 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:20:02.083473 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:20:02.083486 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:20:02.083496 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:20:02.083507 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:20:02.083517 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:20:02.083527 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 13:20:02.083538 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:20:02.083548 kernel: Console: colour VGA+ 80x25
Dec 13 13:20:02.083558 kernel: printk: console [ttyS0] enabled
Dec 13 13:20:02.083568 kernel: ACPI: Core revision 20230628
Dec 13 13:20:02.083578 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 13:20:02.083591 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:20:02.083601 kernel: x2apic enabled
Dec 13 13:20:02.083611 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:20:02.083622 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 13:20:02.083632 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 13:20:02.083642 kernel: kvm-guest: setup PV IPIs
Dec 13 13:20:02.083664 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 13:20:02.083674 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 13:20:02.083685 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 13:20:02.083696 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 13:20:02.083706 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 13:20:02.083719 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 13:20:02.083730 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:20:02.083741 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:20:02.083752 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:20:02.083763 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:20:02.083776 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 13:20:02.083786 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 13:20:02.083797 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:20:02.083808 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:20:02.083819 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 13:20:02.083831 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 13:20:02.083923 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 13:20:02.083938 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:20:02.083954 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:20:02.083964 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:20:02.083975 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:20:02.083986 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 13:20:02.083997 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:20:02.084007 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:20:02.084018 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:20:02.084029 kernel: landlock: Up and running.
Dec 13 13:20:02.084039 kernel: SELinux: Initializing.
Dec 13 13:20:02.084053 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:20:02.084063 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:20:02.084074 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 13:20:02.084085 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:20:02.084096 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:20:02.084107 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:20:02.084118 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 13:20:02.084131 kernel: ... version: 0
Dec 13 13:20:02.084143 kernel: ... bit width: 48
Dec 13 13:20:02.084157 kernel: ... generic registers: 6
Dec 13 13:20:02.084168 kernel: ... value mask: 0000ffffffffffff
Dec 13 13:20:02.084178 kernel: ... max period: 00007fffffffffff
Dec 13 13:20:02.084189 kernel: ... fixed-purpose events: 0
Dec 13 13:20:02.084199 kernel: ... event mask: 000000000000003f
Dec 13 13:20:02.084210 kernel: signal: max sigframe size: 1776
Dec 13 13:20:02.084221 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:20:02.084232 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:20:02.084243 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:20:02.084256 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:20:02.084266 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 13:20:02.084277 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:20:02.084288 kernel: smpboot: Max logical packages: 1
Dec 13 13:20:02.084299 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 13:20:02.084309 kernel: devtmpfs: initialized
Dec 13 13:20:02.084320 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:20:02.084331 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:20:02.084342 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:20:02.084370 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:20:02.084381 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:20:02.084392 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:20:02.084402 kernel: audit: type=2000 audit(1734096001.531:1): state=initialized audit_enabled=0 res=1
Dec 13 13:20:02.084413 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:20:02.084424 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:20:02.084434 kernel: cpuidle: using governor menu
Dec 13 13:20:02.084445 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:20:02.084456 kernel: dca service started, version 1.12.1
Dec 13 13:20:02.084470 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 13:20:02.084481 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 13:20:02.084491 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:20:02.084502 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
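
The delay-loop and smpboot lines above are internally consistent. A small Python check, assuming CONFIG_HZ=1000 (an assumption, but the one that matches the printed values), reproduces the kernel's integer BogoMIPS formatting:

    HZ = 1000                        # assumed CONFIG_HZ
    lpj = 2794748                    # lpj= value from the calibration line above
    print(f"{lpj // (500000 // HZ)}.{(lpj // (5000 // HZ)) % 100:02d}")
    # -> 5589.49, as in "Calibrating delay loop (skipped) preset value.."
    total = 4 * lpj                  # four CPUs were brought up
    print(f"{total // (500000 // HZ)}.{(total // (5000 // HZ)) % 100:02d}")
    # -> 22357.98, as in "Total of 4 processors activated (22357.98 BogoMIPS)"
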
Dec 13 13:20:02.084513 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:20:02.084524 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:20:02.084535 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:20:02.084545 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:20:02.084556 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:20:02.084569 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:20:02.084580 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:20:02.084591 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:20:02.084601 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:20:02.084612 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:20:02.084623 kernel: ACPI: Interpreter enabled
Dec 13 13:20:02.084633 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 13:20:02.084644 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:20:02.084655 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:20:02.084668 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 13:20:02.084679 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 13:20:02.084690 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:20:02.084907 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:20:02.085060 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 13:20:02.085205 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 13:20:02.085219 kernel: PCI host bridge to bus 0000:00
Dec 13 13:20:02.085386 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:20:02.085527 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:20:02.085657 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:20:02.085786 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 13:20:02.085924 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 13:20:02.086055 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 13:20:02.086190 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:20:02.086372 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 13:20:02.086529 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 13:20:02.086673 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 13:20:02.086816 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 13:20:02.086969 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 13:20:02.087112 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 13:20:02.087269 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:20:02.087434 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 13:20:02.087580 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 13:20:02.087724 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 13:20:02.087883 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 13:20:02.088028 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 13:20:02.088172 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 13:20:02.088315 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 13:20:02.088487 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:20:02.088632 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 13:20:02.088774 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 13:20:02.088931 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 13:20:02.089076 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 13:20:02.089234 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 13:20:02.089402 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 13:20:02.089557 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 13:20:02.089699 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 13:20:02.089865 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 13:20:02.090027 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 13:20:02.090171 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 13:20:02.090185 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:20:02.090201 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:20:02.090212 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:20:02.090223 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:20:02.090234 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 13:20:02.090244 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 13:20:02.090255 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 13:20:02.090266 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 13:20:02.090276 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 13:20:02.090287 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 13:20:02.090300 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 13:20:02.090311 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 13:20:02.090321 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 13:20:02.090332 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 13:20:02.090342 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 13:20:02.090366 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 13:20:02.090377 kernel: iommu: Default domain type: Translated
Dec 13 13:20:02.090387 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:20:02.090397 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:20:02.090411 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:20:02.090422 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 13:20:02.090433 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 13:20:02.090585 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 13:20:02.090728 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 13:20:02.090879 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 13:20:02.090893 kernel: vgaarb: loaded
Dec 13 13:20:02.090904 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 13:20:02.090915 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 13:20:02.090930 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:20:02.090940 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:20:02.090951 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:20:02.090962 kernel: pnp: PnP ACPI init
Dec 13 13:20:02.091114 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 13:20:02.091129 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 13:20:02.091140 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:20:02.091150 kernel: NET: Registered PF_INET protocol family
Dec 13 13:20:02.091165 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:20:02.091176 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:20:02.091186 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:20:02.091197 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:20:02.091208 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:20:02.091219 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:20:02.091230 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:20:02.091241 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:20:02.091251 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:20:02.091265 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:20:02.091412 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:20:02.091543 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:20:02.091674 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:20:02.091806 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 13:20:02.091948 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 13:20:02.092083 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 13:20:02.092097 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:20:02.092113 kernel: Initialise system trusted keyrings
Dec 13 13:20:02.092124 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:20:02.092135 kernel: Key type asymmetric registered
Dec 13 13:20:02.092146 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:20:02.092157 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 13:20:02.092168 kernel: io scheduler mq-deadline registered
Dec 13 13:20:02.092179 kernel: io scheduler kyber registered
Dec 13 13:20:02.092190 kernel: io scheduler bfq registered
Dec 13 13:20:02.092201 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 13:20:02.092215 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 13:20:02.092226 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 13:20:02.092237 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 13:20:02.092248 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:20:02.092260 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 13:20:02.092271 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 13:20:02.092282 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 13:20:02.092293 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 13:20:02.093580 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 13:20:02.093766 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 13:20:02.093916 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T13:20:01 UTC (1734096001)
Dec 13 13:20:02.094057 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 13:20:02.094072 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 13:20:02.094085 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 13:20:02.094097 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:20:02.094108 kernel: Segment Routing with IPv6
Dec 13 13:20:02.094120 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:20:02.094136 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:20:02.094147 kernel: Key type dns_resolver registered
Dec 13 13:20:02.094158 kernel: IPI shorthand broadcast: enabled
Dec 13 13:20:02.094170 kernel: sched_clock: Marking stable (790004059, 105229856)->(951513447, -56279532)
Dec 13 13:20:02.094182 kernel: registered taskstats version 1
Dec 13 13:20:02.094193 kernel: Loading compiled-in X.509 certificates
Dec 13 13:20:02.094206 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162'
Dec 13 13:20:02.094217 kernel: Key type .fscrypt registered
Dec 13 13:20:02.094228 kernel: Key type fscrypt-provisioning registered
Dec 13 13:20:02.094242 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:20:02.094254 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:20:02.094265 kernel: ima: No architecture policies found
Dec 13 13:20:02.094277 kernel: clk: Disabling unused clocks
Dec 13 13:20:02.094287 kernel: Freeing unused kernel image (initmem) memory: 43328K
Dec 13 13:20:02.094298 kernel: Write protecting the kernel read-only data: 38912k
Dec 13 13:20:02.094309 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Dec 13 13:20:02.094319 kernel: Run /init as init process
Dec 13 13:20:02.094330 kernel: with arguments:
Dec 13 13:20:02.094344 kernel: /init
Dec 13 13:20:02.094369 kernel: with environment:
Dec 13 13:20:02.094380 kernel: HOME=/
Dec 13 13:20:02.094391 kernel: TERM=linux
Dec 13 13:20:02.094402 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:20:02.094418 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:20:02.094433 systemd[1]: Detected virtualization kvm.
Dec 13 13:20:02.094445 systemd[1]: Detected architecture x86-64.
Dec 13 13:20:02.094462 systemd[1]: Running in initrd.
Dec 13 13:20:02.094474 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:20:02.094486 systemd[1]: Hostname set to <localhost>.
Dec 13 13:20:02.094499 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:20:02.094510 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:20:02.094523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:20:02.094535 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:20:02.094548 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:20:02.094577 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
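
The rtc_cmos line in this stretch maps the raw epoch value 1734096001 to a UTC wall-clock time; the same conversion in Python reproduces it exactly (and agrees with the audit(1734096001.531:1) timestamp earlier in the log):

    from datetime import datetime, timezone

    # 1734096001 is the epoch value printed by rtc_cmos above.
    print(datetime.fromtimestamp(1734096001, tz=timezone.utc).isoformat())
    # -> 2024-12-13T13:20:01+00:00
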
Dec 13 13:20:02.094592 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:20:02.094605 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:20:02.094619 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:20:02.094635 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:20:02.094647 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:20:02.094660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:20:02.094672 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:20:02.094684 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:20:02.094700 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:20:02.094712 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:20:02.094725 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:20:02.094737 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:20:02.094755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:20:02.094767 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:20:02.094780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:20:02.094792 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:20:02.094804 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:20:02.094817 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:20:02.094829 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:20:02.094851 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:20:02.094866 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:20:02.094879 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:20:02.094891 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:20:02.094903 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:20:02.094916 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:20:02.094928 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:20:02.094940 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:20:02.094953 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:20:02.094969 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:20:02.095012 systemd-journald[193]: Collecting audit messages is disabled.
Dec 13 13:20:02.095048 systemd-journald[193]: Journal started
Dec 13 13:20:02.095075 systemd-journald[193]: Runtime Journal (/run/log/journal/5664724e431b4242a294a4fb7d31b1f8) is 6.0M, max 48.3M, 42.3M free.
Dec 13 13:20:02.098852 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:20:02.108530 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:20:02.121381 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
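
Unit names like dev-disk-by\x2dlabel-ROOT.device above come from systemd's path-to-unit-name escaping: "/" becomes "-" and other special bytes are rendered as \xXX. A simplified Python re-implementation of that rule (a sketch, not systemd's own code, and it ignores some corner cases such as ":" handling):

    def systemd_escape_path(path: str) -> str:
        def esc(part: str) -> str:
            out = []
            for i, ch in enumerate(part):
                if ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                    out.append(ch)
                else:
                    out.append("\\x%02x" % ord(ch))   # e.g. "-" -> \x2d
            return "".join(out)
        return "-".join(esc(p) for p in path.split("/") if p)

    print(systemd_escape_path("/dev/disk/by-label/ROOT") + ".device")
    # -> dev-disk-by\x2dlabel-ROOT.device, matching the unit above
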
Dec 13 13:20:02.162458 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:20:02.122294 systemd-modules-load[194]: Inserted module 'overlay'
Dec 13 13:20:02.174988 kernel: Bridge firewalling registered
Dec 13 13:20:02.165793 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 13 13:20:02.172684 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:20:02.176108 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:20:02.185564 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:20:02.188298 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:20:02.189656 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:20:02.191411 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:20:02.203050 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:20:02.204765 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:20:02.211556 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:20:02.212080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:20:02.215167 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:20:02.230433 dracut-cmdline[230]: dracut-dracut-053
Dec 13 13:20:02.233853 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:20:02.249595 systemd-resolved[228]: Positive Trust Anchors:
Dec 13 13:20:02.249608 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:20:02.249639 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:20:02.252194 systemd-resolved[228]: Defaulting to hostname 'linux'.
Dec 13 13:20:02.253327 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:20:02.258444 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:20:02.325400 kernel: SCSI subsystem initialized
Dec 13 13:20:02.335390 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:20:02.345380 kernel: iscsi: registered transport (tcp)
Dec 13 13:20:02.366447 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:20:02.366518 kernel: QLogic iSCSI HBA Driver
Dec 13 13:20:02.418976 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:20:02.430466 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:20:02.457493 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:20:02.457520 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:20:02.458524 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:20:02.500375 kernel: raid6: avx2x4 gen() 30484 MB/s
Dec 13 13:20:02.517378 kernel: raid6: avx2x2 gen() 31502 MB/s
Dec 13 13:20:02.534435 kernel: raid6: avx2x1 gen() 25851 MB/s
Dec 13 13:20:02.534456 kernel: raid6: using algorithm avx2x2 gen() 31502 MB/s
Dec 13 13:20:02.552440 kernel: raid6: .... xor() 19945 MB/s, rmw enabled
Dec 13 13:20:02.552468 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 13:20:02.573378 kernel: xor: automatically using best checksumming function avx
Dec 13 13:20:02.718385 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:20:02.731788 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:20:02.743587 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:20:02.755310 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Dec 13 13:20:02.759539 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:20:02.771535 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:20:02.786903 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Dec 13 13:20:02.820496 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:20:02.835550 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:20:02.900784 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:20:02.913595 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:20:02.926085 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:20:02.929686 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:20:02.931003 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:20:02.931340 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:20:02.941403 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 13:20:02.958591 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:20:02.958755 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:20:02.958768 kernel: GPT:9289727 != 19775487
Dec 13 13:20:02.958778 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:20:02.958789 kernel: GPT:9289727 != 19775487
Dec 13 13:20:02.958804 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:20:02.958814 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:20:02.958834 kernel: libata version 3.00 loaded.
Dec 13 13:20:02.942658 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:20:02.960038 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 13:20:02.960963 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:20:02.970937 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
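
The virtio_blk and GPT lines above are self-consistent and explain why disk-uuid.service runs shortly afterwards. A quick Python check:

    blocks = 19775488                 # 512-byte logical blocks on vda
    size = blocks * 512
    print(f"{size / 1e9:.1f} GB / {size / 2**30:.2f} GiB")   # 10.1 GB / 9.43 GiB
    # The GPT warning compares the backup header's LBA with the disk's last LBA:
    print(9289727, "!=", blocks - 1)  # 9289727 != 19775487
    # i.e. the image was built for a smaller disk, so the backup GPT header is
    # not at the end of this one; disk-uuid rewrites the headers later in this boot.
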
Dec 13 13:20:02.976538 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 13:20:03.006208 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 13:20:03.006229 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 13:20:03.006406 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 13:20:03.006778 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (465)
Dec 13 13:20:03.006791 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (476)
Dec 13 13:20:03.006802 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 13:20:03.006812 kernel: AES CTR mode by8 optimization enabled
Dec 13 13:20:03.006833 kernel: scsi host0: ahci
Dec 13 13:20:03.006996 kernel: scsi host1: ahci
Dec 13 13:20:03.007142 kernel: scsi host2: ahci
Dec 13 13:20:03.007286 kernel: scsi host3: ahci
Dec 13 13:20:03.007452 kernel: scsi host4: ahci
Dec 13 13:20:03.007596 kernel: scsi host5: ahci
Dec 13 13:20:03.007742 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Dec 13 13:20:03.007754 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Dec 13 13:20:03.007765 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Dec 13 13:20:03.007776 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Dec 13 13:20:03.007790 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Dec 13 13:20:03.007800 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Dec 13 13:20:02.971060 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:20:02.974499 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:20:02.977588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:20:02.977721 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:20:02.981530 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:20:02.992573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:20:03.005297 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:20:03.051489 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:20:03.052124 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:20:03.066859 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:20:03.071637 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:20:03.072100 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:20:03.086482 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:20:03.087867 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:20:03.100561 disk-uuid[556]: Primary Header is updated.
Dec 13 13:20:03.100561 disk-uuid[556]: Secondary Entries is updated.
Dec 13 13:20:03.100561 disk-uuid[556]: Secondary Header is updated.
Dec 13 13:20:03.104395 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:20:03.119233 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:20:03.313982 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 13:20:03.314071 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 13:20:03.314103 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 13:20:03.315421 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 13:20:03.315502 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 13:20:03.316382 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 13:20:03.317384 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 13:20:03.318926 kernel: ata3.00: applying bridge limits
Dec 13 13:20:03.318946 kernel: ata3.00: configured for UDMA/100
Dec 13 13:20:03.319385 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 13:20:03.383401 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 13:20:03.401108 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 13:20:03.401126 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 13:20:04.113410 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:20:04.113577 disk-uuid[559]: The operation has completed successfully.
Dec 13 13:20:04.143789 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:20:04.143959 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:20:04.186616 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:20:04.190295 sh[592]: Success
Dec 13 13:20:04.205388 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 13:20:04.238949 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:20:04.251874 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:20:04.254030 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:20:04.269702 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52
Dec 13 13:20:04.269739 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:20:04.269751 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:20:04.270710 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:20:04.271452 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:20:04.276470 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:20:04.278859 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:20:04.296484 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:20:04.297594 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:20:04.311013 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:20:04.311067 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:20:04.311082 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:20:04.314440 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:20:04.323803 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:20:04.325650 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:20:04.333611 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:20:04.341525 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:20:04.401671 ignition[690]: Ignition 2.20.0
Dec 13 13:20:04.401682 ignition[690]: Stage: fetch-offline
Dec 13 13:20:04.401717 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:20:04.401727 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:20:04.401828 ignition[690]: parsed url from cmdline: ""
Dec 13 13:20:04.401832 ignition[690]: no config URL provided
Dec 13 13:20:04.401837 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:20:04.401848 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:20:04.401876 ignition[690]: op(1): [started] loading QEMU firmware config module
Dec 13 13:20:04.401882 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 13:20:04.411548 ignition[690]: op(1): [finished] loading QEMU firmware config module
Dec 13 13:20:04.431253 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:20:04.446514 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:20:04.454112 ignition[690]: parsing config with SHA512: a021dceaac270f8ef0fb34f7b35a38e918aaaedab01ae46d164720aa6a10b35fc2f9f9e55514de5339bc9e6793171a9c57890f1ad180a53d7c70d6fe3365c2ae
Dec 13 13:20:04.458199 unknown[690]: fetched base config from "system"
Dec 13 13:20:04.458216 unknown[690]: fetched user config from "qemu"
Dec 13 13:20:04.460803 ignition[690]: fetch-offline: fetch-offline passed
Dec 13 13:20:04.460899 ignition[690]: Ignition finished successfully
Dec 13 13:20:04.464797 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:20:04.470515 systemd-networkd[781]: lo: Link UP
Dec 13 13:20:04.470526 systemd-networkd[781]: lo: Gained carrier
Dec 13 13:20:04.472097 systemd-networkd[781]: Enumeration completed
Dec 13 13:20:04.472585 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:20:04.472590 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:20:04.473303 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:20:04.474488 systemd-networkd[781]: eth0: Link UP
Dec 13 13:20:04.474497 systemd-networkd[781]: eth0: Gained carrier
Dec 13 13:20:04.474510 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:20:04.483569 systemd[1]: Reached target network.target - Network.
Dec 13 13:20:04.485659 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:20:04.498542 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
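
Ignition logs the SHA512 of the config it is about to parse (the "parsing config with SHA512: ..." line above). A minimal sketch of the same fingerprinting; the local file name is hypothetical, and the exact byte stream Ignition hashes (the rendered config) is an assumption:

    import hashlib

    with open("user.ign", "rb") as f:     # hypothetical config file
        digest = hashlib.sha512(f.read()).hexdigest()
    print(f"parsing config with SHA512: {digest}")
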
Dec 13 13:20:04.503415 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:20:04.512553 ignition[784]: Ignition 2.20.0
Dec 13 13:20:04.512566 ignition[784]: Stage: kargs
Dec 13 13:20:04.512763 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:20:04.512790 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:20:04.513824 ignition[784]: kargs: kargs passed
Dec 13 13:20:04.513876 ignition[784]: Ignition finished successfully
Dec 13 13:20:04.518632 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:20:04.528497 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:20:04.541770 ignition[794]: Ignition 2.20.0
Dec 13 13:20:04.541795 ignition[794]: Stage: disks
Dec 13 13:20:04.541971 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:20:04.541983 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:20:04.545885 ignition[794]: disks: disks passed
Dec 13 13:20:04.545936 ignition[794]: Ignition finished successfully
Dec 13 13:20:04.548823 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:20:04.550920 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:20:04.551383 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:20:04.551755 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:20:04.552179 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:20:04.552542 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:20:04.567535 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:20:04.578220 systemd-resolved[228]: Detected conflict on linux IN A 10.0.0.28
Dec 13 13:20:04.578233 systemd-resolved[228]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Dec 13 13:20:04.581824 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 13:20:04.588148 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:20:04.600529 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:20:04.689393 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 13:20:04.690074 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:20:04.692729 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:20:04.707580 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:20:04.711102 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:20:04.712513 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:20:04.712574 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:20:04.712601 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:20:04.720642 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:20:04.728342 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
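
The DHCPv4 lease at the top of this stretch (10.0.0.28/16 with gateway 10.0.0.1) can be sanity-checked with Python's ipaddress module:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.28/16")
    gateway = ipaddress.ip_address("10.0.0.1")
    print(iface.network)                  # 10.0.0.0/16
    print(gateway in iface.network)       # True: the gateway is on-link
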
Dec 13 13:20:04.737488 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813)
Dec 13 13:20:04.737524 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:20:04.737538 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:20:04.737565 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:20:04.747124 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:20:04.750334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:20:04.781630 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:20:04.787277 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:20:04.791202 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:20:04.796614 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:20:04.885074 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:20:04.897450 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:20:04.900686 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:20:04.905374 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:20:04.925765 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:20:04.928104 ignition[925]: INFO : Ignition 2.20.0
Dec 13 13:20:04.928104 ignition[925]: INFO : Stage: mount
Dec 13 13:20:04.929715 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:20:04.929715 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:20:04.929715 ignition[925]: INFO : mount: mount passed
Dec 13 13:20:04.929715 ignition[925]: INFO : Ignition finished successfully
Dec 13 13:20:04.935245 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:20:04.946509 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:20:05.268939 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:20:05.290601 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:20:05.297380 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941)
Dec 13 13:20:05.299429 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:20:05.299455 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:20:05.299469 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:20:05.302379 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:20:05.303905 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:20:05.322067 ignition[958]: INFO : Ignition 2.20.0 Dec 13 13:20:05.322067 ignition[958]: INFO : Stage: files Dec 13 13:20:05.324101 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:20:05.324101 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:20:05.324101 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:20:05.324101 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:20:05.324101 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:20:05.331416 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:20:05.331416 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:20:05.331416 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:20:05.331416 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:20:05.331416 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 13:20:05.326313 unknown[958]: wrote ssh authorized keys file for user: core Dec 13 13:20:05.363334 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:20:05.447156 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:20:05.449845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:20:05.449845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 13:20:05.745798 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 13:20:05.880909 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:20:05.883002 ignition[958]: 
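The createFiles operations above are driven by an Ignition config. A minimal sketch of the storage section that would produce op(3), the helm download; the spec version and file mode are assumptions, not read from this log:

    # Hypothetical Ignition fragment: fetch a tarball over HTTPS and write it
    # under /opt, matching op(3) above. Mode 420 is decimal for 0644.
    cat <<'EOF' > config.ign
    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "mode": 420,
            "contents": {
              "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
            }
          }
        ]
      }
    }
    EOF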
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:20:05.883002 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 13:20:05.938501 systemd-networkd[781]: eth0: Gained IPv6LL Dec 13 13:20:06.342433 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 13:20:06.746420 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:20:06.746420 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 13:20:06.750663 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:20:06.753233 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:20:06.753233 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 13:20:06.753233 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 13:20:06.758413 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:20:06.758413 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:20:06.762646 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 13:20:06.762646 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 13:20:06.787453 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:20:06.792924 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:20:06.794677 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 13:20:06.794677 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:20:06.797484 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:20:06.798949 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:20:06.800752 
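The preset operations above (op(10) disabling coreos-metadata.service, op(12) enabling prepare-helm.service) map onto ordinary systemd preset files. A sketch of the equivalent manual steps; the preset file name is hypothetical:

    # Presets decide enable/disable state when units are first installed.
    cat <<'EOF' > /etc/systemd/system-preset/20-ignition.preset
    enable prepare-helm.service
    disable coreos-metadata.service
    EOF
    # Apply the presets, creating/removing the enablement symlinks the log shows.
    systemctl preset prepare-helm.service coreos-metadata.service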
ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:20:06.802466 ignition[958]: INFO : files: files passed Dec 13 13:20:06.803248 ignition[958]: INFO : Ignition finished successfully Dec 13 13:20:06.806601 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:20:06.814625 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:20:06.815770 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:20:06.824366 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:20:06.824500 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:20:06.830257 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 13:20:06.834710 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:20:06.834710 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:20:06.839476 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:20:06.843119 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:20:06.843741 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:20:06.854685 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:20:06.886422 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:20:06.886562 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:20:06.889042 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:20:06.891178 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:20:06.891621 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:20:06.892491 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:20:06.912740 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:20:06.926553 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:20:06.938981 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:20:06.940349 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:20:06.942680 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:20:06.944766 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:20:06.944915 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:20:06.947119 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:20:06.948903 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:20:06.951176 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:20:06.953531 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:20:06.955733 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:20:06.957966 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Dec 13 13:20:06.960260 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:20:06.962761 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:20:06.964971 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:20:06.967253 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:20:06.969120 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:20:06.969302 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:20:06.971472 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:20:06.973157 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:20:06.975283 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:20:06.975449 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:20:06.977604 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:20:06.977762 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:20:06.979961 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:20:06.980112 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:20:06.982152 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:20:06.983901 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:20:06.987420 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:20:06.989627 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:20:06.991653 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:20:06.993432 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:20:06.993552 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:20:06.995473 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:20:06.995583 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:20:06.997937 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:20:06.998080 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:20:07.000473 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:20:07.000606 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:20:07.022557 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:20:07.023697 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:20:07.023851 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:20:07.029129 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:20:07.031283 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:20:07.032664 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:20:07.036601 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:20:07.037975 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 13 13:20:07.040650 ignition[1014]: INFO : Ignition 2.20.0 Dec 13 13:20:07.040650 ignition[1014]: INFO : Stage: umount Dec 13 13:20:07.040650 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:20:07.040650 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:20:07.046755 ignition[1014]: INFO : umount: umount passed Dec 13 13:20:07.046755 ignition[1014]: INFO : Ignition finished successfully Dec 13 13:20:07.044161 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:20:07.044317 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:20:07.047763 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:20:07.047900 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:20:07.052293 systemd[1]: Stopped target network.target - Network. Dec 13 13:20:07.057740 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:20:07.057815 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:20:07.060248 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:20:07.060309 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:20:07.062787 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:20:07.062847 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:20:07.065065 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:20:07.065130 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:20:07.067658 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:20:07.069952 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:20:07.071404 systemd-networkd[781]: eth0: DHCPv6 lease lost Dec 13 13:20:07.073532 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:20:07.075641 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:20:07.075819 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:20:07.079714 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:20:07.079928 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:20:07.082491 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:20:07.082560 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:20:07.092544 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:20:07.093650 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:20:07.094956 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:20:07.097591 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:20:07.099345 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:20:07.101941 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:20:07.103169 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:20:07.105679 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:20:07.106913 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:20:07.111086 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:20:07.122914 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Dec 13 13:20:07.123090 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:20:07.124735 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:20:07.124803 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:20:07.127428 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:20:07.127479 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:20:07.127809 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:20:07.127870 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:20:07.128823 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:20:07.128871 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:20:07.136336 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:20:07.136397 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:20:07.145130 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:20:07.145857 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:20:07.145907 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:20:07.146282 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:20:07.146325 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:20:07.147163 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:20:07.147277 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:20:07.163082 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:20:07.164440 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:20:07.259326 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:20:07.259478 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:20:07.261487 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:20:07.262561 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:20:07.262613 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:20:07.278588 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:20:07.288518 systemd[1]: Switching root. Dec 13 13:20:07.319173 systemd-journald[193]: Journal stopped Dec 13 13:20:08.554947 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Dec 13 13:20:08.555003 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:20:08.555023 kernel: SELinux: policy capability open_perms=1 Dec 13 13:20:08.555035 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:20:08.555050 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:20:08.555062 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:20:08.555073 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:20:08.555086 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:20:08.555097 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:20:08.555114 kernel: audit: type=1403 audit(1734096007.805:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:20:08.555134 systemd[1]: Successfully loaded SELinux policy in 38.910ms. 
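After the pivot, SELinux policy is loaded with the capabilities listed above (the audit type=1403 record is the policy-load event). If the SELinux userland tools are present, the same state can be checked by hand; these are standard tools, not something this log runs:

    # Summarize SELinux state; the loaded policy and mode correspond to the
    # audit(1403) policy-load event in the log.
    sestatus
    getenforce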
Dec 13 13:20:08.555150 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.546ms. Dec 13 13:20:08.555163 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:20:08.555175 systemd[1]: Detected virtualization kvm. Dec 13 13:20:08.555188 systemd[1]: Detected architecture x86-64. Dec 13 13:20:08.555202 systemd[1]: Detected first boot. Dec 13 13:20:08.555214 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:20:08.555226 zram_generator::config[1060]: No configuration found. Dec 13 13:20:08.555239 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:20:08.555251 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:20:08.555264 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:20:08.555276 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:20:08.555288 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:20:08.555300 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:20:08.555315 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:20:08.555329 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:20:08.555341 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:20:08.555440 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:20:08.555453 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:20:08.555465 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:20:08.555478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:20:08.555491 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:20:08.555506 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:20:08.555518 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:20:08.555530 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 13:20:08.555543 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:20:08.555555 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 13:20:08.555567 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:20:08.555579 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:20:08.555591 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:20:08.555604 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:20:08.555618 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:20:08.555630 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:20:08.555643 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
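The virtualization and feature detection shown here can be reproduced with stock systemd tools:

    # Prints "kvm" on this platform, matching "Detected virtualization kvm."
    systemd-detect-virt
    # Prints the same +PAM +AUDIT +SELINUX ... feature string logged above.
    systemctl --version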
Dec 13 13:20:08.555655 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:20:08.555667 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:20:08.555680 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:20:08.555701 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:20:08.555717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:20:08.555732 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:20:08.555744 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:20:08.555756 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:20:08.555768 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:20:08.555785 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:20:08.555796 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:20:08.555809 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:20:08.555821 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:20:08.555833 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:20:08.555851 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 13:20:08.555864 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:20:08.555876 systemd[1]: Reached target machines.target - Containers. Dec 13 13:20:08.555888 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:20:08.555900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:20:08.555912 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:20:08.555924 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:20:08.555936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:20:08.555951 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:20:08.555964 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:20:08.555976 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 13:20:08.555989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:20:08.556001 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:20:08.556013 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:20:08.556025 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:20:08.556037 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:20:08.556049 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:20:08.556063 kernel: fuse: init (API version 7.39) Dec 13 13:20:08.556075 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:20:08.556087 kernel: loop: module loaded Dec 13 13:20:08.556098 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
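The modprobe@*.service entries starting here are instances of a single template unit that loads one kernel module per instance name; a manual equivalent for two of the modules this log loads:

    # Each instance expands to roughly "modprobe <instance>"; the kernel lines
    # "fuse: init" and "loop: module loaded" are the result.
    systemctl start modprobe@fuse.service modprobe@loop.service
    lsmod | grep -E '^(fuse|loop)'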
Dec 13 13:20:08.556111 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:20:08.556139 systemd-journald[1127]: Collecting audit messages is disabled. Dec 13 13:20:08.556161 systemd-journald[1127]: Journal started Dec 13 13:20:08.556186 systemd-journald[1127]: Runtime Journal (/run/log/journal/5664724e431b4242a294a4fb7d31b1f8) is 6.0M, max 48.3M, 42.3M free. Dec 13 13:20:08.302411 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:20:08.320446 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 13:20:08.320869 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:20:08.560544 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:20:08.564372 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:20:08.564404 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:20:08.566008 systemd[1]: Stopped verity-setup.service. Dec 13 13:20:08.570644 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:20:08.570680 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:20:08.572593 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:20:08.573840 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:20:08.575116 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:20:08.576283 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:20:08.577930 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:20:08.579526 kernel: ACPI: bus type drm_connector registered Dec 13 13:20:08.579900 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:20:08.581211 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:20:08.582870 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:20:08.584482 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:20:08.584656 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:20:08.586206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:20:08.586395 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:20:08.587947 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:20:08.588120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:20:08.589738 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:20:08.589911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:20:08.591450 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:20:08.591620 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:20:08.593095 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:20:08.593264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:20:08.594668 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:20:08.596080 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
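The runtime journal sizing reported above (6.0M used, 48.3M max) can be queried at any time:

    # Shows how much of /run/log/journal (and, after flushing, /var/log/journal)
    # the journal currently occupies.
    journalctl --disk-usage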
Dec 13 13:20:08.597769 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:20:08.614970 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:20:08.625427 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:20:08.628544 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:20:08.629778 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:20:08.629867 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:20:08.632042 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:20:08.634486 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:20:08.639241 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:20:08.640883 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:20:08.642612 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:20:08.659591 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:20:08.660964 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:20:08.662907 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:20:08.664149 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:20:08.670719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:20:08.673255 systemd-journald[1127]: Time spent on flushing to /var/log/journal/5664724e431b4242a294a4fb7d31b1f8 is 19.174ms for 953 entries. Dec 13 13:20:08.673255 systemd-journald[1127]: System Journal (/var/log/journal/5664724e431b4242a294a4fb7d31b1f8) is 8.0M, max 195.6M, 187.6M free. Dec 13 13:20:08.703653 systemd-journald[1127]: Received client request to flush runtime journal. Dec 13 13:20:08.676108 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:20:08.680503 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:20:08.683409 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:20:08.685003 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:20:08.686341 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:20:08.688078 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:20:08.690048 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:20:08.696908 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:20:08.710673 kernel: loop0: detected capacity change from 0 to 138184 Dec 13 13:20:08.711862 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:20:08.718234 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:20:08.720078 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
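The systemd-sysext merge that begins here can be inspected once the system is up; both verbs below are stock systemd-sysext subcommands. The log shortly reports merging the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extensions:

    # List installed extension images and show the merge state of /usr and /opt.
    systemd-sysext list
    systemd-sysext status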
Dec 13 13:20:08.722618 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:20:08.732853 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:20:08.733499 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:20:08.738228 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 13:20:08.740397 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:20:08.740604 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:20:08.747520 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:20:08.765391 kernel: loop1: detected capacity change from 0 to 141000 Dec 13 13:20:08.783490 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Dec 13 13:20:08.783509 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Dec 13 13:20:08.789148 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:20:08.809389 kernel: loop2: detected capacity change from 0 to 211296 Dec 13 13:20:08.849471 kernel: loop3: detected capacity change from 0 to 138184 Dec 13 13:20:08.861653 kernel: loop4: detected capacity change from 0 to 141000 Dec 13 13:20:08.877381 kernel: loop5: detected capacity change from 0 to 211296 Dec 13 13:20:08.899597 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 13:20:08.901149 (sd-merge)[1198]: Merged extensions into '/usr'. Dec 13 13:20:08.910256 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:20:08.910271 systemd[1]: Reloading... Dec 13 13:20:08.976378 zram_generator::config[1227]: No configuration found. Dec 13 13:20:09.019402 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:20:09.121558 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:20:09.186321 systemd[1]: Reloading finished in 275 ms. Dec 13 13:20:09.225637 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:20:09.227176 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:20:09.242546 systemd[1]: Starting ensure-sysext.service... Dec 13 13:20:09.244678 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:20:09.252591 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:20:09.252606 systemd[1]: Reloading... Dec 13 13:20:09.273462 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:20:09.273755 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:20:09.274891 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:20:09.275279 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Dec 13 13:20:09.278412 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. 
Dec 13 13:20:09.283878 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:20:09.285453 systemd-tmpfiles[1262]: Skipping /boot Dec 13 13:20:09.304224 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:20:09.304361 systemd-tmpfiles[1262]: Skipping /boot Dec 13 13:20:09.307382 zram_generator::config[1292]: No configuration found. Dec 13 13:20:09.411877 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:20:09.461035 systemd[1]: Reloading finished in 208 ms. Dec 13 13:20:09.479089 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:20:09.491071 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:20:09.501921 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:20:09.504687 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:20:09.507898 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:20:09.513234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:20:09.517326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:20:09.527438 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:20:09.532067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:20:09.532293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:20:09.534033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:20:09.539839 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:20:09.546742 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:20:09.548545 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:20:09.553930 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:20:09.555346 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:20:09.557427 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:20:09.559759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:20:09.560475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:20:09.563137 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:20:09.565309 systemd-udevd[1333]: Using default interface naming scheme 'v255'. Dec 13 13:20:09.573689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:20:09.574410 augenrules[1357]: No rules Dec 13 13:20:09.576139 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:20:09.576419 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:20:09.578339 systemd[1]: modprobe@loop.service: Deactivated successfully. 
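audit-rules.service runs augenrules, which compiles rule fragments from /etc/audit/rules.d; the "No rules" message above simply means that directory is empty. An illustrative example of adding one (the rule and file name are hypothetical):

    # Drop a rule fragment in place and recompile; augenrules merges all
    # /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and loads them.
    echo '-w /etc/passwd -p wa -k passwd_changes' > /etc/audit/rules.d/10-passwd.rules
    augenrules --load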
Dec 13 13:20:09.578590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:20:09.590739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:20:09.591025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:20:09.601794 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:20:09.605809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:20:09.612621 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:20:09.617247 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:20:09.619617 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:20:09.620882 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:20:09.623215 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:20:09.625677 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:20:09.627892 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:20:09.629727 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:20:09.632492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:20:09.632756 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:20:09.636895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:20:09.637364 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:20:09.639774 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:20:09.639993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:20:09.650608 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:20:09.657387 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1387) Dec 13 13:20:09.660389 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1387) Dec 13 13:20:09.671305 systemd[1]: Finished ensure-sysext.service. Dec 13 13:20:09.674502 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 13:20:09.674904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:20:09.686648 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:20:09.687879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:20:09.689087 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:20:09.691449 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:20:09.693499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:20:09.696698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 13 13:20:09.697949 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:20:09.700751 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:20:09.707289 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 13:20:09.708508 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:20:09.708546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:20:09.709147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:20:09.709396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:20:09.725915 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:20:09.726161 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:20:09.729593 augenrules[1403]: /sbin/augenrules: No change Dec 13 13:20:09.733865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:20:09.734121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:20:09.735683 systemd-resolved[1331]: Positive Trust Anchors: Dec 13 13:20:09.735926 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:20:09.735989 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:20:09.736066 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:20:09.736440 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:20:09.738675 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:20:09.738775 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:20:09.741691 systemd-resolved[1331]: Defaulting to hostname 'linux'. Dec 13 13:20:09.744289 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:20:09.745648 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:20:09.748371 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1376) Dec 13 13:20:09.751430 augenrules[1437]: No rules Dec 13 13:20:09.752970 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:20:09.753253 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:20:09.764897 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
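systemd-resolved's state, including the DNSSEC trust anchors and the negative trust anchors listed above, can be examined with resolvectl:

    # Global and per-link resolver configuration, including trust anchors.
    resolvectl status
    # Exercise the resolver once the network is up (example name).
    resolvectl query example.com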
Dec 13 13:20:09.773379 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 13:20:09.775559 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:20:09.780400 kernel: ACPI: button: Power Button [PWRF] Dec 13 13:20:09.790615 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:20:09.803705 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 13:20:09.804078 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 13:20:09.810796 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 13:20:09.814452 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 13:20:09.807090 systemd-networkd[1410]: lo: Link UP Dec 13 13:20:09.807095 systemd-networkd[1410]: lo: Gained carrier Dec 13 13:20:09.807111 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 13:20:09.808592 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:20:09.808755 systemd-networkd[1410]: Enumeration completed Dec 13 13:20:09.809169 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:20:09.809174 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:20:09.809872 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:20:09.809997 systemd-networkd[1410]: eth0: Link UP Dec 13 13:20:09.810001 systemd-networkd[1410]: eth0: Gained carrier Dec 13 13:20:09.810013 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:20:09.811216 systemd[1]: Reached target network.target - Network. Dec 13 13:20:09.821428 systemd-networkd[1410]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 13:20:09.822633 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection. Dec 13 13:20:09.823600 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:20:10.662582 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 13:20:10.662637 systemd-timesyncd[1416]: Initial clock synchronization to Fri 2024-12-13 13:20:10.662468 UTC. Dec 13 13:20:10.662684 systemd-resolved[1331]: Clock change detected. Flushing caches. Dec 13 13:20:10.698785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:20:10.749713 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 13:20:10.761914 kernel: kvm_amd: TSC scaling supported Dec 13 13:20:10.761953 kernel: kvm_amd: Nested Virtualization enabled Dec 13 13:20:10.761966 kernel: kvm_amd: Nested Paging enabled Dec 13 13:20:10.763170 kernel: kvm_amd: LBR virtualization supported Dec 13 13:20:10.763199 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 13:20:10.763877 kernel: kvm_amd: Virtual GIF supported Dec 13 13:20:10.784650 kernel: EDAC MC: Ver: 3.0.0 Dec 13 13:20:10.813851 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:20:10.837933 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
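The clock step logged here ("Initial clock synchronization", followed by resolved flushing its caches) came from systemd-timesyncd; its peer and offset can be checked with:

    # Shows the NTP server (10.0.0.1:123 in this log), stratum, and offset.
    timedatectl timesync-status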
Dec 13 13:20:10.839718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:20:10.847995 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:20:10.885882 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:20:10.888284 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:20:10.889467 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:20:10.890691 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:20:10.891975 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:20:10.893440 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:20:10.894640 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:20:10.895993 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:20:10.897354 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:20:10.897397 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:20:10.898340 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:20:10.900199 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:20:10.903178 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:20:10.913249 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:20:10.915690 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:20:10.917384 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:20:10.918594 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:20:10.919579 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:20:10.920585 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:20:10.920620 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:20:10.921733 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:20:10.923879 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:20:10.926742 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:20:10.931757 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:20:10.932961 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:20:10.933349 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:20:10.936704 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:20:10.938929 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 13:20:10.940438 jq[1467]: false Dec 13 13:20:10.940347 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:20:10.945732 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:20:10.957773 systemd[1]: Starting systemd-logind.service - User Login Management... 
Dec 13 13:20:10.959477 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:20:10.960247 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:20:10.962687 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:20:10.966964 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:20:10.969536 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:20:10.976592 extend-filesystems[1468]: Found loop3 Dec 13 13:20:10.976592 extend-filesystems[1468]: Found loop4 Dec 13 13:20:10.976592 extend-filesystems[1468]: Found loop5 Dec 13 13:20:10.976592 extend-filesystems[1468]: Found sr0 Dec 13 13:20:10.976592 extend-filesystems[1468]: Found vda Dec 13 13:20:10.976592 extend-filesystems[1468]: Found vda1 Dec 13 13:20:10.976592 extend-filesystems[1468]: Found vda2 Dec 13 13:20:10.976592 extend-filesystems[1468]: Found vda3 Dec 13 13:20:10.976592 extend-filesystems[1468]: Found usr Dec 13 13:20:10.976592 extend-filesystems[1468]: Found vda4 Dec 13 13:20:10.976592 extend-filesystems[1468]: Found vda6 Dec 13 13:20:10.976592 extend-filesystems[1468]: Found vda7 Dec 13 13:20:10.976592 extend-filesystems[1468]: Found vda9 Dec 13 13:20:10.976592 extend-filesystems[1468]: Checking size of /dev/vda9 Dec 13 13:20:11.028110 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 13:20:10.973104 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:20:10.997910 dbus-daemon[1466]: [system] SELinux support is enabled Dec 13 13:20:11.035036 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1385) Dec 13 13:20:11.035073 extend-filesystems[1468]: Resized partition /dev/vda9 Dec 13 13:20:10.973366 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:20:11.035830 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:20:11.036221 jq[1483]: true Dec 13 13:20:10.973785 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:20:11.036497 update_engine[1481]: I20241213 13:20:11.005152 1481 main.cc:92] Flatcar Update Engine starting Dec 13 13:20:11.036497 update_engine[1481]: I20241213 13:20:11.034555 1481 update_check_scheduler.cc:74] Next update check in 3m16s Dec 13 13:20:10.974041 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:20:11.044702 tar[1487]: linux-amd64/helm Dec 13 13:20:10.979161 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:20:10.979408 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:20:11.046386 jq[1490]: true Dec 13 13:20:11.000374 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:20:11.010323 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:20:11.013093 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:20:11.013120 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 13 13:20:11.013415 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:20:11.013432 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:20:11.023869 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:20:11.024916 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:20:11.031675 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:20:11.055693 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 13:20:11.094271 systemd-logind[1478]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 13:20:11.094296 systemd-logind[1478]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 13:20:11.095007 extend-filesystems[1499]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:20:11.095007 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 13:20:11.095007 extend-filesystems[1499]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 13:20:11.107170 extend-filesystems[1468]: Resized filesystem in /dev/vda9 Dec 13 13:20:11.097651 systemd-logind[1478]: New seat seat0. Dec 13 13:20:11.098568 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:20:11.098791 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:20:11.104101 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:20:11.116588 bash[1521]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:20:11.117810 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:20:11.118566 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:20:11.120690 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 13:20:11.229978 containerd[1500]: time="2024-12-13T13:20:11.229845441Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:20:11.253531 containerd[1500]: time="2024-12-13T13:20:11.253450036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:20:11.255344 containerd[1500]: time="2024-12-13T13:20:11.255287062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:20:11.255344 containerd[1500]: time="2024-12-13T13:20:11.255323891Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:20:11.255344 containerd[1500]: time="2024-12-13T13:20:11.255341855Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:20:11.255559 containerd[1500]: time="2024-12-13T13:20:11.255531831Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:20:11.255559 containerd[1500]: time="2024-12-13T13:20:11.255557088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Dec 13 13:20:11.255649 containerd[1500]: time="2024-12-13T13:20:11.255624324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:20:11.255649 containerd[1500]: time="2024-12-13T13:20:11.255643280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:20:11.255865 containerd[1500]: time="2024-12-13T13:20:11.255839528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:20:11.255865 containerd[1500]: time="2024-12-13T13:20:11.255855558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:20:11.255914 containerd[1500]: time="2024-12-13T13:20:11.255868102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:20:11.255914 containerd[1500]: time="2024-12-13T13:20:11.255878271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:20:11.255994 containerd[1500]: time="2024-12-13T13:20:11.255967879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:20:11.256225 containerd[1500]: time="2024-12-13T13:20:11.256205785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:20:11.256386 containerd[1500]: time="2024-12-13T13:20:11.256322063Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:20:11.256386 containerd[1500]: time="2024-12-13T13:20:11.256339917Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:20:11.256449 containerd[1500]: time="2024-12-13T13:20:11.256431729Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:20:11.256504 containerd[1500]: time="2024-12-13T13:20:11.256488796Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:20:11.262293 containerd[1500]: time="2024-12-13T13:20:11.262257657Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:20:11.262497 containerd[1500]: time="2024-12-13T13:20:11.262405975Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:20:11.262497 containerd[1500]: time="2024-12-13T13:20:11.262431833Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:20:11.262497 containerd[1500]: time="2024-12-13T13:20:11.262448635Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:20:11.262497 containerd[1500]: time="2024-12-13T13:20:11.262464344Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Dec 13 13:20:11.263118 containerd[1500]: time="2024-12-13T13:20:11.262770929Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:20:11.263494 containerd[1500]: time="2024-12-13T13:20:11.263469089Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:20:11.263697 containerd[1500]: time="2024-12-13T13:20:11.263680916Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:20:11.263769 containerd[1500]: time="2024-12-13T13:20:11.263750667Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:20:11.263843 containerd[1500]: time="2024-12-13T13:20:11.263826139Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:20:11.263902 containerd[1500]: time="2024-12-13T13:20:11.263885600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:20:11.263971 containerd[1500]: time="2024-12-13T13:20:11.263953828Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:20:11.264024 containerd[1500]: time="2024-12-13T13:20:11.264009232Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:20:11.264077 containerd[1500]: time="2024-12-13T13:20:11.264062181Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:20:11.264172 containerd[1500]: time="2024-12-13T13:20:11.264151018Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:20:11.264232 containerd[1500]: time="2024-12-13T13:20:11.264221069Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:20:11.264288 containerd[1500]: time="2024-12-13T13:20:11.264274520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:20:11.264340 containerd[1500]: time="2024-12-13T13:20:11.264329453Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:20:11.264535 containerd[1500]: time="2024-12-13T13:20:11.264404213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264535 containerd[1500]: time="2024-12-13T13:20:11.264429951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264535 containerd[1500]: time="2024-12-13T13:20:11.264455118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264535 containerd[1500]: time="2024-12-13T13:20:11.264475236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264697 containerd[1500]: time="2024-12-13T13:20:11.264631239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264731 containerd[1500]: time="2024-12-13T13:20:11.264710808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Dec 13 13:20:11.264755 containerd[1500]: time="2024-12-13T13:20:11.264732439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264755 containerd[1500]: time="2024-12-13T13:20:11.264751314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264792 containerd[1500]: time="2024-12-13T13:20:11.264771181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264831 containerd[1500]: time="2024-12-13T13:20:11.264803091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264831 containerd[1500]: time="2024-12-13T13:20:11.264820714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264875 containerd[1500]: time="2024-12-13T13:20:11.264837015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264875 containerd[1500]: time="2024-12-13T13:20:11.264850781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264875 containerd[1500]: time="2024-12-13T13:20:11.264869095Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:20:11.264928 containerd[1500]: time="2024-12-13T13:20:11.264901145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264928 containerd[1500]: time="2024-12-13T13:20:11.264919510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.264970 containerd[1500]: time="2024-12-13T13:20:11.264933957Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:20:11.265112 containerd[1500]: time="2024-12-13T13:20:11.264986185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:20:11.265234 containerd[1500]: time="2024-12-13T13:20:11.265012624Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:20:11.265234 containerd[1500]: time="2024-12-13T13:20:11.265214984Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:20:11.267528 containerd[1500]: time="2024-12-13T13:20:11.265362561Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:20:11.267528 containerd[1500]: time="2024-12-13T13:20:11.265385203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:20:11.267528 containerd[1500]: time="2024-12-13T13:20:11.265401744Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:20:11.267528 containerd[1500]: time="2024-12-13T13:20:11.265414227Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:20:11.267528 containerd[1500]: time="2024-12-13T13:20:11.265426180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.265716615Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.265769434Z" level=info msg="Connect containerd service" Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.265816422Z" level=info msg="using legacy CRI server" Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.265823766Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.265929574Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.266565637Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:20:11.267648 
containerd[1500]: time="2024-12-13T13:20:11.266866742Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.266911987Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.266955018Z" level=info msg="Start subscribing containerd event" Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.266981497Z" level=info msg="Start recovering state" Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.267033264Z" level=info msg="Start event monitor" Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.267044175Z" level=info msg="Start snapshots syncer" Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.267052991Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.267060906Z" level=info msg="Start streaming server" Dec 13 13:20:11.267648 containerd[1500]: time="2024-12-13T13:20:11.267110710Z" level=info msg="containerd successfully booted in 0.038222s" Dec 13 13:20:11.268292 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:20:11.367189 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:20:11.391947 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:20:11.404879 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:20:11.407345 systemd[1]: Started sshd@0-10.0.0.28:22-10.0.0.1:56810.service - OpenSSH per-connection server daemon (10.0.0.1:56810). Dec 13 13:20:11.411758 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:20:11.412047 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:20:11.414978 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:20:11.435566 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:20:11.442826 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:20:11.449378 tar[1487]: linux-amd64/LICENSE Dec 13 13:20:11.449478 tar[1487]: linux-amd64/README.md Dec 13 13:20:11.453002 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:20:11.454357 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:20:11.465802 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 13:20:11.476605 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 56810 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:20:11.478597 sshd-session[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:11.487355 systemd-logind[1478]: New session 1 of user core. Dec 13 13:20:11.488668 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:20:11.506760 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:20:11.519097 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:20:11.532897 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:20:11.536709 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:20:11.637821 systemd[1562]: Queued start job for default target default.target. Dec 13 13:20:11.653824 systemd[1562]: Created slice app.slice - User Application Slice. 
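
Once "containerd successfully booted" appears, the daemon is reachable on the socket it advertised. A quick way to confirm this from Go is the official containerd client; the socket path and expected version/revision (v1.7.23, 9b2ad77...) are taken from the log above:

```go
// Connect to the freshly started containerd and ask for its version.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// containerd is namespaced; "k8s.io" is where the CRI plugin keeps
	// kubelet's images and containers.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ver, err := client.Version(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.Version, ver.Revision) // e.g. v1.7.23 9b2ad7760328...
}
```
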
Dec 13 13:20:11.653851 systemd[1562]: Reached target paths.target - Paths. Dec 13 13:20:11.653866 systemd[1562]: Reached target timers.target - Timers. Dec 13 13:20:11.655413 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:20:11.667173 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:20:11.667299 systemd[1562]: Reached target sockets.target - Sockets. Dec 13 13:20:11.667320 systemd[1562]: Reached target basic.target - Basic System. Dec 13 13:20:11.667358 systemd[1562]: Reached target default.target - Main User Target. Dec 13 13:20:11.667390 systemd[1562]: Startup finished in 123ms. Dec 13 13:20:11.667990 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:20:11.670647 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:20:11.730916 systemd[1]: Started sshd@1-10.0.0.28:22-10.0.0.1:56820.service - OpenSSH per-connection server daemon (10.0.0.1:56820). Dec 13 13:20:11.767607 systemd-networkd[1410]: eth0: Gained IPv6LL Dec 13 13:20:11.770667 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:20:11.772479 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:20:11.780793 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 13:20:11.783530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:20:11.785649 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:20:11.805207 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 13:20:11.805455 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 13:20:11.807195 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:20:11.809833 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:20:11.813305 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 56820 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:20:11.814829 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:11.819201 systemd-logind[1478]: New session 2 of user core. Dec 13 13:20:11.829641 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:20:11.884980 sshd[1592]: Connection closed by 10.0.0.1 port 56820 Dec 13 13:20:11.885327 sshd-session[1573]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:11.900296 systemd[1]: sshd@1-10.0.0.28:22-10.0.0.1:56820.service: Deactivated successfully. Dec 13 13:20:11.902174 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:20:11.903643 systemd-logind[1478]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:20:11.910739 systemd[1]: Started sshd@2-10.0.0.28:22-10.0.0.1:56834.service - OpenSSH per-connection server daemon (10.0.0.1:56834). Dec 13 13:20:11.912911 systemd-logind[1478]: Removed session 2. Dec 13 13:20:11.946028 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 56834 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:20:11.947584 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:11.951695 systemd-logind[1478]: New session 3 of user core. Dec 13 13:20:11.969745 systemd[1]: Started session-3.scope - Session 3 of User core. 
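
"eth0: Gained IPv6LL" above is systemd-networkd observing the kernel's link-local address on the interface, which is part of what lets network-online.target complete. The same check is easy to reproduce from Go (only the interface name is taken from the log):

```go
// Report whether eth0 has an IPv6 link-local (fe80::/10) address.
package main

import (
	"fmt"
	"net"
)

func main() {
	iface, err := net.InterfaceByName("eth0")
	if err != nil {
		panic(err)
	}
	addrs, err := iface.Addrs()
	if err != nil {
		panic(err)
	}
	for _, a := range addrs {
		ipnet, ok := a.(*net.IPNet)
		// Link-local unicast and not an IPv4 address => IPv6LL.
		if ok && ipnet.IP.IsLinkLocalUnicast() && ipnet.IP.To4() == nil {
			fmt.Println("IPv6 link-local present:", ipnet.IP)
		}
	}
}
```
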
Dec 13 13:20:12.024968 sshd[1599]: Connection closed by 10.0.0.1 port 56834 Dec 13 13:20:12.025293 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:12.029709 systemd[1]: sshd@2-10.0.0.28:22-10.0.0.1:56834.service: Deactivated successfully. Dec 13 13:20:12.031562 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:20:12.032126 systemd-logind[1478]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:20:12.032956 systemd-logind[1478]: Removed session 3. Dec 13 13:20:12.384124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:20:12.385671 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:20:12.389592 systemd[1]: Startup finished in 986ms (kernel) + 6.018s (initrd) + 3.785s (userspace) = 10.790s. Dec 13 13:20:12.396048 agetty[1556]: failed to open credentials directory Dec 13 13:20:12.403853 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:20:12.411989 agetty[1555]: failed to open credentials directory Dec 13 13:20:12.985409 kubelet[1608]: E1213 13:20:12.985279 1608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:20:12.990438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:20:12.990729 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:20:12.991147 systemd[1]: kubelet.service: Consumed 1.083s CPU time. Dec 13 13:20:22.035814 systemd[1]: Started sshd@3-10.0.0.28:22-10.0.0.1:40442.service - OpenSSH per-connection server daemon (10.0.0.1:40442). Dec 13 13:20:22.074268 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 40442 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:20:22.075617 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:22.078976 systemd-logind[1478]: New session 4 of user core. Dec 13 13:20:22.089640 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:20:22.141241 sshd[1625]: Connection closed by 10.0.0.1 port 40442 Dec 13 13:20:22.141501 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:22.162840 systemd[1]: sshd@3-10.0.0.28:22-10.0.0.1:40442.service: Deactivated successfully. Dec 13 13:20:22.164325 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:20:22.165579 systemd-logind[1478]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:20:22.166752 systemd[1]: Started sshd@4-10.0.0.28:22-10.0.0.1:40458.service - OpenSSH per-connection server daemon (10.0.0.1:40458). Dec 13 13:20:22.167456 systemd-logind[1478]: Removed session 4. Dec 13 13:20:22.205012 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 40458 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:20:22.206421 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:22.209959 systemd-logind[1478]: New session 5 of user core. Dec 13 13:20:22.219637 systemd[1]: Started session-5.scope - Session 5 of User core. 
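
The burst of short sessions above (publickey accept for core from 10.0.0.1, one command, immediate close, repeated as sessions 2-5 and beyond) is the signature of an external provisioner driving the node over SSH rather than an interactive user. A sketch of that pattern with golang.org/x/crypto/ssh; the host, port, and user come from the log, while the key path and command are hypothetical:

```go
// One session per command, mirroring the session churn in the log.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func run(client *ssh.Client, cmd string) ([]byte, error) {
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	return sess.CombinedOutput(cmd)
}

func main() {
	key, err := os.ReadFile("/path/to/provisioner_key") // hypothetical path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a lab VM
	}
	client, err := ssh.Dial("tcp", "10.0.0.28:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	out, err := run(client, "sudo /usr/sbin/setenforce 1") // example command
	fmt.Printf("%s err=%v\n", out, err)
}
```
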
Dec 13 13:20:22.267428 sshd[1632]: Connection closed by 10.0.0.1 port 40458 Dec 13 13:20:22.267810 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:22.282045 systemd[1]: sshd@4-10.0.0.28:22-10.0.0.1:40458.service: Deactivated successfully. Dec 13 13:20:22.283749 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:20:22.285025 systemd-logind[1478]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:20:22.294757 systemd[1]: Started sshd@5-10.0.0.28:22-10.0.0.1:40464.service - OpenSSH per-connection server daemon (10.0.0.1:40464). Dec 13 13:20:22.295709 systemd-logind[1478]: Removed session 5. Dec 13 13:20:22.329910 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 40464 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:20:22.331147 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:22.335274 systemd-logind[1478]: New session 6 of user core. Dec 13 13:20:22.345646 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:20:22.398468 sshd[1639]: Connection closed by 10.0.0.1 port 40464 Dec 13 13:20:22.398807 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:22.410449 systemd[1]: sshd@5-10.0.0.28:22-10.0.0.1:40464.service: Deactivated successfully. Dec 13 13:20:22.412080 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:20:22.413474 systemd-logind[1478]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:20:22.414682 systemd[1]: Started sshd@6-10.0.0.28:22-10.0.0.1:40478.service - OpenSSH per-connection server daemon (10.0.0.1:40478). Dec 13 13:20:22.415389 systemd-logind[1478]: Removed session 6. Dec 13 13:20:22.463370 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 40478 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:20:22.464761 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:22.468551 systemd-logind[1478]: New session 7 of user core. Dec 13 13:20:22.478643 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:20:22.536086 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:20:22.536412 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:20:22.555463 sudo[1647]: pam_unix(sudo:session): session closed for user root Dec 13 13:20:22.556852 sshd[1646]: Connection closed by 10.0.0.1 port 40478 Dec 13 13:20:22.557212 sshd-session[1644]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:22.564037 systemd[1]: sshd@6-10.0.0.28:22-10.0.0.1:40478.service: Deactivated successfully. Dec 13 13:20:22.565700 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:20:22.567008 systemd-logind[1478]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:20:22.568286 systemd[1]: Started sshd@7-10.0.0.28:22-10.0.0.1:40482.service - OpenSSH per-connection server daemon (10.0.0.1:40482). Dec 13 13:20:22.568982 systemd-logind[1478]: Removed session 7. Dec 13 13:20:22.607751 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 40482 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:20:22.609124 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:22.612859 systemd-logind[1478]: New session 8 of user core. 
Dec 13 13:20:22.622619 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:20:22.674805 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:20:22.675143 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:20:22.678589 sudo[1656]: pam_unix(sudo:session): session closed for user root Dec 13 13:20:22.684595 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:20:22.684916 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:20:22.705802 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:20:22.732442 augenrules[1678]: No rules Dec 13 13:20:22.734108 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:20:22.734331 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:20:22.735527 sudo[1655]: pam_unix(sudo:session): session closed for user root Dec 13 13:20:22.736856 sshd[1654]: Connection closed by 10.0.0.1 port 40482 Dec 13 13:20:22.737167 sshd-session[1652]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:22.746958 systemd[1]: sshd@7-10.0.0.28:22-10.0.0.1:40482.service: Deactivated successfully. Dec 13 13:20:22.748456 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:20:22.749876 systemd-logind[1478]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:20:22.759762 systemd[1]: Started sshd@8-10.0.0.28:22-10.0.0.1:40492.service - OpenSSH per-connection server daemon (10.0.0.1:40492). Dec 13 13:20:22.760689 systemd-logind[1478]: Removed session 8. Dec 13 13:20:22.794925 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 40492 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:20:22.796167 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:22.799787 systemd-logind[1478]: New session 9 of user core. Dec 13 13:20:22.816636 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:20:22.869115 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:20:22.869436 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:20:23.122331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:20:23.132712 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 13:20:23.132863 (dockerd)[1709]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 13:20:23.133989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:20:23.281073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
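
The kubelet start just logged will fail again moments later (as the next entries show) for the same reason as the first attempt: /var/lib/kubelet/config.yaml does not exist yet, and systemd's restart logic keeps re-launching the unit until it does. On a kubeadm-managed node that file is normally written by `kubeadm init` or `kubeadm join`. A minimal sketch of unblocking the loop by hand, with illustrative contents only:

```go
// Write a bare-bones KubeletConfiguration so the next scheduled
// restart of kubelet.service can load a config file. The real file
// kubeadm generates is much fuller; these values are illustrative.
package main

import "os"

const minimalKubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml",
		[]byte(minimalKubeletConfig), 0o644); err != nil {
		panic(err)
	}
}
```
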
Dec 13 13:20:23.285309 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:20:23.395935 kubelet[1724]: E1213 13:20:23.395200 1724 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:20:23.404567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:20:23.404772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:20:23.502118 dockerd[1709]: time="2024-12-13T13:20:23.502033167Z" level=info msg="Starting up" Dec 13 13:20:23.667696 dockerd[1709]: time="2024-12-13T13:20:23.667578793Z" level=info msg="Loading containers: start." Dec 13 13:20:23.833540 kernel: Initializing XFRM netlink socket Dec 13 13:20:23.909179 systemd-networkd[1410]: docker0: Link UP Dec 13 13:20:23.950882 dockerd[1709]: time="2024-12-13T13:20:23.950786039Z" level=info msg="Loading containers: done." Dec 13 13:20:23.964156 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck233481560-merged.mount: Deactivated successfully. Dec 13 13:20:23.965988 dockerd[1709]: time="2024-12-13T13:20:23.965947387Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:20:23.966050 dockerd[1709]: time="2024-12-13T13:20:23.966038839Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:20:23.966165 dockerd[1709]: time="2024-12-13T13:20:23.966141522Z" level=info msg="Daemon has completed initialization" Dec 13 13:20:24.000125 dockerd[1709]: time="2024-12-13T13:20:24.000048972Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:20:24.000310 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:20:24.783995 containerd[1500]: time="2024-12-13T13:20:24.783959084Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 13:20:25.397050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099419666.mount: Deactivated successfully. 
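
After "API listen on /run/docker.sock", the daemon can be probed with the official Docker Go SDK. The socket path and the expected version and storage driver (27.3.1, overlay2) come from the log above; the rest is a sketch:

```go
// Ping the freshly started dockerd over its Unix socket.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	info, err := cli.Info(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(info.ServerVersion, info.Driver) // e.g. 27.3.1 overlay2
}
```
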
Dec 13 13:20:26.452038 containerd[1500]: time="2024-12-13T13:20:26.451978641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:26.452720 containerd[1500]: time="2024-12-13T13:20:26.452692851Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Dec 13 13:20:26.453697 containerd[1500]: time="2024-12-13T13:20:26.453664093Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:26.456288 containerd[1500]: time="2024-12-13T13:20:26.456234294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:26.457312 containerd[1500]: time="2024-12-13T13:20:26.457282600Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.673287139s" Dec 13 13:20:26.457359 containerd[1500]: time="2024-12-13T13:20:26.457314240Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 13:20:26.480103 containerd[1500]: time="2024-12-13T13:20:26.480058561Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 13:20:28.989174 containerd[1500]: time="2024-12-13T13:20:28.989108622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:28.990651 containerd[1500]: time="2024-12-13T13:20:28.990602484Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Dec 13 13:20:28.991963 containerd[1500]: time="2024-12-13T13:20:28.991897794Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:28.994713 containerd[1500]: time="2024-12-13T13:20:28.994682016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:28.995941 containerd[1500]: time="2024-12-13T13:20:28.995904890Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.515810261s" Dec 13 13:20:28.995941 containerd[1500]: time="2024-12-13T13:20:28.995934425Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 
13:20:29.022883 containerd[1500]: time="2024-12-13T13:20:29.022840483Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 13:20:30.370239 containerd[1500]: time="2024-12-13T13:20:30.370181490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:30.370966 containerd[1500]: time="2024-12-13T13:20:30.370925506Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Dec 13 13:20:30.372175 containerd[1500]: time="2024-12-13T13:20:30.372138781Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:30.375109 containerd[1500]: time="2024-12-13T13:20:30.375049160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:30.376493 containerd[1500]: time="2024-12-13T13:20:30.376452653Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.353566504s" Dec 13 13:20:30.376493 containerd[1500]: time="2024-12-13T13:20:30.376491917Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 13:20:30.404235 containerd[1500]: time="2024-12-13T13:20:30.404175593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 13:20:31.578272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192443487.mount: Deactivated successfully. 
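
Each "PullImage ..." / "Pulled ... in Ns" pair above is containerd's CRI plugin fetching a control-plane image (the intervening ImageCreate events are the content store filling in). The same pull can be issued directly through the Go client; the image reference is the kube-proxy pull that starts here:

```go
// Pull an image into containerd's CRI ("k8s.io") namespace so kubelet
// can use it too.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx,
		"registry.k8s.io/kube-proxy:v1.29.12",
		containerd.WithPullUnpack) // unpack layers into the snapshotter
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name())
}
```
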
Dec 13 13:20:32.965397 containerd[1500]: time="2024-12-13T13:20:32.965320509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:32.966208 containerd[1500]: time="2024-12-13T13:20:32.966167007Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 13:20:32.967592 containerd[1500]: time="2024-12-13T13:20:32.967529633Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:32.969442 containerd[1500]: time="2024-12-13T13:20:32.969410731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:32.970088 containerd[1500]: time="2024-12-13T13:20:32.970044119Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.565828832s" Dec 13 13:20:32.970088 containerd[1500]: time="2024-12-13T13:20:32.970075558Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 13:20:32.997588 containerd[1500]: time="2024-12-13T13:20:32.997545233Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:20:33.655277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 13:20:33.816759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:20:33.956929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:20:33.961561 (kubelet)[2034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:20:34.021131 kubelet[2034]: E1213 13:20:34.021003 2034 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:20:34.026650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:20:34.026886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:20:34.337016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2175703924.mount: Deactivated successfully. 
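
"Scheduled restart job, restart counter is at 2" above is systemd's Restart= handling re-queueing kubelet after each failed start. That counter is exposed as the NRestarts service property over systemd's D-Bus API; a sketch of reading it with go-systemd (unit name from this log):

```go
// Read kubelet.service's restart counter from systemd over D-Bus.
package main

import (
	"context"
	"fmt"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewSystemdConnectionContext(ctx)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	props, err := conn.GetUnitTypePropertiesContext(ctx, "kubelet.service", "Service")
	if err != nil {
		panic(err)
	}
	// NRestarts mirrors the "restart counter" in the log above.
	fmt.Println("NRestarts =", props["NRestarts"])
}
```
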
Dec 13 13:20:36.069853 containerd[1500]: time="2024-12-13T13:20:36.069771682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:36.070938 containerd[1500]: time="2024-12-13T13:20:36.070889860Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 13:20:36.072369 containerd[1500]: time="2024-12-13T13:20:36.072338176Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:36.132838 containerd[1500]: time="2024-12-13T13:20:36.132771893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:36.133949 containerd[1500]: time="2024-12-13T13:20:36.133897774Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.136311574s" Dec 13 13:20:36.133949 containerd[1500]: time="2024-12-13T13:20:36.133936978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 13:20:36.153769 containerd[1500]: time="2024-12-13T13:20:36.153705899Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:20:36.914204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747888146.mount: Deactivated successfully. 
Dec 13 13:20:36.920530 containerd[1500]: time="2024-12-13T13:20:36.920472153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:36.921167 containerd[1500]: time="2024-12-13T13:20:36.921125458Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 13:20:36.922293 containerd[1500]: time="2024-12-13T13:20:36.922261069Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:36.924458 containerd[1500]: time="2024-12-13T13:20:36.924427172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:36.925339 containerd[1500]: time="2024-12-13T13:20:36.925309977Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 771.558032ms" Dec 13 13:20:36.925375 containerd[1500]: time="2024-12-13T13:20:36.925342198Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 13:20:36.946492 containerd[1500]: time="2024-12-13T13:20:36.946455991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 13:20:37.890634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353033607.mount: Deactivated successfully. Dec 13 13:20:39.886668 containerd[1500]: time="2024-12-13T13:20:39.886603227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:39.887348 containerd[1500]: time="2024-12-13T13:20:39.887294574Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Dec 13 13:20:39.888771 containerd[1500]: time="2024-12-13T13:20:39.888735336Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:39.894232 containerd[1500]: time="2024-12-13T13:20:39.894198744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:39.895949 containerd[1500]: time="2024-12-13T13:20:39.895916636Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.949427884s" Dec 13 13:20:39.895949 containerd[1500]: time="2024-12-13T13:20:39.895942946Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 13:20:42.496853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
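
With the etcd pull finished, every image the control plane needs (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd) is now cached locally, so the kubelet restart that follows can start its static pods without pulling over the network. A sketch that lists what containerd is holding at this point; image names would match the ones pulled above:

```go
// List the images in containerd's CRI namespace.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	imgs, err := client.ImageService().List(ctx)
	if err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println(img.Name) // e.g. registry.k8s.io/etcd:3.5.10-0
	}
}
```
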
Dec 13 13:20:42.508872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:20:42.529403 systemd[1]: Reloading requested from client PID 2231 ('systemctl') (unit session-9.scope)... Dec 13 13:20:42.529423 systemd[1]: Reloading... Dec 13 13:20:42.619543 zram_generator::config[2273]: No configuration found. Dec 13 13:20:43.092206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:20:43.171976 systemd[1]: Reloading finished in 642 ms. Dec 13 13:20:43.235076 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 13:20:43.235175 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 13:20:43.235456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:20:43.246022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:20:43.398679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:20:43.404330 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:20:43.453777 kubelet[2318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:20:43.453777 kubelet[2318]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:20:43.453777 kubelet[2318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:20:43.454122 kubelet[2318]: I1213 13:20:43.453874 2318 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:20:43.867826 kubelet[2318]: I1213 13:20:43.867693 2318 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:20:43.867826 kubelet[2318]: I1213 13:20:43.867728 2318 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:20:43.867976 kubelet[2318]: I1213 13:20:43.867961 2318 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:20:43.911949 kubelet[2318]: I1213 13:20:43.911895 2318 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:20:43.916223 kubelet[2318]: E1213 13:20:43.916185 2318 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:43.936935 kubelet[2318]: I1213 13:20:43.936884 2318 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:20:43.939798 kubelet[2318]: I1213 13:20:43.939765 2318 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:20:43.940029 kubelet[2318]: I1213 13:20:43.940002 2318 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:20:43.940158 kubelet[2318]: I1213 13:20:43.940068 2318 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:20:43.940158 kubelet[2318]: I1213 13:20:43.940082 2318 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:20:43.940270 kubelet[2318]: I1213 13:20:43.940248 2318 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:20:43.940422 kubelet[2318]: I1213 13:20:43.940401 2318 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:20:43.940422 kubelet[2318]: I1213 13:20:43.940422 2318 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:20:43.940468 kubelet[2318]: I1213 13:20:43.940459 2318 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:20:43.940494 kubelet[2318]: I1213 13:20:43.940484 2318 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:20:43.940970 kubelet[2318]: W1213 13:20:43.940922 2318 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:43.941008 kubelet[2318]: E1213 13:20:43.940977 2318 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:43.941059 kubelet[2318]: W1213 13:20:43.941015 2318 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 
13:20:43.941084 kubelet[2318]: E1213 13:20:43.941064 2318 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:43.942328 kubelet[2318]: I1213 13:20:43.942294 2318 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:20:43.953359 kubelet[2318]: I1213 13:20:43.953300 2318 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:20:43.953470 kubelet[2318]: W1213 13:20:43.953420 2318 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:20:43.954186 kubelet[2318]: I1213 13:20:43.954158 2318 server.go:1256] "Started kubelet" Dec 13 13:20:43.954543 kubelet[2318]: I1213 13:20:43.954410 2318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:20:43.954594 kubelet[2318]: I1213 13:20:43.954543 2318 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:20:43.955803 kubelet[2318]: I1213 13:20:43.955419 2318 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:20:43.955803 kubelet[2318]: I1213 13:20:43.955591 2318 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:20:43.955803 kubelet[2318]: I1213 13:20:43.955668 2318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:20:43.957832 kubelet[2318]: E1213 13:20:43.957802 2318 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:20:43.957888 kubelet[2318]: I1213 13:20:43.957849 2318 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:20:43.957946 kubelet[2318]: I1213 13:20:43.957925 2318 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:20:43.958021 kubelet[2318]: I1213 13:20:43.958001 2318 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:20:43.958429 kubelet[2318]: W1213 13:20:43.958380 2318 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:43.958480 kubelet[2318]: E1213 13:20:43.958434 2318 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:43.959983 kubelet[2318]: I1213 13:20:43.959962 2318 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:20:43.960126 kubelet[2318]: I1213 13:20:43.960104 2318 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:20:43.960418 kubelet[2318]: E1213 13:20:43.960397 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: 
connect: connection refused" interval="200ms" Dec 13 13:20:43.960678 kubelet[2318]: E1213 13:20:43.960502 2318 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:20:43.961116 kubelet[2318]: I1213 13:20:43.961095 2318 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:20:43.967753 kubelet[2318]: E1213 13:20:43.967715 2318 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bf2a124bddd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:20:43.954134488 +0000 UTC m=+0.545200567,LastTimestamp:2024-12-13 13:20:43.954134488 +0000 UTC m=+0.545200567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:20:43.981828 kubelet[2318]: I1213 13:20:43.981785 2318 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:20:43.981828 kubelet[2318]: I1213 13:20:43.981815 2318 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:20:43.981828 kubelet[2318]: I1213 13:20:43.981834 2318 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:20:43.984140 kubelet[2318]: I1213 13:20:43.984096 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:20:43.985618 kubelet[2318]: I1213 13:20:43.985591 2318 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:20:43.985700 kubelet[2318]: I1213 13:20:43.985641 2318 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:20:43.985700 kubelet[2318]: I1213 13:20:43.985688 2318 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:20:43.985765 kubelet[2318]: E1213 13:20:43.985750 2318 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:20:43.986331 kubelet[2318]: W1213 13:20:43.986280 2318 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:43.986387 kubelet[2318]: E1213 13:20:43.986340 2318 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:44.059821 kubelet[2318]: I1213 13:20:44.059769 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:20:44.060288 kubelet[2318]: E1213 13:20:44.060268 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Dec 13 13:20:44.086701 kubelet[2318]: E1213 13:20:44.086600 2318 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:20:44.161748 kubelet[2318]: E1213 13:20:44.161602 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="400ms" Dec 13 13:20:44.262462 kubelet[2318]: I1213 13:20:44.262356 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:20:44.262775 kubelet[2318]: E1213 13:20:44.262746 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Dec 13 13:20:44.287004 kubelet[2318]: E1213 13:20:44.286920 2318 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:20:44.362862 kubelet[2318]: I1213 13:20:44.362784 2318 policy_none.go:49] "None policy: Start" Dec 13 13:20:44.363676 kubelet[2318]: I1213 13:20:44.363650 2318 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:20:44.363676 kubelet[2318]: I1213 13:20:44.363680 2318 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:20:44.427890 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:20:44.439407 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:20:44.442098 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 13:20:44.455431 kubelet[2318]: I1213 13:20:44.455391 2318 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:20:44.455840 kubelet[2318]: I1213 13:20:44.455745 2318 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:20:44.456630 kubelet[2318]: E1213 13:20:44.456612 2318 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:20:44.562083 kubelet[2318]: E1213 13:20:44.562026 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="800ms" Dec 13 13:20:44.664788 kubelet[2318]: I1213 13:20:44.664746 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:20:44.665234 kubelet[2318]: E1213 13:20:44.665205 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Dec 13 13:20:44.687498 kubelet[2318]: I1213 13:20:44.687361 2318 topology_manager.go:215] "Topology Admit Handler" podUID="9de50ba2c613dbc2b4afa0e6b9d002d0" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:20:44.688851 kubelet[2318]: I1213 13:20:44.688812 2318 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:20:44.689840 kubelet[2318]: I1213 13:20:44.689822 2318 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:20:44.695726 systemd[1]: Created slice kubepods-burstable-pod9de50ba2c613dbc2b4afa0e6b9d002d0.slice - libcontainer container kubepods-burstable-pod9de50ba2c613dbc2b4afa0e6b9d002d0.slice. Dec 13 13:20:44.716725 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 13:20:44.735849 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
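
The slice names being created here are derived mechanically from pod QoS class and UID: with CgroupDriver "systemd" (see the nodeConfig dump above), each pod gets its own slice nested under a per-QoS parent such as kubepods-burstable.slice, and any "-" in the pod UID is rewritten to "_" so the result is a legal systemd unit name (the kube-proxy and cilium slices near the end of the log show the rewrite). A minimal sketch of that naming rule, assuming a hypothetical podSliceName helper rather than the kubelet's real cgroup manager:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reconstructs the systemd slice names visible in the log.
// Hypothetical helper: the real mapping lives in kubelet's cgroup manager.
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_") // "-" separates nesting levels in slice names
	if qosClass == "guaranteed" {
		// Guaranteed pods are assumed to sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// UIDs taken from the log entries above and below.
	fmt.Println(podSliceName("burstable", "9de50ba2c613dbc2b4afa0e6b9d002d0"))    // kube-apiserver-localhost
	fmt.Println(podSliceName("burstable", "837f3459-b455-4ddf-a7db-4c5ec4e40f22")) // cilium-5fwjb
}
```
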
Dec 13 13:20:44.763112 kubelet[2318]: I1213 13:20:44.763018 2318 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9de50ba2c613dbc2b4afa0e6b9d002d0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9de50ba2c613dbc2b4afa0e6b9d002d0\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:20:44.763112 kubelet[2318]: I1213 13:20:44.763093 2318 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:20:44.763112 kubelet[2318]: I1213 13:20:44.763127 2318 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:20:44.763330 kubelet[2318]: I1213 13:20:44.763159 2318 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:20:44.763330 kubelet[2318]: I1213 13:20:44.763189 2318 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9de50ba2c613dbc2b4afa0e6b9d002d0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9de50ba2c613dbc2b4afa0e6b9d002d0\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:20:44.763330 kubelet[2318]: I1213 13:20:44.763221 2318 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9de50ba2c613dbc2b4afa0e6b9d002d0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9de50ba2c613dbc2b4afa0e6b9d002d0\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:20:44.763330 kubelet[2318]: I1213 13:20:44.763241 2318 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:20:44.763330 kubelet[2318]: I1213 13:20:44.763261 2318 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:20:44.763481 kubelet[2318]: I1213 13:20:44.763279 2318 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " 
pod="kube-system/kube-scheduler-localhost" Dec 13 13:20:44.981736 kubelet[2318]: W1213 13:20:44.981546 2318 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:44.981736 kubelet[2318]: E1213 13:20:44.981635 2318 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:45.014952 kubelet[2318]: E1213 13:20:45.014894 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:45.015763 containerd[1500]: time="2024-12-13T13:20:45.015717820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9de50ba2c613dbc2b4afa0e6b9d002d0,Namespace:kube-system,Attempt:0,}" Dec 13 13:20:45.019959 kubelet[2318]: E1213 13:20:45.019933 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:45.020338 containerd[1500]: time="2024-12-13T13:20:45.020309189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 13:20:45.038769 kubelet[2318]: E1213 13:20:45.038732 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:45.040249 containerd[1500]: time="2024-12-13T13:20:45.040206231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 13:20:45.362873 kubelet[2318]: E1213 13:20:45.362724 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="1.6s" Dec 13 13:20:45.457858 kubelet[2318]: W1213 13:20:45.457777 2318 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:45.457858 kubelet[2318]: E1213 13:20:45.457842 2318 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:45.467230 kubelet[2318]: I1213 13:20:45.467189 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:20:45.467577 kubelet[2318]: E1213 13:20:45.467555 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Dec 13 13:20:45.495304 kubelet[2318]: W1213 13:20:45.495212 2318 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:45.495304 kubelet[2318]: E1213 13:20:45.495288 2318 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:45.536857 kubelet[2318]: W1213 13:20:45.536769 2318 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:45.536857 kubelet[2318]: E1213 13:20:45.536845 2318 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:46.089042 kubelet[2318]: E1213 13:20:46.088988 2318 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:46.325838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1686727488.mount: Deactivated successfully. Dec 13 13:20:46.334171 containerd[1500]: time="2024-12-13T13:20:46.334104240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:20:46.336775 containerd[1500]: time="2024-12-13T13:20:46.336724630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 13:20:46.337794 containerd[1500]: time="2024-12-13T13:20:46.337756543Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:20:46.339678 containerd[1500]: time="2024-12-13T13:20:46.339565260Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:20:46.340473 containerd[1500]: time="2024-12-13T13:20:46.340425134Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:20:46.341632 containerd[1500]: time="2024-12-13T13:20:46.341583208Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:20:46.342570 containerd[1500]: time="2024-12-13T13:20:46.342533054Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:20:46.343470 containerd[1500]: time="2024-12-13T13:20:46.343431522Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:20:46.344351 containerd[1500]: time="2024-12-13T13:20:46.344317736Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.328512038s" Dec 13 13:20:46.347678 containerd[1500]: time="2024-12-13T13:20:46.347644155Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.327266926s" Dec 13 13:20:46.348367 containerd[1500]: time="2024-12-13T13:20:46.348342370Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.308024074s" Dec 13 13:20:46.514750 containerd[1500]: time="2024-12-13T13:20:46.514656534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:20:46.514750 containerd[1500]: time="2024-12-13T13:20:46.514712731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:20:46.514750 containerd[1500]: time="2024-12-13T13:20:46.514727028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:46.515318 containerd[1500]: time="2024-12-13T13:20:46.514798836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:46.515318 containerd[1500]: time="2024-12-13T13:20:46.515064414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:20:46.515318 containerd[1500]: time="2024-12-13T13:20:46.515120170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:20:46.515318 containerd[1500]: time="2024-12-13T13:20:46.515134808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:46.515318 containerd[1500]: time="2024-12-13T13:20:46.515220982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:46.516641 containerd[1500]: time="2024-12-13T13:20:46.514727168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:20:46.516641 containerd[1500]: time="2024-12-13T13:20:46.516149358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:20:46.516641 containerd[1500]: time="2024-12-13T13:20:46.516161260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:46.516641 containerd[1500]: time="2024-12-13T13:20:46.516409444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:46.537653 systemd[1]: Started cri-containerd-3840bb19aea1e39defe1e316bdf293b48b12f6db1269f254536a0b4f1f758a83.scope - libcontainer container 3840bb19aea1e39defe1e316bdf293b48b12f6db1269f254536a0b4f1f758a83. Dec 13 13:20:46.541842 systemd[1]: Started cri-containerd-9952aa4f9bfec4c11e87584250a27e65845296bfe7d7b5baf5f2cf0301854aa4.scope - libcontainer container 9952aa4f9bfec4c11e87584250a27e65845296bfe7d7b5baf5f2cf0301854aa4. Dec 13 13:20:46.544095 systemd[1]: Started cri-containerd-d7b81ad059639dc42182b067fd19700f7f95c1c062b3ae46f74ce88a2c51c02f.scope - libcontainer container d7b81ad059639dc42182b067fd19700f7f95c1c062b3ae46f74ce88a2c51c02f. Dec 13 13:20:46.574815 containerd[1500]: time="2024-12-13T13:20:46.574780563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"3840bb19aea1e39defe1e316bdf293b48b12f6db1269f254536a0b4f1f758a83\"" Dec 13 13:20:46.575538 kubelet[2318]: E1213 13:20:46.575521 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:46.579346 containerd[1500]: time="2024-12-13T13:20:46.579241732Z" level=info msg="CreateContainer within sandbox \"3840bb19aea1e39defe1e316bdf293b48b12f6db1269f254536a0b4f1f758a83\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:20:46.583147 containerd[1500]: time="2024-12-13T13:20:46.583115558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9952aa4f9bfec4c11e87584250a27e65845296bfe7d7b5baf5f2cf0301854aa4\"" Dec 13 13:20:46.584070 kubelet[2318]: E1213 13:20:46.584024 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:46.585949 containerd[1500]: time="2024-12-13T13:20:46.585918826Z" level=info msg="CreateContainer within sandbox \"9952aa4f9bfec4c11e87584250a27e65845296bfe7d7b5baf5f2cf0301854aa4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:20:46.587301 containerd[1500]: time="2024-12-13T13:20:46.587278305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9de50ba2c613dbc2b4afa0e6b9d002d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7b81ad059639dc42182b067fd19700f7f95c1c062b3ae46f74ce88a2c51c02f\"" Dec 13 13:20:46.587937 kubelet[2318]: E1213 13:20:46.587919 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:46.589553 containerd[1500]: time="2024-12-13T13:20:46.589471728Z" level=info msg="CreateContainer within sandbox \"d7b81ad059639dc42182b067fd19700f7f95c1c062b3ae46f74ce88a2c51c02f\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:20:46.727952 kubelet[2318]: E1213 13:20:46.727837 2318 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bf2a124bddd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:20:43.954134488 +0000 UTC m=+0.545200567,LastTimestamp:2024-12-13 13:20:43.954134488 +0000 UTC m=+0.545200567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:20:46.963539 kubelet[2318]: E1213 13:20:46.963478 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="3.2s" Dec 13 13:20:47.069025 kubelet[2318]: I1213 13:20:47.068938 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:20:47.069189 kubelet[2318]: E1213 13:20:47.069163 2318 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Dec 13 13:20:47.219173 kubelet[2318]: W1213 13:20:47.219151 2318 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:47.219244 kubelet[2318]: E1213 13:20:47.219181 2318 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Dec 13 13:20:47.386950 containerd[1500]: time="2024-12-13T13:20:47.386852759Z" level=info msg="CreateContainer within sandbox \"d7b81ad059639dc42182b067fd19700f7f95c1c062b3ae46f74ce88a2c51c02f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"51fce5adc3b372d7bb3db43b5b91ef641223f3d09113e1ed648b51cda3ba7663\"" Dec 13 13:20:47.387493 containerd[1500]: time="2024-12-13T13:20:47.387466692Z" level=info msg="StartContainer for \"51fce5adc3b372d7bb3db43b5b91ef641223f3d09113e1ed648b51cda3ba7663\"" Dec 13 13:20:47.400610 containerd[1500]: time="2024-12-13T13:20:47.400558249Z" level=info msg="CreateContainer within sandbox \"3840bb19aea1e39defe1e316bdf293b48b12f6db1269f254536a0b4f1f758a83\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5e7f47aebba138bd9d6222e58d7ada9f71bb1cae84ed817c12d7fb6640fc8d5d\"" Dec 13 13:20:47.401101 containerd[1500]: time="2024-12-13T13:20:47.401043746Z" level=info msg="StartContainer for \"5e7f47aebba138bd9d6222e58d7ada9f71bb1cae84ed817c12d7fb6640fc8d5d\"" Dec 13 13:20:47.405218 containerd[1500]: time="2024-12-13T13:20:47.405177881Z" level=info msg="CreateContainer within sandbox 
\"9952aa4f9bfec4c11e87584250a27e65845296bfe7d7b5baf5f2cf0301854aa4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b7b3eda8b47a84b188fe277c23a9d423e813ec0ba7a91c80cc798cce25b03ae3\"" Dec 13 13:20:47.406037 containerd[1500]: time="2024-12-13T13:20:47.405982577Z" level=info msg="StartContainer for \"b7b3eda8b47a84b188fe277c23a9d423e813ec0ba7a91c80cc798cce25b03ae3\"" Dec 13 13:20:47.418662 systemd[1]: Started cri-containerd-51fce5adc3b372d7bb3db43b5b91ef641223f3d09113e1ed648b51cda3ba7663.scope - libcontainer container 51fce5adc3b372d7bb3db43b5b91ef641223f3d09113e1ed648b51cda3ba7663. Dec 13 13:20:47.447691 systemd[1]: Started cri-containerd-5e7f47aebba138bd9d6222e58d7ada9f71bb1cae84ed817c12d7fb6640fc8d5d.scope - libcontainer container 5e7f47aebba138bd9d6222e58d7ada9f71bb1cae84ed817c12d7fb6640fc8d5d. Dec 13 13:20:47.449638 systemd[1]: Started cri-containerd-b7b3eda8b47a84b188fe277c23a9d423e813ec0ba7a91c80cc798cce25b03ae3.scope - libcontainer container b7b3eda8b47a84b188fe277c23a9d423e813ec0ba7a91c80cc798cce25b03ae3. Dec 13 13:20:47.474544 containerd[1500]: time="2024-12-13T13:20:47.473706412Z" level=info msg="StartContainer for \"51fce5adc3b372d7bb3db43b5b91ef641223f3d09113e1ed648b51cda3ba7663\" returns successfully" Dec 13 13:20:47.505000 containerd[1500]: time="2024-12-13T13:20:47.504947055Z" level=info msg="StartContainer for \"b7b3eda8b47a84b188fe277c23a9d423e813ec0ba7a91c80cc798cce25b03ae3\" returns successfully" Dec 13 13:20:47.505157 containerd[1500]: time="2024-12-13T13:20:47.505023201Z" level=info msg="StartContainer for \"5e7f47aebba138bd9d6222e58d7ada9f71bb1cae84ed817c12d7fb6640fc8d5d\" returns successfully" Dec 13 13:20:47.997602 kubelet[2318]: E1213 13:20:47.997439 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:48.000678 kubelet[2318]: E1213 13:20:48.000433 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:48.005601 kubelet[2318]: E1213 13:20:48.005543 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:48.821315 kubelet[2318]: E1213 13:20:48.821269 2318 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 13:20:48.944186 kubelet[2318]: I1213 13:20:48.944132 2318 apiserver.go:52] "Watching apiserver" Dec 13 13:20:48.959034 kubelet[2318]: I1213 13:20:48.959004 2318 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:20:49.005098 kubelet[2318]: E1213 13:20:49.005056 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:49.005622 kubelet[2318]: E1213 13:20:49.005339 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:49.005708 kubelet[2318]: E1213 13:20:49.005689 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:49.184401 kubelet[2318]: E1213 13:20:49.184277 2318 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 13:20:49.626164 kubelet[2318]: E1213 13:20:49.626130 2318 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 13:20:50.005692 kubelet[2318]: E1213 13:20:50.005662 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:50.005692 kubelet[2318]: E1213 13:20:50.005701 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:50.167015 kubelet[2318]: E1213 13:20:50.166957 2318 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 13:20:50.271234 kubelet[2318]: I1213 13:20:50.271106 2318 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:20:50.302266 kubelet[2318]: I1213 13:20:50.302230 2318 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:20:51.218896 systemd[1]: Reloading requested from client PID 2600 ('systemctl') (unit session-9.scope)... Dec 13 13:20:51.218912 systemd[1]: Reloading... Dec 13 13:20:51.229646 kubelet[2318]: E1213 13:20:51.229617 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:51.304580 zram_generator::config[2639]: No configuration found. Dec 13 13:20:51.368187 kubelet[2318]: E1213 13:20:51.368142 2318 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:51.417254 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:20:51.506668 systemd[1]: Reloading finished in 287 ms. Dec 13 13:20:51.552754 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:20:51.558465 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:20:51.558717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:20:51.565932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:20:51.710066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:20:51.715570 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:20:51.762833 kubelet[2684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:20:51.762833 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Dec 13 13:20:51.762833 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:20:51.762833 kubelet[2684]: I1213 13:20:51.762302 2684 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:20:51.767749 kubelet[2684]: I1213 13:20:51.767650 2684 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:20:51.767749 kubelet[2684]: I1213 13:20:51.767681 2684 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:20:51.767931 kubelet[2684]: I1213 13:20:51.767896 2684 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:20:51.770125 kubelet[2684]: I1213 13:20:51.770085 2684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:20:51.773729 kubelet[2684]: I1213 13:20:51.773579 2684 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:20:51.782711 kubelet[2684]: I1213 13:20:51.782650 2684 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:20:51.783017 kubelet[2684]: I1213 13:20:51.782982 2684 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:20:51.783231 kubelet[2684]: I1213 13:20:51.783206 2684 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:20:51.783330 kubelet[2684]: I1213 13:20:51.783242 2684 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:20:51.783330 kubelet[2684]: I1213 13:20:51.783255 2684 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:20:51.783330 kubelet[2684]: I1213 13:20:51.783294 2684 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:20:51.784053 kubelet[2684]: 
I1213 13:20:51.783407 2684 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:20:51.784053 kubelet[2684]: I1213 13:20:51.783429 2684 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:20:51.784053 kubelet[2684]: I1213 13:20:51.783477 2684 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:20:51.784053 kubelet[2684]: I1213 13:20:51.783502 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:20:51.784295 kubelet[2684]: I1213 13:20:51.784272 2684 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:20:51.784527 kubelet[2684]: I1213 13:20:51.784492 2684 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:20:51.787546 kubelet[2684]: I1213 13:20:51.785038 2684 server.go:1256] "Started kubelet" Dec 13 13:20:51.787714 kubelet[2684]: I1213 13:20:51.787691 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:20:51.792596 kubelet[2684]: I1213 13:20:51.792563 2684 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:20:51.795049 kubelet[2684]: I1213 13:20:51.794693 2684 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:20:51.796197 kubelet[2684]: I1213 13:20:51.796162 2684 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:20:51.796707 kubelet[2684]: I1213 13:20:51.796673 2684 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:20:51.797014 kubelet[2684]: I1213 13:20:51.796995 2684 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:20:51.807466 kubelet[2684]: I1213 13:20:51.797834 2684 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:20:51.807466 kubelet[2684]: I1213 13:20:51.798250 2684 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:20:51.807466 kubelet[2684]: I1213 13:20:51.801892 2684 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:20:51.807466 kubelet[2684]: I1213 13:20:51.802040 2684 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:20:51.807466 kubelet[2684]: E1213 13:20:51.803343 2684 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:20:51.807466 kubelet[2684]: I1213 13:20:51.803811 2684 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:20:51.806536 sudo[2701]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 13:20:51.807047 sudo[2701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 13:20:51.816567 kubelet[2684]: I1213 13:20:51.816533 2684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:20:51.818948 kubelet[2684]: I1213 13:20:51.818922 2684 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:20:51.819009 kubelet[2684]: I1213 13:20:51.818956 2684 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:20:51.819009 kubelet[2684]: I1213 13:20:51.818987 2684 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:20:51.819149 kubelet[2684]: E1213 13:20:51.819046 2684 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:20:51.844626 kubelet[2684]: I1213 13:20:51.844591 2684 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:20:51.844626 kubelet[2684]: I1213 13:20:51.844615 2684 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:20:51.844626 kubelet[2684]: I1213 13:20:51.844634 2684 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:20:51.844836 kubelet[2684]: I1213 13:20:51.844790 2684 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:20:51.844836 kubelet[2684]: I1213 13:20:51.844816 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:20:51.844836 kubelet[2684]: I1213 13:20:51.844825 2684 policy_none.go:49] "None policy: Start" Dec 13 13:20:51.845450 kubelet[2684]: I1213 13:20:51.845416 2684 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:20:51.845450 kubelet[2684]: I1213 13:20:51.845439 2684 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:20:51.845672 kubelet[2684]: I1213 13:20:51.845643 2684 state_mem.go:75] "Updated machine memory state" Dec 13 13:20:51.850076 kubelet[2684]: I1213 13:20:51.850035 2684 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:20:51.850343 kubelet[2684]: I1213 13:20:51.850316 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:20:51.897965 kubelet[2684]: I1213 13:20:51.897917 2684 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:20:51.904910 kubelet[2684]: I1213 13:20:51.904819 2684 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 13:20:51.904910 kubelet[2684]: I1213 13:20:51.904898 2684 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:20:51.919530 kubelet[2684]: I1213 13:20:51.919480 2684 topology_manager.go:215] "Topology Admit Handler" podUID="9de50ba2c613dbc2b4afa0e6b9d002d0" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:20:51.919680 kubelet[2684]: I1213 13:20:51.919584 2684 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:20:51.919680 kubelet[2684]: I1213 13:20:51.919618 2684 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:20:51.927309 kubelet[2684]: E1213 13:20:51.927279 2684 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 13:20:51.927781 kubelet[2684]: E1213 13:20:51.927755 2684 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 13:20:52.098148 kubelet[2684]: I1213 13:20:52.098012 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9de50ba2c613dbc2b4afa0e6b9d002d0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9de50ba2c613dbc2b4afa0e6b9d002d0\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:20:52.098148 kubelet[2684]: I1213 13:20:52.098063 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:20:52.098148 kubelet[2684]: I1213 13:20:52.098094 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:20:52.098344 kubelet[2684]: I1213 13:20:52.098177 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:20:52.098344 kubelet[2684]: I1213 13:20:52.098207 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:20:52.098344 kubelet[2684]: I1213 13:20:52.098228 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:20:52.098344 kubelet[2684]: I1213 13:20:52.098245 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9de50ba2c613dbc2b4afa0e6b9d002d0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9de50ba2c613dbc2b4afa0e6b9d002d0\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:20:52.098344 kubelet[2684]: I1213 13:20:52.098263 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:20:52.098502 kubelet[2684]: I1213 13:20:52.098280 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9de50ba2c613dbc2b4afa0e6b9d002d0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9de50ba2c613dbc2b4afa0e6b9d002d0\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:20:52.229199 kubelet[2684]: E1213 13:20:52.229169 2684 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:52.229326 kubelet[2684]: E1213 13:20:52.229258 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:52.229629 kubelet[2684]: E1213 13:20:52.229613 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:52.298032 sudo[2701]: pam_unix(sudo:session): session closed for user root Dec 13 13:20:52.784176 kubelet[2684]: I1213 13:20:52.784126 2684 apiserver.go:52] "Watching apiserver" Dec 13 13:20:52.796903 kubelet[2684]: I1213 13:20:52.796866 2684 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:20:52.927870 kubelet[2684]: E1213 13:20:52.927837 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:52.928003 kubelet[2684]: E1213 13:20:52.927904 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:52.933010 kubelet[2684]: E1213 13:20:52.932975 2684 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 13:20:52.933400 kubelet[2684]: E1213 13:20:52.933379 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:52.959277 kubelet[2684]: I1213 13:20:52.959206 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9588739739999999 podStartE2EDuration="1.958873974s" podCreationTimestamp="2024-12-13 13:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:20:52.9491256 +0000 UTC m=+1.228938216" watchObservedRunningTime="2024-12-13 13:20:52.958873974 +0000 UTC m=+1.238686590" Dec 13 13:20:52.976434 kubelet[2684]: I1213 13:20:52.976005 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.975947777 podStartE2EDuration="1.975947777s" podCreationTimestamp="2024-12-13 13:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:20:52.959385627 +0000 UTC m=+1.239198243" watchObservedRunningTime="2024-12-13 13:20:52.975947777 +0000 UTC m=+1.255760393" Dec 13 13:20:52.993105 kubelet[2684]: I1213 13:20:52.993059 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.993006191 podStartE2EDuration="1.993006191s" podCreationTimestamp="2024-12-13 13:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:20:52.976298804 +0000 UTC m=+1.256111420" watchObservedRunningTime="2024-12-13 13:20:52.993006191 +0000 
UTC m=+1.272818807" Dec 13 13:20:53.784671 sudo[1689]: pam_unix(sudo:session): session closed for user root Dec 13 13:20:53.787392 sshd[1688]: Connection closed by 10.0.0.1 port 40492 Dec 13 13:20:53.788145 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:53.793353 systemd[1]: sshd@8-10.0.0.28:22-10.0.0.1:40492.service: Deactivated successfully. Dec 13 13:20:53.795798 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:20:53.796062 systemd[1]: session-9.scope: Consumed 4.857s CPU time, 188.5M memory peak, 0B memory swap peak. Dec 13 13:20:53.797069 systemd-logind[1478]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:20:53.798365 systemd-logind[1478]: Removed session 9. Dec 13 13:20:53.930842 kubelet[2684]: E1213 13:20:53.930715 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:55.353759 kubelet[2684]: E1213 13:20:55.353723 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:56.603195 update_engine[1481]: I20241213 13:20:56.603080 1481 update_attempter.cc:509] Updating boot flags... Dec 13 13:20:56.806546 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2772) Dec 13 13:20:56.846550 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2772) Dec 13 13:20:56.878578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2772) Dec 13 13:20:59.672762 kubelet[2684]: E1213 13:20:59.672720 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:59.923358 kubelet[2684]: E1213 13:20:59.923212 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:59.939718 kubelet[2684]: E1213 13:20:59.939670 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:59.939718 kubelet[2684]: E1213 13:20:59.939696 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:21:04.612645 kubelet[2684]: I1213 13:21:04.612594 2684 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:21:04.613236 containerd[1500]: time="2024-12-13T13:21:04.613067574Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
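The recurring dns.go:153 errors above are kubelet noticing that the node's resolv.conf lists more nameservers than the classic glibc resolver limit of three, so it truncates the list (here to 1.1.1.1 1.0.0.1 8.8.8.8) and keeps warning on every sync. A minimal Go sketch of that truncation, assuming a standard resolv.conf layout — the helper below is illustrative, not kubelet's actual code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the three-server limit that kubelet's
// dns.go:153 warning refers to; the name is ours, not kubelet's.
const maxNameservers = 3

func main() {
	data, err := os.ReadFile("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var servers []string
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; applying first %d of %d\n",
			maxNameservers, len(servers))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```

Since the applied line already holds three entries, the host file presumably listed at least a fourth; trimming it (or pointing kubelet's --resolv-conf at a shorter file) would quiet the warning.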
Dec 13 13:21:04.613475 kubelet[2684]: I1213 13:21:04.613361 2684 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 13:21:05.094930 kubelet[2684]: I1213 13:21:05.091938 2684 topology_manager.go:215] "Topology Admit Handler" podUID="c404d641-5c81-4140-a478-a464302899f8" podNamespace="kube-system" podName="kube-proxy-7fxcg"
Dec 13 13:21:05.096211 kubelet[2684]: I1213 13:21:05.095690 2684 topology_manager.go:215] "Topology Admit Handler" podUID="837f3459-b455-4ddf-a7db-4c5ec4e40f22" podNamespace="kube-system" podName="cilium-5fwjb"
Dec 13 13:21:05.101622 systemd[1]: Created slice kubepods-besteffort-podc404d641_5c81_4140_a478_a464302899f8.slice - libcontainer container kubepods-besteffort-podc404d641_5c81_4140_a478_a464302899f8.slice.
Dec 13 13:21:05.118194 systemd[1]: Created slice kubepods-burstable-pod837f3459_b455_4ddf_a7db_4c5ec4e40f22.slice - libcontainer container kubepods-burstable-pod837f3459_b455_4ddf_a7db_4c5ec4e40f22.slice.
Dec 13 13:21:05.172426 kubelet[2684]: I1213 13:21:05.172363 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-etc-cni-netd\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.172611 kubelet[2684]: I1213 13:21:05.172459 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxkvj\" (UniqueName: \"kubernetes.io/projected/c404d641-5c81-4140-a478-a464302899f8-kube-api-access-hxkvj\") pod \"kube-proxy-7fxcg\" (UID: \"c404d641-5c81-4140-a478-a464302899f8\") " pod="kube-system/kube-proxy-7fxcg"
Dec 13 13:21:05.172611 kubelet[2684]: I1213 13:21:05.172494 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-bpf-maps\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.172611 kubelet[2684]: I1213 13:21:05.172538 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-cgroup\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.172611 kubelet[2684]: I1213 13:21:05.172563 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-xtables-lock\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.172611 kubelet[2684]: I1213 13:21:05.172589 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-config-path\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.172791 kubelet[2684]: I1213 13:21:05.172641 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-run\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.172791 kubelet[2684]: I1213 13:21:05.172719 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c404d641-5c81-4140-a478-a464302899f8-xtables-lock\") pod \"kube-proxy-7fxcg\" (UID: \"c404d641-5c81-4140-a478-a464302899f8\") " pod="kube-system/kube-proxy-7fxcg"
Dec 13 13:21:05.172791 kubelet[2684]: I1213 13:21:05.172754 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-hostproc\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.172791 kubelet[2684]: I1213 13:21:05.172788 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cni-path\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.172919 kubelet[2684]: I1213 13:21:05.172815 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-host-proc-sys-kernel\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.172919 kubelet[2684]: I1213 13:21:05.172840 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/837f3459-b455-4ddf-a7db-4c5ec4e40f22-hubble-tls\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.172919 kubelet[2684]: I1213 13:21:05.172886 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rnc4\" (UniqueName: \"kubernetes.io/projected/837f3459-b455-4ddf-a7db-4c5ec4e40f22-kube-api-access-4rnc4\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.173044 kubelet[2684]: I1213 13:21:05.172933 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c404d641-5c81-4140-a478-a464302899f8-kube-proxy\") pod \"kube-proxy-7fxcg\" (UID: \"c404d641-5c81-4140-a478-a464302899f8\") " pod="kube-system/kube-proxy-7fxcg"
Dec 13 13:21:05.173044 kubelet[2684]: I1213 13:21:05.172961 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c404d641-5c81-4140-a478-a464302899f8-lib-modules\") pod \"kube-proxy-7fxcg\" (UID: \"c404d641-5c81-4140-a478-a464302899f8\") " pod="kube-system/kube-proxy-7fxcg"
Dec 13 13:21:05.173044 kubelet[2684]: I1213 13:21:05.172983 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-lib-modules\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.173044 kubelet[2684]: I1213 13:21:05.173006 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/837f3459-b455-4ddf-a7db-4c5ec4e40f22-clustermesh-secrets\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.173044 kubelet[2684]: I1213 13:21:05.173042 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-host-proc-sys-net\") pod \"cilium-5fwjb\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " pod="kube-system/cilium-5fwjb"
Dec 13 13:21:05.279269 kubelet[2684]: E1213 13:21:05.279228 2684 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 13:21:05.279269 kubelet[2684]: E1213 13:21:05.279265 2684 projected.go:200] Error preparing data for projected volume kube-api-access-4rnc4 for pod kube-system/cilium-5fwjb: configmap "kube-root-ca.crt" not found
Dec 13 13:21:05.279421 kubelet[2684]: E1213 13:21:05.279339 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/837f3459-b455-4ddf-a7db-4c5ec4e40f22-kube-api-access-4rnc4 podName:837f3459-b455-4ddf-a7db-4c5ec4e40f22 nodeName:}" failed. No retries permitted until 2024-12-13 13:21:05.779316209 +0000 UTC m=+14.059128825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4rnc4" (UniqueName: "kubernetes.io/projected/837f3459-b455-4ddf-a7db-4c5ec4e40f22-kube-api-access-4rnc4") pod "cilium-5fwjb" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22") : configmap "kube-root-ca.crt" not found
Dec 13 13:21:05.281426 kubelet[2684]: E1213 13:21:05.281394 2684 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 13:21:05.281426 kubelet[2684]: E1213 13:21:05.281418 2684 projected.go:200] Error preparing data for projected volume kube-api-access-hxkvj for pod kube-system/kube-proxy-7fxcg: configmap "kube-root-ca.crt" not found
Dec 13 13:21:05.281555 kubelet[2684]: E1213 13:21:05.281456 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c404d641-5c81-4140-a478-a464302899f8-kube-api-access-hxkvj podName:c404d641-5c81-4140-a478-a464302899f8 nodeName:}" failed. No retries permitted until 2024-12-13 13:21:05.781441709 +0000 UTC m=+14.061254325 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hxkvj" (UniqueName: "kubernetes.io/projected/c404d641-5c81-4140-a478-a464302899f8-kube-api-access-hxkvj") pod "kube-proxy-7fxcg" (UID: "c404d641-5c81-4140-a478-a464302899f8") : configmap "kube-root-ca.crt" not found
Dec 13 13:21:05.357557 kubelet[2684]: E1213 13:21:05.357425 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:05.446534 kubelet[2684]: I1213 13:21:05.445498 2684 topology_manager.go:215] "Topology Admit Handler" podUID="c4784b1c-6dbd-4df5-b83f-51e119f0a2b5" podNamespace="kube-system" podName="cilium-operator-5cc964979-4k7dw"
Dec 13 13:21:05.462703 systemd[1]: Created slice kubepods-besteffort-podc4784b1c_6dbd_4df5_b83f_51e119f0a2b5.slice - libcontainer container kubepods-besteffort-podc4784b1c_6dbd_4df5_b83f_51e119f0a2b5.slice.
Dec 13 13:21:05.476582 kubelet[2684]: I1213 13:21:05.476527 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4784b1c-6dbd-4df5-b83f-51e119f0a2b5-cilium-config-path\") pod \"cilium-operator-5cc964979-4k7dw\" (UID: \"c4784b1c-6dbd-4df5-b83f-51e119f0a2b5\") " pod="kube-system/cilium-operator-5cc964979-4k7dw"
Dec 13 13:21:05.476707 kubelet[2684]: I1213 13:21:05.476604 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s2lc\" (UniqueName: \"kubernetes.io/projected/c4784b1c-6dbd-4df5-b83f-51e119f0a2b5-kube-api-access-9s2lc\") pod \"cilium-operator-5cc964979-4k7dw\" (UID: \"c4784b1c-6dbd-4df5-b83f-51e119f0a2b5\") " pod="kube-system/cilium-operator-5cc964979-4k7dw"
Dec 13 13:21:05.765684 kubelet[2684]: E1213 13:21:05.765639 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:05.766354 containerd[1500]: time="2024-12-13T13:21:05.766258356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4k7dw,Uid:c4784b1c-6dbd-4df5-b83f-51e119f0a2b5,Namespace:kube-system,Attempt:0,}"
Dec 13 13:21:05.795461 containerd[1500]: time="2024-12-13T13:21:05.795285549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:21:05.795461 containerd[1500]: time="2024-12-13T13:21:05.795382943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:21:05.795461 containerd[1500]: time="2024-12-13T13:21:05.795402990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:21:05.796406 containerd[1500]: time="2024-12-13T13:21:05.796344556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:21:05.816672 systemd[1]: Started cri-containerd-104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f.scope - libcontainer container 104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f.
Dec 13 13:21:05.854301 containerd[1500]: time="2024-12-13T13:21:05.854168969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4k7dw,Uid:c4784b1c-6dbd-4df5-b83f-51e119f0a2b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f\""
Dec 13 13:21:05.855137 kubelet[2684]: E1213 13:21:05.855100 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:05.857258 containerd[1500]: time="2024-12-13T13:21:05.857206288Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 13:21:06.016631 kubelet[2684]: E1213 13:21:06.016425 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:06.017030 containerd[1500]: time="2024-12-13T13:21:06.016954924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fxcg,Uid:c404d641-5c81-4140-a478-a464302899f8,Namespace:kube-system,Attempt:0,}"
Dec 13 13:21:06.021038 kubelet[2684]: E1213 13:21:06.020762 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:06.021643 containerd[1500]: time="2024-12-13T13:21:06.021372904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5fwjb,Uid:837f3459-b455-4ddf-a7db-4c5ec4e40f22,Namespace:kube-system,Attempt:0,}"
Dec 13 13:21:06.045557 containerd[1500]: time="2024-12-13T13:21:06.045067643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:21:06.045557 containerd[1500]: time="2024-12-13T13:21:06.045127055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:21:06.045990 containerd[1500]: time="2024-12-13T13:21:06.045933155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:21:06.047047 containerd[1500]: time="2024-12-13T13:21:06.047001320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:21:06.053552 containerd[1500]: time="2024-12-13T13:21:06.053451151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:21:06.053674 containerd[1500]: time="2024-12-13T13:21:06.053535650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:21:06.053674 containerd[1500]: time="2024-12-13T13:21:06.053550618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:21:06.053674 containerd[1500]: time="2024-12-13T13:21:06.053639035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:21:06.072674 systemd[1]: Started cri-containerd-25842eb6e8067ce419aa76caf80ddb162a9c8445015141fb500dfda5ece7e6ed.scope - libcontainer container 25842eb6e8067ce419aa76caf80ddb162a9c8445015141fb500dfda5ece7e6ed.
Dec 13 13:21:06.076209 systemd[1]: Started cri-containerd-3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a.scope - libcontainer container 3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a.
Dec 13 13:21:06.097373 containerd[1500]: time="2024-12-13T13:21:06.097337634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fxcg,Uid:c404d641-5c81-4140-a478-a464302899f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"25842eb6e8067ce419aa76caf80ddb162a9c8445015141fb500dfda5ece7e6ed\""
Dec 13 13:21:06.098119 kubelet[2684]: E1213 13:21:06.098096 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:06.100388 containerd[1500]: time="2024-12-13T13:21:06.100356507Z" level=info msg="CreateContainer within sandbox \"25842eb6e8067ce419aa76caf80ddb162a9c8445015141fb500dfda5ece7e6ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 13:21:06.105115 containerd[1500]: time="2024-12-13T13:21:06.105080154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5fwjb,Uid:837f3459-b455-4ddf-a7db-4c5ec4e40f22,Namespace:kube-system,Attempt:0,} returns sandbox id \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\""
Dec 13 13:21:06.106150 kubelet[2684]: E1213 13:21:06.106119 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:06.120644 containerd[1500]: time="2024-12-13T13:21:06.120602593Z" level=info msg="CreateContainer within sandbox \"25842eb6e8067ce419aa76caf80ddb162a9c8445015141fb500dfda5ece7e6ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"01416fb8f8458cd707ba52c237d1a1d2d48698885c2f97527afa9056fb9c25ec\""
Dec 13 13:21:06.121162 containerd[1500]: time="2024-12-13T13:21:06.121140006Z" level=info msg="StartContainer for \"01416fb8f8458cd707ba52c237d1a1d2d48698885c2f97527afa9056fb9c25ec\""
Dec 13 13:21:06.156720 systemd[1]: Started cri-containerd-01416fb8f8458cd707ba52c237d1a1d2d48698885c2f97527afa9056fb9c25ec.scope - libcontainer container 01416fb8f8458cd707ba52c237d1a1d2d48698885c2f97527afa9056fb9c25ec.
Dec 13 13:21:06.188399 containerd[1500]: time="2024-12-13T13:21:06.188344768Z" level=info msg="StartContainer for \"01416fb8f8458cd707ba52c237d1a1d2d48698885c2f97527afa9056fb9c25ec\" returns successfully"
Dec 13 13:21:06.952205 kubelet[2684]: E1213 13:21:06.952162 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:06.962119 kubelet[2684]: I1213 13:21:06.962056 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7fxcg" podStartSLOduration=1.962016034 podStartE2EDuration="1.962016034s" podCreationTimestamp="2024-12-13 13:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:21:06.961752517 +0000 UTC m=+15.241565134" watchObservedRunningTime="2024-12-13 13:21:06.962016034 +0000 UTC m=+15.241828650"
Dec 13 13:21:08.593015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127389444.mount: Deactivated successfully.
Dec 13 13:21:08.886008 containerd[1500]: time="2024-12-13T13:21:08.885864802Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:21:08.886582 containerd[1500]: time="2024-12-13T13:21:08.886503254Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906597"
Dec 13 13:21:08.887781 containerd[1500]: time="2024-12-13T13:21:08.887751997Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:21:08.889039 containerd[1500]: time="2024-12-13T13:21:08.889005319Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.031755999s"
Dec 13 13:21:08.889039 containerd[1500]: time="2024-12-13T13:21:08.889031067Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 13:21:08.892251 containerd[1500]: time="2024-12-13T13:21:08.892228543Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 13:21:08.893419 containerd[1500]: time="2024-12-13T13:21:08.893372228Z" level=info msg="CreateContainer within sandbox \"104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 13:21:08.907481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435348009.mount: Deactivated successfully.
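The pod_startup_latency_tracker entries are plain timestamp arithmetic: when nothing had to be pulled (firstStartedPulling is the zero time, as for kube-proxy-7fxcg), podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. Recomputing the kube-proxy figure from the two strings logged above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2024-12-13 13:21:05 +0000 UTC" strings in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2024-12-13 13:21:05 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2024-12-13 13:21:06.962016034 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // prints 1.962016034s, the logged SLO duration
}
```

For pods whose image actually had to be pulled (cilium-operator-5cc964979-4k7dw below), the pull window is excluded: its podStartE2EDuration of 4.972798096s minus the 3.035s between firstStartedPulling and lastFinishedPulling gives the logged podStartSLOduration of 1.937475224s.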
Dec 13 13:21:08.909520 containerd[1500]: time="2024-12-13T13:21:08.909473048Z" level=info msg="CreateContainer within sandbox \"104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\""
Dec 13 13:21:08.910082 containerd[1500]: time="2024-12-13T13:21:08.910046429Z" level=info msg="StartContainer for \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\""
Dec 13 13:21:08.937645 systemd[1]: Started cri-containerd-dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d.scope - libcontainer container dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d.
Dec 13 13:21:08.965293 containerd[1500]: time="2024-12-13T13:21:08.965248934Z" level=info msg="StartContainer for \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\" returns successfully"
Dec 13 13:21:09.958785 kubelet[2684]: E1213 13:21:09.958736 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:09.972886 kubelet[2684]: I1213 13:21:09.972843 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-4k7dw" podStartSLOduration=1.937475224 podStartE2EDuration="4.972798096s" podCreationTimestamp="2024-12-13 13:21:05 +0000 UTC" firstStartedPulling="2024-12-13 13:21:05.856665028 +0000 UTC m=+14.136477644" lastFinishedPulling="2024-12-13 13:21:08.8919879 +0000 UTC m=+17.171800516" observedRunningTime="2024-12-13 13:21:09.972727253 +0000 UTC m=+18.252539869" watchObservedRunningTime="2024-12-13 13:21:09.972798096 +0000 UTC m=+18.252610702"
Dec 13 13:21:10.959942 kubelet[2684]: E1213 13:21:10.959896 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:18.895167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2750395047.mount: Deactivated successfully.
Dec 13 13:21:19.992607 systemd[1]: Started sshd@9-10.0.0.28:22-10.0.0.1:34106.service - OpenSSH per-connection server daemon (10.0.0.1:34106).
Dec 13 13:21:20.047420 sshd[3126]: Accepted publickey for core from 10.0.0.1 port 34106 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:21:20.049221 sshd-session[3126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:21:20.054682 systemd-logind[1478]: New session 10 of user core.
Dec 13 13:21:20.061739 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 13:21:20.204367 sshd[3136]: Connection closed by 10.0.0.1 port 34106
Dec 13 13:21:20.205704 sshd-session[3126]: pam_unix(sshd:session): session closed for user core
Dec 13 13:21:20.209308 systemd[1]: sshd@9-10.0.0.28:22-10.0.0.1:34106.service: Deactivated successfully.
Dec 13 13:21:20.211780 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 13:21:20.213460 systemd-logind[1478]: Session 10 logged out. Waiting for processes to exit.
Dec 13 13:21:20.214847 systemd-logind[1478]: Removed session 10.
Dec 13 13:21:22.901839 containerd[1500]: time="2024-12-13T13:21:22.901779481Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:21:22.904725 containerd[1500]: time="2024-12-13T13:21:22.904676986Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734651"
Dec 13 13:21:22.905912 containerd[1500]: time="2024-12-13T13:21:22.905878414Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:21:22.907530 containerd[1500]: time="2024-12-13T13:21:22.907482959Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.015175176s"
Dec 13 13:21:22.907571 containerd[1500]: time="2024-12-13T13:21:22.907526531Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 13:21:22.909535 containerd[1500]: time="2024-12-13T13:21:22.909133401Z" level=info msg="CreateContainer within sandbox \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 13:21:22.921712 containerd[1500]: time="2024-12-13T13:21:22.921662566Z" level=info msg="CreateContainer within sandbox \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af\""
Dec 13 13:21:22.922117 containerd[1500]: time="2024-12-13T13:21:22.922087315Z" level=info msg="StartContainer for \"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af\""
Dec 13 13:21:22.956662 systemd[1]: Started cri-containerd-76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af.scope - libcontainer container 76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af.
Dec 13 13:21:22.986092 containerd[1500]: time="2024-12-13T13:21:22.986053762Z" level=info msg="StartContainer for \"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af\" returns successfully"
Dec 13 13:21:22.996554 systemd[1]: cri-containerd-76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af.scope: Deactivated successfully.
Dec 13 13:21:23.655013 containerd[1500]: time="2024-12-13T13:21:23.654939524Z" level=info msg="shim disconnected" id=76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af namespace=k8s.io
Dec 13 13:21:23.655013 containerd[1500]: time="2024-12-13T13:21:23.654997845Z" level=warning msg="cleaning up after shim disconnected" id=76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af namespace=k8s.io
Dec 13 13:21:23.655013 containerd[1500]: time="2024-12-13T13:21:23.655007793Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:21:23.918137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af-rootfs.mount: Deactivated successfully.
Dec 13 13:21:23.980275 kubelet[2684]: E1213 13:21:23.980249 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:23.982831 containerd[1500]: time="2024-12-13T13:21:23.982104964Z" level=info msg="CreateContainer within sandbox \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 13:21:24.314125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount567611045.mount: Deactivated successfully.
Dec 13 13:21:24.486483 containerd[1500]: time="2024-12-13T13:21:24.486424310Z" level=info msg="CreateContainer within sandbox \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e\""
Dec 13 13:21:24.486867 containerd[1500]: time="2024-12-13T13:21:24.486840121Z" level=info msg="StartContainer for \"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e\""
Dec 13 13:21:24.518750 systemd[1]: Started cri-containerd-301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e.scope - libcontainer container 301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e.
Dec 13 13:21:24.555910 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:21:24.556131 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:21:24.556195 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:21:24.566825 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:21:24.567107 systemd[1]: cri-containerd-301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e.scope: Deactivated successfully.
Dec 13 13:21:24.580926 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:21:24.583617 containerd[1500]: time="2024-12-13T13:21:24.583587221Z" level=info msg="StartContainer for \"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e\" returns successfully"
Dec 13 13:21:24.672930 containerd[1500]: time="2024-12-13T13:21:24.672867172Z" level=info msg="shim disconnected" id=301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e namespace=k8s.io
Dec 13 13:21:24.672930 containerd[1500]: time="2024-12-13T13:21:24.672919890Z" level=warning msg="cleaning up after shim disconnected" id=301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e namespace=k8s.io
Dec 13 13:21:24.672930 containerd[1500]: time="2024-12-13T13:21:24.672932174Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:21:24.917808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e-rootfs.mount: Deactivated successfully.
Dec 13 13:21:24.983774 kubelet[2684]: E1213 13:21:24.983730 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:24.986933 containerd[1500]: time="2024-12-13T13:21:24.986217898Z" level=info msg="CreateContainer within sandbox \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 13:21:25.007063 containerd[1500]: time="2024-12-13T13:21:25.007012517Z" level=info msg="CreateContainer within sandbox \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce\""
Dec 13 13:21:25.007654 containerd[1500]: time="2024-12-13T13:21:25.007600060Z" level=info msg="StartContainer for \"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce\""
Dec 13 13:21:25.035688 systemd[1]: Started cri-containerd-4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce.scope - libcontainer container 4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce.
Dec 13 13:21:25.066771 systemd[1]: cri-containerd-4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce.scope: Deactivated successfully.
Dec 13 13:21:25.069370 containerd[1500]: time="2024-12-13T13:21:25.069332482Z" level=info msg="StartContainer for \"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce\" returns successfully"
Dec 13 13:21:25.118051 containerd[1500]: time="2024-12-13T13:21:25.117988599Z" level=info msg="shim disconnected" id=4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce namespace=k8s.io
Dec 13 13:21:25.118051 containerd[1500]: time="2024-12-13T13:21:25.118042089Z" level=warning msg="cleaning up after shim disconnected" id=4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce namespace=k8s.io
Dec 13 13:21:25.118051 containerd[1500]: time="2024-12-13T13:21:25.118050755Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:21:25.216035 systemd[1]: Started sshd@10-10.0.0.28:22-10.0.0.1:34114.service - OpenSSH per-connection server daemon (10.0.0.1:34114).
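The short-lived init containers above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs) each run once and exit, which is why every StartContainer is followed immediately by a scope deactivation and shim cleanup. mount-bpf-fs in particular ensures the BPF filesystem is mounted at /sys/fs/bpf before cilium-agent starts. A hedged Go sketch of the detection half of that job, parsing /proc/mounts (the real container also performs the mount; this only checks):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Checks whether a "bpf" filesystem is mounted at /sys/fs/bpf, the
// precondition an init container like mount-bpf-fs establishes.
func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			fmt.Println("bpffs already mounted at /sys/fs/bpf")
			return
		}
	}
	fmt.Println("bpffs not mounted; mount-bpf-fs would mount it here")
}
```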
Dec 13 13:21:25.260309 sshd[3348]: Accepted publickey for core from 10.0.0.1 port 34114 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:21:25.261766 sshd-session[3348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:21:25.265937 systemd-logind[1478]: New session 11 of user core.
Dec 13 13:21:25.276645 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 13:21:25.393031 sshd[3350]: Connection closed by 10.0.0.1 port 34114
Dec 13 13:21:25.393377 sshd-session[3348]: pam_unix(sshd:session): session closed for user core
Dec 13 13:21:25.397494 systemd[1]: sshd@10-10.0.0.28:22-10.0.0.1:34114.service: Deactivated successfully.
Dec 13 13:21:25.399977 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 13:21:25.400872 systemd-logind[1478]: Session 11 logged out. Waiting for processes to exit.
Dec 13 13:21:25.401867 systemd-logind[1478]: Removed session 11.
Dec 13 13:21:25.917784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce-rootfs.mount: Deactivated successfully.
Dec 13 13:21:25.987292 kubelet[2684]: E1213 13:21:25.987264 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:25.989540 containerd[1500]: time="2024-12-13T13:21:25.989306392Z" level=info msg="CreateContainer within sandbox \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 13:21:26.186841 containerd[1500]: time="2024-12-13T13:21:26.186709202Z" level=info msg="CreateContainer within sandbox \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655\""
Dec 13 13:21:26.187320 containerd[1500]: time="2024-12-13T13:21:26.187292528Z" level=info msg="StartContainer for \"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655\""
Dec 13 13:21:26.215657 systemd[1]: Started cri-containerd-593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655.scope - libcontainer container 593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655.
Dec 13 13:21:26.237826 systemd[1]: cri-containerd-593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655.scope: Deactivated successfully.
Dec 13 13:21:26.351489 containerd[1500]: time="2024-12-13T13:21:26.351437124Z" level=info msg="StartContainer for \"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655\" returns successfully"
Dec 13 13:21:26.522893 containerd[1500]: time="2024-12-13T13:21:26.522829654Z" level=info msg="shim disconnected" id=593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655 namespace=k8s.io
Dec 13 13:21:26.522893 containerd[1500]: time="2024-12-13T13:21:26.522886651Z" level=warning msg="cleaning up after shim disconnected" id=593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655 namespace=k8s.io
Dec 13 13:21:26.522893 containerd[1500]: time="2024-12-13T13:21:26.522897301Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:21:26.918212 systemd[1]: run-containerd-runc-k8s.io-593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655-runc.q5K7qh.mount: Deactivated successfully.
Dec 13 13:21:26.918348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655-rootfs.mount: Deactivated successfully.
Dec 13 13:21:26.990959 kubelet[2684]: E1213 13:21:26.990899 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:26.993158 containerd[1500]: time="2024-12-13T13:21:26.993106590Z" level=info msg="CreateContainer within sandbox \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 13:21:27.549275 containerd[1500]: time="2024-12-13T13:21:27.549221533Z" level=info msg="CreateContainer within sandbox \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\""
Dec 13 13:21:27.549956 containerd[1500]: time="2024-12-13T13:21:27.549718107Z" level=info msg="StartContainer for \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\""
Dec 13 13:21:27.585842 systemd[1]: Started cri-containerd-ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad.scope - libcontainer container ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad.
Dec 13 13:21:27.672059 containerd[1500]: time="2024-12-13T13:21:27.671985266Z" level=info msg="StartContainer for \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\" returns successfully"
Dec 13 13:21:27.816254 kubelet[2684]: I1213 13:21:27.816118 2684 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 13:21:27.937574 kubelet[2684]: I1213 13:21:27.937367 2684 topology_manager.go:215] "Topology Admit Handler" podUID="660df7ef-3e09-4031-943a-70598873a396" podNamespace="kube-system" podName="coredns-76f75df574-p5tlz"
Dec 13 13:21:27.940316 kubelet[2684]: I1213 13:21:27.940286 2684 topology_manager.go:215] "Topology Admit Handler" podUID="8faa43a6-bfce-4736-82c4-5876f0084452" podNamespace="kube-system" podName="coredns-76f75df574-xd6n2"
Dec 13 13:21:27.950668 systemd[1]: Created slice kubepods-burstable-pod660df7ef_3e09_4031_943a_70598873a396.slice - libcontainer container kubepods-burstable-pod660df7ef_3e09_4031_943a_70598873a396.slice.
Dec 13 13:21:27.958666 systemd[1]: Created slice kubepods-burstable-pod8faa43a6_bfce_4736_82c4_5876f0084452.slice - libcontainer container kubepods-burstable-pod8faa43a6_bfce_4736_82c4_5876f0084452.slice.
Dec 13 13:21:27.995658 kubelet[2684]: E1213 13:21:27.995468 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:28.024554 kubelet[2684]: I1213 13:21:28.024469 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8faa43a6-bfce-4736-82c4-5876f0084452-config-volume\") pod \"coredns-76f75df574-xd6n2\" (UID: \"8faa43a6-bfce-4736-82c4-5876f0084452\") " pod="kube-system/coredns-76f75df574-xd6n2"
Dec 13 13:21:28.024703 kubelet[2684]: I1213 13:21:28.024580 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgf5\" (UniqueName: \"kubernetes.io/projected/660df7ef-3e09-4031-943a-70598873a396-kube-api-access-mvgf5\") pod \"coredns-76f75df574-p5tlz\" (UID: \"660df7ef-3e09-4031-943a-70598873a396\") " pod="kube-system/coredns-76f75df574-p5tlz"
Dec 13 13:21:28.025227 kubelet[2684]: I1213 13:21:28.024962 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/660df7ef-3e09-4031-943a-70598873a396-config-volume\") pod \"coredns-76f75df574-p5tlz\" (UID: \"660df7ef-3e09-4031-943a-70598873a396\") " pod="kube-system/coredns-76f75df574-p5tlz"
Dec 13 13:21:28.025227 kubelet[2684]: I1213 13:21:28.025015 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24gfs\" (UniqueName: \"kubernetes.io/projected/8faa43a6-bfce-4736-82c4-5876f0084452-kube-api-access-24gfs\") pod \"coredns-76f75df574-xd6n2\" (UID: \"8faa43a6-bfce-4736-82c4-5876f0084452\") " pod="kube-system/coredns-76f75df574-xd6n2"
Dec 13 13:21:28.254731 kubelet[2684]: E1213 13:21:28.254679 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:28.255316 containerd[1500]: time="2024-12-13T13:21:28.255278507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p5tlz,Uid:660df7ef-3e09-4031-943a-70598873a396,Namespace:kube-system,Attempt:0,}"
Dec 13 13:21:28.262059 kubelet[2684]: E1213 13:21:28.262041 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:28.281908 containerd[1500]: time="2024-12-13T13:21:28.262385665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xd6n2,Uid:8faa43a6-bfce-4736-82c4-5876f0084452,Namespace:kube-system,Attempt:0,}"
Dec 13 13:21:28.997508 kubelet[2684]: E1213 13:21:28.997466 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:29.802754 systemd-networkd[1410]: cilium_host: Link UP
Dec 13 13:21:29.802953 systemd-networkd[1410]: cilium_net: Link UP
Dec 13 13:21:29.802956 systemd-networkd[1410]: cilium_net: Gained carrier
Dec 13 13:21:29.803165 systemd-networkd[1410]: cilium_host: Gained carrier
Dec 13 13:21:29.803361 systemd-networkd[1410]: cilium_host: Gained IPv6LL
Dec 13 13:21:29.911942 systemd-networkd[1410]: cilium_vxlan: Link UP
Dec 13 13:21:29.911953 systemd-networkd[1410]: cilium_vxlan: Gained carrier
Dec 13 13:21:29.999204 kubelet[2684]: E1213 13:21:29.999175 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:30.128538 kernel: NET: Registered PF_ALG protocol family
Dec 13 13:21:30.405222 systemd[1]: Started sshd@11-10.0.0.28:22-10.0.0.1:33304.service - OpenSSH per-connection server daemon (10.0.0.1:33304).
Dec 13 13:21:30.453251 sshd[3738]: Accepted publickey for core from 10.0.0.1 port 33304 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:21:30.455071 sshd-session[3738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:21:30.459286 systemd-logind[1478]: New session 12 of user core.
Dec 13 13:21:30.464650 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 13:21:30.551871 systemd-networkd[1410]: cilium_net: Gained IPv6LL
Dec 13 13:21:30.579256 sshd[3774]: Connection closed by 10.0.0.1 port 33304
Dec 13 13:21:30.579642 sshd-session[3738]: pam_unix(sshd:session): session closed for user core
Dec 13 13:21:30.583348 systemd[1]: sshd@11-10.0.0.28:22-10.0.0.1:33304.service: Deactivated successfully.
Dec 13 13:21:30.585676 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 13:21:30.586343 systemd-logind[1478]: Session 12 logged out. Waiting for processes to exit.
Dec 13 13:21:30.587170 systemd-logind[1478]: Removed session 12.
Dec 13 13:21:30.798304 systemd-networkd[1410]: lxc_health: Link UP
Dec 13 13:21:30.804369 systemd-networkd[1410]: lxc_health: Gained carrier
Dec 13 13:21:30.915061 systemd-networkd[1410]: lxcf7578a6ab3a8: Link UP
Dec 13 13:21:30.930765 systemd-networkd[1410]: lxc6f05aadfde9d: Link UP
Dec 13 13:21:30.940537 kernel: eth0: renamed from tmp9d839
Dec 13 13:21:30.948148 systemd-networkd[1410]: lxc6f05aadfde9d: Gained carrier
Dec 13 13:21:30.949850 kernel: eth0: renamed from tmpbba94
Dec 13 13:21:30.952646 systemd-networkd[1410]: lxcf7578a6ab3a8: Gained carrier
Dec 13 13:21:30.999690 systemd-networkd[1410]: cilium_vxlan: Gained IPv6LL
Dec 13 13:21:32.022888 kubelet[2684]: E1213 13:21:32.022469 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:32.087643 systemd-networkd[1410]: lxc6f05aadfde9d: Gained IPv6LL
Dec 13 13:21:32.105955 kubelet[2684]: I1213 13:21:32.105928 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5fwjb" podStartSLOduration=10.304846431 podStartE2EDuration="27.105859544s" podCreationTimestamp="2024-12-13 13:21:05 +0000 UTC" firstStartedPulling="2024-12-13 13:21:06.106757206 +0000 UTC m=+14.386569822" lastFinishedPulling="2024-12-13 13:21:22.907770319 +0000 UTC m=+31.187582935" observedRunningTime="2024-12-13 13:21:28.010369901 +0000 UTC m=+36.290182517" watchObservedRunningTime="2024-12-13 13:21:32.105859544 +0000 UTC m=+40.385672150"
Dec 13 13:21:32.535676 systemd-networkd[1410]: lxcf7578a6ab3a8: Gained IPv6LL
Dec 13 13:21:32.791758 systemd-networkd[1410]: lxc_health: Gained IPv6LL
Dec 13 13:21:33.004286 kubelet[2684]: E1213 13:21:33.004253 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:34.005129 kubelet[2684]: E1213 13:21:34.005095 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:34.314581 containerd[1500]: time="2024-12-13T13:21:34.314196210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:21:34.314581 containerd[1500]: time="2024-12-13T13:21:34.314293873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:21:34.314581 containerd[1500]: time="2024-12-13T13:21:34.314311256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:21:34.315573 containerd[1500]: time="2024-12-13T13:21:34.315423995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:21:34.315573 containerd[1500]: time="2024-12-13T13:21:34.315310532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:21:34.315573 containerd[1500]: time="2024-12-13T13:21:34.315368481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:21:34.315573 containerd[1500]: time="2024-12-13T13:21:34.315381385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:21:34.315573 containerd[1500]: time="2024-12-13T13:21:34.315468638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:21:34.346658 systemd[1]: Started cri-containerd-9d8391dd572c2ef4b913f72ae787a6326ee50b5ac7db381a94fdbca46c4d637c.scope - libcontainer container 9d8391dd572c2ef4b913f72ae787a6326ee50b5ac7db381a94fdbca46c4d637c.
Dec 13 13:21:34.348346 systemd[1]: Started cri-containerd-bba94229af5bd8897ec2374cec27a84f1ee0d6c63432950f075487fc08d12afd.scope - libcontainer container bba94229af5bd8897ec2374cec27a84f1ee0d6c63432950f075487fc08d12afd.
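The systemd-networkd entries above (cilium_host, cilium_net, cilium_vxlan, lxc_health, lxc*) are the host-side halves of virtual links the Cilium datapath creates as the agent comes up; cilium_host/cilium_net in particular behave like a veth pair, and each lxc* device pairs with a pod's eth0 (hence the "eth0: renamed from tmp9d839/tmpbba94" kernel lines matching the coredns sandbox ids). A sketch of creating and raising such a pair with the github.com/vishvananda/netlink package — illustrative interface names, requires root, and not Cilium's actual setup code:

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

// Create a veth pair and bring both ends up; setting a link up is
// what produces systemd-networkd's "Link UP" / "Gained carrier" lines.
func main() {
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "demo_host"}, // hypothetical names
		PeerName:  "demo_net",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		fmt.Println("LinkAdd:", err)
		return
	}
	for _, name := range []string{"demo_host", "demo_net"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			fmt.Println("LinkByName:", err)
			return
		}
		if err := netlink.LinkSetUp(link); err != nil {
			fmt.Println("LinkSetUp:", err)
			return
		}
	}
	fmt.Println("veth pair demo_host<->demo_net created and up")
}
```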
Dec 13 13:21:34.360155 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:21:34.362524 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 13:21:34.385330 containerd[1500]: time="2024-12-13T13:21:34.385240263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xd6n2,Uid:8faa43a6-bfce-4736-82c4-5876f0084452,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d8391dd572c2ef4b913f72ae787a6326ee50b5ac7db381a94fdbca46c4d637c\""
Dec 13 13:21:34.385988 kubelet[2684]: E1213 13:21:34.385956 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:34.388652 containerd[1500]: time="2024-12-13T13:21:34.388573832Z" level=info msg="CreateContainer within sandbox \"9d8391dd572c2ef4b913f72ae787a6326ee50b5ac7db381a94fdbca46c4d637c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:21:34.389711 containerd[1500]: time="2024-12-13T13:21:34.389678616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p5tlz,Uid:660df7ef-3e09-4031-943a-70598873a396,Namespace:kube-system,Attempt:0,} returns sandbox id \"bba94229af5bd8897ec2374cec27a84f1ee0d6c63432950f075487fc08d12afd\""
Dec 13 13:21:34.391642 kubelet[2684]: E1213 13:21:34.391621 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:34.393387 containerd[1500]: time="2024-12-13T13:21:34.393362221Z" level=info msg="CreateContainer within sandbox \"bba94229af5bd8897ec2374cec27a84f1ee0d6c63432950f075487fc08d12afd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:21:34.412123 containerd[1500]: time="2024-12-13T13:21:34.412066455Z" level=info msg="CreateContainer within sandbox \"9d8391dd572c2ef4b913f72ae787a6326ee50b5ac7db381a94fdbca46c4d637c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dba9e8aec2cfa040547f4a95bea6ba197ea0814be343f3d651b8313bda577c5d\""
Dec 13 13:21:34.412696 containerd[1500]: time="2024-12-13T13:21:34.412548630Z" level=info msg="StartContainer for \"dba9e8aec2cfa040547f4a95bea6ba197ea0814be343f3d651b8313bda577c5d\""
Dec 13 13:21:34.435127 containerd[1500]: time="2024-12-13T13:21:34.435062736Z" level=info msg="CreateContainer within sandbox \"bba94229af5bd8897ec2374cec27a84f1ee0d6c63432950f075487fc08d12afd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f04913675cc3b6265e26fe84bc18b3674db7437dc75b9a3e447d58ddca87ecf6\""
Dec 13 13:21:34.435725 containerd[1500]: time="2024-12-13T13:21:34.435680016Z" level=info msg="StartContainer for \"f04913675cc3b6265e26fe84bc18b3674db7437dc75b9a3e447d58ddca87ecf6\""
Dec 13 13:21:34.464696 systemd[1]: Started cri-containerd-dba9e8aec2cfa040547f4a95bea6ba197ea0814be343f3d651b8313bda577c5d.scope - libcontainer container dba9e8aec2cfa040547f4a95bea6ba197ea0814be343f3d651b8313bda577c5d.
Dec 13 13:21:34.483824 systemd[1]: Started cri-containerd-f04913675cc3b6265e26fe84bc18b3674db7437dc75b9a3e447d58ddca87ecf6.scope - libcontainer container f04913675cc3b6265e26fe84bc18b3674db7437dc75b9a3e447d58ddca87ecf6.
Dec 13 13:21:34.509544 containerd[1500]: time="2024-12-13T13:21:34.509498939Z" level=info msg="StartContainer for \"dba9e8aec2cfa040547f4a95bea6ba197ea0814be343f3d651b8313bda577c5d\" returns successfully"
Dec 13 13:21:34.513958 containerd[1500]: time="2024-12-13T13:21:34.513868092Z" level=info msg="StartContainer for \"f04913675cc3b6265e26fe84bc18b3674db7437dc75b9a3e447d58ddca87ecf6\" returns successfully"
Dec 13 13:21:35.008105 kubelet[2684]: E1213 13:21:35.008067 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:35.011100 kubelet[2684]: E1213 13:21:35.011060 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:21:35.039233 kubelet[2684]: I1213 13:21:35.038185 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xd6n2" podStartSLOduration=30.038142038 podStartE2EDuration="30.038142038s" podCreationTimestamp="2024-12-13 13:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:21:35.025084917 +0000 UTC m=+43.304897553" watchObservedRunningTime="2024-12-13 13:21:35.038142038 +0000 UTC m=+43.317954654"
Dec 13 13:21:35.055077 kubelet[2684]: I1213 13:21:35.055022 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-p5tlz" podStartSLOduration=30.054977763 podStartE2EDuration="30.054977763s" podCreationTimestamp="2024-12-13 13:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:21:35.039663083 +0000 UTC m=+43.319475699" watchObservedRunningTime="2024-12-13 13:21:35.054977763 +0000 UTC m=+43.334790379"
Dec 13 13:21:35.320642 systemd[1]: run-containerd-runc-k8s.io-bba94229af5bd8897ec2374cec27a84f1ee0d6c63432950f075487fc08d12afd-runc.16KYWO.mount: Deactivated successfully.
Dec 13 13:21:35.593585 systemd[1]: Started sshd@12-10.0.0.28:22-10.0.0.1:33318.service - OpenSSH per-connection server daemon (10.0.0.1:33318).
Dec 13 13:21:35.638031 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 33318 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:21:35.639558 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:21:35.643642 systemd-logind[1478]: New session 13 of user core.
Dec 13 13:21:35.653610 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 13:21:35.771357 sshd[4130]: Connection closed by 10.0.0.1 port 33318
Dec 13 13:21:35.771803 sshd-session[4128]: pam_unix(sshd:session): session closed for user core
Dec 13 13:21:35.779908 systemd[1]: sshd@12-10.0.0.28:22-10.0.0.1:33318.service: Deactivated successfully.
Dec 13 13:21:35.781653 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 13:21:35.783239 systemd-logind[1478]: Session 13 logged out. Waiting for processes to exit.
Dec 13 13:21:35.787854 systemd[1]: Started sshd@13-10.0.0.28:22-10.0.0.1:33326.service - OpenSSH per-connection server daemon (10.0.0.1:33326).
Dec 13 13:21:35.788819 systemd-logind[1478]: Removed session 13.
Dec 13 13:21:35.829969 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 33326 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:21:35.831468 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:21:35.835204 systemd-logind[1478]: New session 14 of user core. Dec 13 13:21:35.844635 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 13:21:35.980975 sshd[4145]: Connection closed by 10.0.0.1 port 33326 Dec 13 13:21:35.981579 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Dec 13 13:21:35.993557 systemd[1]: sshd@13-10.0.0.28:22-10.0.0.1:33326.service: Deactivated successfully. Dec 13 13:21:35.996024 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:21:35.998723 systemd-logind[1478]: Session 14 logged out. Waiting for processes to exit. Dec 13 13:21:36.010852 systemd[1]: Started sshd@14-10.0.0.28:22-10.0.0.1:33334.service - OpenSSH per-connection server daemon (10.0.0.1:33334). Dec 13 13:21:36.011993 systemd-logind[1478]: Removed session 14. Dec 13 13:21:36.013580 kubelet[2684]: E1213 13:21:36.013560 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:21:36.013862 kubelet[2684]: E1213 13:21:36.013629 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:21:36.051020 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 33334 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:21:36.052444 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:21:36.056734 systemd-logind[1478]: New session 15 of user core. Dec 13 13:21:36.070645 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:21:36.177062 sshd[4157]: Connection closed by 10.0.0.1 port 33334 Dec 13 13:21:36.177336 sshd-session[4155]: pam_unix(sshd:session): session closed for user core Dec 13 13:21:36.180903 systemd[1]: sshd@14-10.0.0.28:22-10.0.0.1:33334.service: Deactivated successfully. Dec 13 13:21:36.182836 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:21:36.183486 systemd-logind[1478]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:21:36.184375 systemd-logind[1478]: Removed session 15. Dec 13 13:21:37.015151 kubelet[2684]: E1213 13:21:37.015105 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:21:37.015781 kubelet[2684]: E1213 13:21:37.015171 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:21:41.190482 systemd[1]: Started sshd@15-10.0.0.28:22-10.0.0.1:40828.service - OpenSSH per-connection server daemon (10.0.0.1:40828). Dec 13 13:21:41.230038 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 40828 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:21:41.231473 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:21:41.235133 systemd-logind[1478]: New session 16 of user core. 
Dec 13 13:21:41.245623 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 13:21:41.349208 sshd[4173]: Connection closed by 10.0.0.1 port 40828 Dec 13 13:21:41.349546 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Dec 13 13:21:41.352779 systemd[1]: sshd@15-10.0.0.28:22-10.0.0.1:40828.service: Deactivated successfully. Dec 13 13:21:41.354626 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:21:41.355176 systemd-logind[1478]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:21:41.356063 systemd-logind[1478]: Removed session 16. Dec 13 13:21:46.365806 systemd[1]: Started sshd@16-10.0.0.28:22-10.0.0.1:54036.service - OpenSSH per-connection server daemon (10.0.0.1:54036). Dec 13 13:21:46.406548 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 54036 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:21:46.408024 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:21:46.411982 systemd-logind[1478]: New session 17 of user core. Dec 13 13:21:46.422668 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 13:21:46.529421 sshd[4191]: Connection closed by 10.0.0.1 port 54036 Dec 13 13:21:46.529776 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Dec 13 13:21:46.533866 systemd[1]: sshd@16-10.0.0.28:22-10.0.0.1:54036.service: Deactivated successfully. Dec 13 13:21:46.536043 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:21:46.536796 systemd-logind[1478]: Session 17 logged out. Waiting for processes to exit. Dec 13 13:21:46.537799 systemd-logind[1478]: Removed session 17. Dec 13 13:21:51.541551 systemd[1]: Started sshd@17-10.0.0.28:22-10.0.0.1:54050.service - OpenSSH per-connection server daemon (10.0.0.1:54050). Dec 13 13:21:51.581149 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 54050 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:21:51.583015 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:21:51.587205 systemd-logind[1478]: New session 18 of user core. Dec 13 13:21:51.598669 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:21:51.709558 sshd[4205]: Connection closed by 10.0.0.1 port 54050 Dec 13 13:21:51.709933 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Dec 13 13:21:51.728247 systemd[1]: sshd@17-10.0.0.28:22-10.0.0.1:54050.service: Deactivated successfully. Dec 13 13:21:51.731045 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:21:51.733656 systemd-logind[1478]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:21:51.743770 systemd[1]: Started sshd@18-10.0.0.28:22-10.0.0.1:54066.service - OpenSSH per-connection server daemon (10.0.0.1:54066). Dec 13 13:21:51.744918 systemd-logind[1478]: Removed session 18. Dec 13 13:21:51.779370 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 54066 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:21:51.780794 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:21:51.784721 systemd-logind[1478]: New session 19 of user core. Dec 13 13:21:51.791644 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 13 13:21:52.463189 sshd[4219]: Connection closed by 10.0.0.1 port 54066 Dec 13 13:21:52.463751 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Dec 13 13:21:52.477469 systemd[1]: sshd@18-10.0.0.28:22-10.0.0.1:54066.service: Deactivated successfully. Dec 13 13:21:52.479386 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 13:21:52.480818 systemd-logind[1478]: Session 19 logged out. Waiting for processes to exit. Dec 13 13:21:52.489725 systemd[1]: Started sshd@19-10.0.0.28:22-10.0.0.1:54078.service - OpenSSH per-connection server daemon (10.0.0.1:54078). Dec 13 13:21:52.490731 systemd-logind[1478]: Removed session 19. Dec 13 13:21:52.532537 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 54078 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:21:52.533940 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:21:52.538086 systemd-logind[1478]: New session 20 of user core. Dec 13 13:21:52.548616 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 13:21:55.517812 sshd[4234]: Connection closed by 10.0.0.1 port 54078 Dec 13 13:21:55.518237 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Dec 13 13:21:55.527602 systemd[1]: sshd@19-10.0.0.28:22-10.0.0.1:54078.service: Deactivated successfully. Dec 13 13:21:55.529450 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 13:21:55.531214 systemd-logind[1478]: Session 20 logged out. Waiting for processes to exit. Dec 13 13:21:55.540847 systemd[1]: Started sshd@20-10.0.0.28:22-10.0.0.1:54088.service - OpenSSH per-connection server daemon (10.0.0.1:54088). Dec 13 13:21:55.543188 systemd-logind[1478]: Removed session 20. Dec 13 13:21:55.579061 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 54088 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:21:55.580585 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:21:55.584284 systemd-logind[1478]: New session 21 of user core. Dec 13 13:21:55.596677 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 13:21:55.816280 sshd[4255]: Connection closed by 10.0.0.1 port 54088 Dec 13 13:21:55.816585 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Dec 13 13:21:55.827336 systemd[1]: sshd@20-10.0.0.28:22-10.0.0.1:54088.service: Deactivated successfully. Dec 13 13:21:55.829591 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 13:21:55.831224 systemd-logind[1478]: Session 21 logged out. Waiting for processes to exit. Dec 13 13:21:55.832736 systemd[1]: Started sshd@21-10.0.0.28:22-10.0.0.1:54104.service - OpenSSH per-connection server daemon (10.0.0.1:54104). Dec 13 13:21:55.834418 systemd-logind[1478]: Removed session 21. Dec 13 13:21:55.881740 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 54104 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:21:55.883759 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:21:55.887943 systemd-logind[1478]: New session 22 of user core. Dec 13 13:21:55.903751 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 13 13:21:56.013999 sshd[4267]: Connection closed by 10.0.0.1 port 54104 Dec 13 13:21:56.014446 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Dec 13 13:21:56.019014 systemd[1]: sshd@21-10.0.0.28:22-10.0.0.1:54104.service: Deactivated successfully. Dec 13 13:21:56.021723 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 13:21:56.022406 systemd-logind[1478]: Session 22 logged out. Waiting for processes to exit. Dec 13 13:21:56.023286 systemd-logind[1478]: Removed session 22. Dec 13 13:22:00.820034 kubelet[2684]: E1213 13:22:00.819984 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:22:01.031625 systemd[1]: Started sshd@22-10.0.0.28:22-10.0.0.1:37920.service - OpenSSH per-connection server daemon (10.0.0.1:37920). Dec 13 13:22:01.071420 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 37920 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:22:01.072866 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:22:01.076687 systemd-logind[1478]: New session 23 of user core. Dec 13 13:22:01.086745 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 13:22:01.192220 sshd[4281]: Connection closed by 10.0.0.1 port 37920 Dec 13 13:22:01.192596 sshd-session[4279]: pam_unix(sshd:session): session closed for user core Dec 13 13:22:01.196363 systemd[1]: sshd@22-10.0.0.28:22-10.0.0.1:37920.service: Deactivated successfully. Dec 13 13:22:01.198224 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 13:22:01.198822 systemd-logind[1478]: Session 23 logged out. Waiting for processes to exit. Dec 13 13:22:01.199571 systemd-logind[1478]: Removed session 23. Dec 13 13:22:02.820600 kubelet[2684]: E1213 13:22:02.820548 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:22:06.204210 systemd[1]: Started sshd@23-10.0.0.28:22-10.0.0.1:52974.service - OpenSSH per-connection server daemon (10.0.0.1:52974). Dec 13 13:22:06.244069 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 52974 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:22:06.245714 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:22:06.249894 systemd-logind[1478]: New session 24 of user core. Dec 13 13:22:06.259746 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 13:22:06.381742 sshd[4301]: Connection closed by 10.0.0.1 port 52974 Dec 13 13:22:06.382469 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Dec 13 13:22:06.386938 systemd[1]: sshd@23-10.0.0.28:22-10.0.0.1:52974.service: Deactivated successfully. Dec 13 13:22:06.389088 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 13:22:06.389818 systemd-logind[1478]: Session 24 logged out. Waiting for processes to exit. Dec 13 13:22:06.390633 systemd-logind[1478]: Removed session 24. Dec 13 13:22:11.398321 systemd[1]: Started sshd@24-10.0.0.28:22-10.0.0.1:52988.service - OpenSSH per-connection server daemon (10.0.0.1:52988). 
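Sessions 13 through 24 above all follow the same systemd pattern: per-connection service started, publickey accepted, session scope started, session scope deactivated, session removed. When auditing a journal like this one, pairing each "New session N" with its "Removed session N" yields per-session durations; a rough sketch, assuming the line shapes seen in this log:

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

// The regexes below are fitted to the systemd-logind lines in this journal;
// they are an assumption about the format, not a general journal parser.
var (
	newRe = regexp.MustCompile(`^(\S+ \d+ [\d:.]+) .*New session (\d+) of user`)
	remRe = regexp.MustCompile(`^(\S+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
)

func parseTS(s string) time.Time {
	// Journal short timestamps carry no year; 2024 is assumed for this log.
	t, _ := time.Parse("Jan 2 15:04:05.000000 2006", s+" 2024")
	return t
}

func main() {
	lines := []string{
		"Dec 13 13:21:46.411982 systemd-logind[1478]: New session 17 of user core.",
		"Dec 13 13:21:46.537799 systemd-logind[1478]: Removed session 17.",
	}
	opened := map[string]time.Time{}
	for _, l := range lines {
		if m := newRe.FindStringSubmatch(l); m != nil {
			opened[m[2]] = parseTS(m[1])
		} else if m := remRe.FindStringSubmatch(l); m != nil {
			fmt.Printf("session %s lasted %v\n", m[2], parseTS(m[1]).Sub(opened[m[2]]))
		}
	}
}
```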
Dec 13 13:22:11.438882 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 52988 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:22:11.440287 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:22:11.444453 systemd-logind[1478]: New session 25 of user core. Dec 13 13:22:11.458746 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 13:22:11.565646 sshd[4316]: Connection closed by 10.0.0.1 port 52988 Dec 13 13:22:11.566023 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Dec 13 13:22:11.570444 systemd[1]: sshd@24-10.0.0.28:22-10.0.0.1:52988.service: Deactivated successfully. Dec 13 13:22:11.572809 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 13:22:11.573545 systemd-logind[1478]: Session 25 logged out. Waiting for processes to exit. Dec 13 13:22:11.574489 systemd-logind[1478]: Removed session 25. Dec 13 13:22:16.578856 systemd[1]: Started sshd@25-10.0.0.28:22-10.0.0.1:38934.service - OpenSSH per-connection server daemon (10.0.0.1:38934). Dec 13 13:22:16.619807 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 38934 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:22:16.621346 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:22:16.625808 systemd-logind[1478]: New session 26 of user core. Dec 13 13:22:16.637822 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 13:22:16.750272 sshd[4331]: Connection closed by 10.0.0.1 port 38934 Dec 13 13:22:16.750705 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Dec 13 13:22:16.760128 systemd[1]: sshd@25-10.0.0.28:22-10.0.0.1:38934.service: Deactivated successfully. Dec 13 13:22:16.761870 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 13:22:16.763293 systemd-logind[1478]: Session 26 logged out. Waiting for processes to exit. Dec 13 13:22:16.771006 systemd[1]: Started sshd@26-10.0.0.28:22-10.0.0.1:38938.service - OpenSSH per-connection server daemon (10.0.0.1:38938). Dec 13 13:22:16.772046 systemd-logind[1478]: Removed session 26. Dec 13 13:22:16.808211 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 38938 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:22:16.810078 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:22:16.814580 systemd-logind[1478]: New session 27 of user core. Dec 13 13:22:16.826741 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 13:22:18.380970 systemd[1]: run-containerd-runc-k8s.io-ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad-runc.DhTIIR.mount: Deactivated successfully. 
Dec 13 13:22:18.391064 containerd[1500]: time="2024-12-13T13:22:18.391015824Z" level=info msg="StopContainer for \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\" with timeout 30 (s)" Dec 13 13:22:18.401004 containerd[1500]: time="2024-12-13T13:22:18.400964276Z" level=info msg="Stop container \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\" with signal terminated" Dec 13 13:22:18.410078 containerd[1500]: time="2024-12-13T13:22:18.410029370Z" level=info msg="StopContainer for \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\" with timeout 2 (s)" Dec 13 13:22:18.413275 containerd[1500]: time="2024-12-13T13:22:18.410366390Z" level=info msg="Stop container \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\" with signal terminated" Dec 13 13:22:18.413275 containerd[1500]: time="2024-12-13T13:22:18.411204583Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:22:18.415102 systemd[1]: cri-containerd-dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d.scope: Deactivated successfully. Dec 13 13:22:18.419149 systemd-networkd[1410]: lxc_health: Link DOWN Dec 13 13:22:18.419156 systemd-networkd[1410]: lxc_health: Lost carrier Dec 13 13:22:18.440558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d-rootfs.mount: Deactivated successfully. Dec 13 13:22:18.447199 systemd[1]: cri-containerd-ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad.scope: Deactivated successfully. Dec 13 13:22:18.447747 systemd[1]: cri-containerd-ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad.scope: Consumed 6.667s CPU time. Dec 13 13:22:18.450485 containerd[1500]: time="2024-12-13T13:22:18.450416235Z" level=info msg="shim disconnected" id=dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d namespace=k8s.io Dec 13 13:22:18.450485 containerd[1500]: time="2024-12-13T13:22:18.450475057Z" level=warning msg="cleaning up after shim disconnected" id=dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d namespace=k8s.io Dec 13 13:22:18.450485 containerd[1500]: time="2024-12-13T13:22:18.450485096Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:22:18.471342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad-rootfs.mount: Deactivated successfully. 
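The teardown above shows CRI stop semantics: SIGTERM first ("with signal terminated"), a per-container grace period (30 s and 2 s respectively for the two containers being stopped), then a forced kill if the deadline passes. A generic process-level sketch of that escalation, not containerd's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout sketches the semantics behind "StopContainer ... with
// timeout N (s)": deliver SIGTERM, wait up to the grace period, then
// escalate to SIGKILL. Unix-only, since it signals a raw process.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM) // "Stop container ... with signal terminated"
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period elapsed, force-kill
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// The cilium-agent container above was given a 2 s grace period.
	fmt.Println("stop result:", stopWithTimeout(cmd, 2*time.Second))
}
```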
Dec 13 13:22:18.471770 containerd[1500]: time="2024-12-13T13:22:18.471728739Z" level=info msg="StopContainer for \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\" returns successfully" Dec 13 13:22:18.476306 containerd[1500]: time="2024-12-13T13:22:18.476254698Z" level=info msg="StopPodSandbox for \"104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f\"" Dec 13 13:22:18.477867 containerd[1500]: time="2024-12-13T13:22:18.477808390Z" level=info msg="shim disconnected" id=ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad namespace=k8s.io Dec 13 13:22:18.477920 containerd[1500]: time="2024-12-13T13:22:18.477866821Z" level=warning msg="cleaning up after shim disconnected" id=ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad namespace=k8s.io Dec 13 13:22:18.477920 containerd[1500]: time="2024-12-13T13:22:18.477875407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:22:18.487710 containerd[1500]: time="2024-12-13T13:22:18.476300886Z" level=info msg="Container to stop \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:22:18.490185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f-shm.mount: Deactivated successfully. Dec 13 13:22:18.495330 containerd[1500]: time="2024-12-13T13:22:18.495286639Z" level=info msg="StopContainer for \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\" returns successfully" Dec 13 13:22:18.495981 containerd[1500]: time="2024-12-13T13:22:18.495909812Z" level=info msg="StopPodSandbox for \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\"" Dec 13 13:22:18.495981 containerd[1500]: time="2024-12-13T13:22:18.495970138Z" level=info msg="Container to stop \"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:22:18.496052 containerd[1500]: time="2024-12-13T13:22:18.495985767Z" level=info msg="Container to stop \"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:22:18.496052 containerd[1500]: time="2024-12-13T13:22:18.495999723Z" level=info msg="Container to stop \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:22:18.496052 containerd[1500]: time="2024-12-13T13:22:18.496011897Z" level=info msg="Container to stop \"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:22:18.496052 containerd[1500]: time="2024-12-13T13:22:18.496022427Z" level=info msg="Container to stop \"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:22:18.496438 systemd[1]: cri-containerd-104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f.scope: Deactivated successfully. Dec 13 13:22:18.517913 systemd[1]: cri-containerd-3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a.scope: Deactivated successfully. 
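StopPodSandbox then enumerates every container the sandbox ever held; the "must be in running or unknown state, current state CONTAINER_EXITED" lines are informational, recording that already-exited containers (the init containers among them) are skipped rather than signalled again. A toy version of that state guard, with names assumed rather than taken from containerd:

```go
package main

import "fmt"

// ContainerState mirrors the CRI state names quoted in the messages above.
type ContainerState string

const (
	Running ContainerState = "CONTAINER_RUNNING"
	Exited  ContainerState = "CONTAINER_EXITED"
	Unknown ContainerState = "CONTAINER_UNKNOWN"
)

// needsStop: only running or unknown containers still need a stop signal
// when their sandbox is torn down.
func needsStop(s ContainerState) bool {
	return s == Running || s == Unknown
}

func main() {
	// Container IDs truncated for brevity; all five listed for sandbox
	// 3485e185... had already exited by the time the sandbox was stopped.
	containers := map[string]ContainerState{
		"ea867c27": Exited,
		"301702bc": Exited,
	}
	for id, st := range containers {
		if needsStop(st) {
			fmt.Println("would signal", id)
		} else {
			fmt.Printf("container %s already in state %q, skipping\n", id, st)
		}
	}
}
```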
Dec 13 13:22:18.576208 containerd[1500]: time="2024-12-13T13:22:18.576108259Z" level=info msg="shim disconnected" id=104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f namespace=k8s.io Dec 13 13:22:18.576208 containerd[1500]: time="2024-12-13T13:22:18.576168715Z" level=warning msg="cleaning up after shim disconnected" id=104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f namespace=k8s.io Dec 13 13:22:18.576208 containerd[1500]: time="2024-12-13T13:22:18.576179735Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:22:18.576619 containerd[1500]: time="2024-12-13T13:22:18.576344148Z" level=info msg="shim disconnected" id=3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a namespace=k8s.io Dec 13 13:22:18.576619 containerd[1500]: time="2024-12-13T13:22:18.576396086Z" level=warning msg="cleaning up after shim disconnected" id=3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a namespace=k8s.io Dec 13 13:22:18.576619 containerd[1500]: time="2024-12-13T13:22:18.576403972Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:22:18.590227 containerd[1500]: time="2024-12-13T13:22:18.590145180Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:22:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 13:22:18.591600 containerd[1500]: time="2024-12-13T13:22:18.591553866Z" level=info msg="TearDown network for sandbox \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" successfully" Dec 13 13:22:18.595064 containerd[1500]: time="2024-12-13T13:22:18.591600474Z" level=info msg="StopPodSandbox for \"3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a\" returns successfully" Dec 13 13:22:18.595192 containerd[1500]: time="2024-12-13T13:22:18.591650469Z" level=info msg="TearDown network for sandbox \"104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f\" successfully" Dec 13 13:22:18.595192 containerd[1500]: time="2024-12-13T13:22:18.595182450Z" level=info msg="StopPodSandbox for \"104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f\" returns successfully" Dec 13 13:22:18.620391 kubelet[2684]: I1213 13:22:18.620337 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-run\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.620391 kubelet[2684]: I1213 13:22:18.620395 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4784b1c-6dbd-4df5-b83f-51e119f0a2b5-cilium-config-path\") pod \"c4784b1c-6dbd-4df5-b83f-51e119f0a2b5\" (UID: \"c4784b1c-6dbd-4df5-b83f-51e119f0a2b5\") " Dec 13 13:22:18.620914 kubelet[2684]: I1213 13:22:18.620422 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s2lc\" (UniqueName: \"kubernetes.io/projected/c4784b1c-6dbd-4df5-b83f-51e119f0a2b5-kube-api-access-9s2lc\") pod \"c4784b1c-6dbd-4df5-b83f-51e119f0a2b5\" (UID: \"c4784b1c-6dbd-4df5-b83f-51e119f0a2b5\") " Dec 13 13:22:18.620914 kubelet[2684]: I1213 13:22:18.620442 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-bpf-maps\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.620914 kubelet[2684]: I1213 13:22:18.620462 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-xtables-lock\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.620914 kubelet[2684]: I1213 13:22:18.620470 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:22:18.620914 kubelet[2684]: I1213 13:22:18.620501 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:22:18.620914 kubelet[2684]: I1213 13:22:18.620483 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-cgroup\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.621063 kubelet[2684]: I1213 13:22:18.620554 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:22:18.621063 kubelet[2684]: I1213 13:22:18.620565 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-host-proc-sys-net\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.621063 kubelet[2684]: I1213 13:22:18.620592 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:22:18.621063 kubelet[2684]: I1213 13:22:18.620619 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-etc-cni-netd\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.621063 kubelet[2684]: I1213 13:22:18.620618 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:22:18.621181 kubelet[2684]: I1213 13:22:18.620645 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rnc4\" (UniqueName: \"kubernetes.io/projected/837f3459-b455-4ddf-a7db-4c5ec4e40f22-kube-api-access-4rnc4\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.621181 kubelet[2684]: I1213 13:22:18.620662 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-lib-modules\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.621181 kubelet[2684]: I1213 13:22:18.620679 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-host-proc-sys-kernel\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.621181 kubelet[2684]: I1213 13:22:18.620702 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/837f3459-b455-4ddf-a7db-4c5ec4e40f22-clustermesh-secrets\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.621181 kubelet[2684]: I1213 13:22:18.620719 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cni-path\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.621181 kubelet[2684]: I1213 13:22:18.620736 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/837f3459-b455-4ddf-a7db-4c5ec4e40f22-hubble-tls\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.621334 kubelet[2684]: I1213 13:22:18.620754 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-config-path\") pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.621334 kubelet[2684]: I1213 13:22:18.620770 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-hostproc\") 
pod \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\" (UID: \"837f3459-b455-4ddf-a7db-4c5ec4e40f22\") " Dec 13 13:22:18.621334 kubelet[2684]: I1213 13:22:18.620797 2684 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.621334 kubelet[2684]: I1213 13:22:18.620809 2684 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.621334 kubelet[2684]: I1213 13:22:18.620818 2684 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.621334 kubelet[2684]: I1213 13:22:18.620827 2684 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.621334 kubelet[2684]: I1213 13:22:18.620867 2684 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.621484 kubelet[2684]: I1213 13:22:18.620884 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-hostproc" (OuterVolumeSpecName: "hostproc") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:22:18.621484 kubelet[2684]: I1213 13:22:18.620902 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:22:18.621484 kubelet[2684]: I1213 13:22:18.620916 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:22:18.624023 kubelet[2684]: I1213 13:22:18.623782 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/837f3459-b455-4ddf-a7db-4c5ec4e40f22-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 13:22:18.624023 kubelet[2684]: I1213 13:22:18.623820 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cni-path" (OuterVolumeSpecName: "cni-path") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:22:18.624171 kubelet[2684]: I1213 13:22:18.624081 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4784b1c-6dbd-4df5-b83f-51e119f0a2b5-kube-api-access-9s2lc" (OuterVolumeSpecName: "kube-api-access-9s2lc") pod "c4784b1c-6dbd-4df5-b83f-51e119f0a2b5" (UID: "c4784b1c-6dbd-4df5-b83f-51e119f0a2b5"). InnerVolumeSpecName "kube-api-access-9s2lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:22:18.624239 kubelet[2684]: I1213 13:22:18.624215 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:22:18.624886 kubelet[2684]: I1213 13:22:18.624839 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837f3459-b455-4ddf-a7db-4c5ec4e40f22-kube-api-access-4rnc4" (OuterVolumeSpecName: "kube-api-access-4rnc4") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "kube-api-access-4rnc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:22:18.626561 kubelet[2684]: I1213 13:22:18.626504 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837f3459-b455-4ddf-a7db-4c5ec4e40f22-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:22:18.627628 kubelet[2684]: I1213 13:22:18.627601 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4784b1c-6dbd-4df5-b83f-51e119f0a2b5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c4784b1c-6dbd-4df5-b83f-51e119f0a2b5" (UID: "c4784b1c-6dbd-4df5-b83f-51e119f0a2b5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:22:18.629767 kubelet[2684]: I1213 13:22:18.629736 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "837f3459-b455-4ddf-a7db-4c5ec4e40f22" (UID: "837f3459-b455-4ddf-a7db-4c5ec4e40f22"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:22:18.722033 kubelet[2684]: I1213 13:22:18.721947 2684 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.722033 kubelet[2684]: I1213 13:22:18.722023 2684 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4rnc4\" (UniqueName: \"kubernetes.io/projected/837f3459-b455-4ddf-a7db-4c5ec4e40f22-kube-api-access-4rnc4\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.722033 kubelet[2684]: I1213 13:22:18.722036 2684 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.722033 kubelet[2684]: I1213 13:22:18.722046 2684 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.722275 kubelet[2684]: I1213 13:22:18.722055 2684 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/837f3459-b455-4ddf-a7db-4c5ec4e40f22-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.722275 kubelet[2684]: I1213 13:22:18.722066 2684 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.722275 kubelet[2684]: I1213 13:22:18.722075 2684 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/837f3459-b455-4ddf-a7db-4c5ec4e40f22-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.722275 kubelet[2684]: I1213 13:22:18.722084 2684 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/837f3459-b455-4ddf-a7db-4c5ec4e40f22-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.722275 kubelet[2684]: I1213 13:22:18.722093 2684 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/837f3459-b455-4ddf-a7db-4c5ec4e40f22-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.722275 kubelet[2684]: I1213 13:22:18.722105 2684 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4784b1c-6dbd-4df5-b83f-51e119f0a2b5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:18.722275 kubelet[2684]: I1213 13:22:18.722116 2684 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9s2lc\" (UniqueName: \"kubernetes.io/projected/c4784b1c-6dbd-4df5-b83f-51e119f0a2b5-kube-api-access-9s2lc\") on node \"localhost\" DevicePath \"\"" Dec 13 13:22:19.087222 kubelet[2684]: I1213 13:22:19.087117 2684 scope.go:117] "RemoveContainer" containerID="dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d" Dec 13 13:22:19.094402 systemd[1]: Removed slice kubepods-besteffort-podc4784b1c_6dbd_4df5_b83f_51e119f0a2b5.slice - libcontainer container kubepods-besteffort-podc4784b1c_6dbd_4df5_b83f_51e119f0a2b5.slice. 
Dec 13 13:22:19.096174 containerd[1500]: time="2024-12-13T13:22:19.096128272Z" level=info msg="RemoveContainer for \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\"" Dec 13 13:22:19.097795 systemd[1]: Removed slice kubepods-burstable-pod837f3459_b455_4ddf_a7db_4c5ec4e40f22.slice - libcontainer container kubepods-burstable-pod837f3459_b455_4ddf_a7db_4c5ec4e40f22.slice. Dec 13 13:22:19.097907 systemd[1]: kubepods-burstable-pod837f3459_b455_4ddf_a7db_4c5ec4e40f22.slice: Consumed 6.767s CPU time. Dec 13 13:22:19.230847 containerd[1500]: time="2024-12-13T13:22:19.230774613Z" level=info msg="RemoveContainer for \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\" returns successfully" Dec 13 13:22:19.231266 kubelet[2684]: I1213 13:22:19.231224 2684 scope.go:117] "RemoveContainer" containerID="dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d" Dec 13 13:22:19.232179 containerd[1500]: time="2024-12-13T13:22:19.231631071Z" level=error msg="ContainerStatus for \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\": not found" Dec 13 13:22:19.241729 kubelet[2684]: E1213 13:22:19.241662 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\": not found" containerID="dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d" Dec 13 13:22:19.241901 kubelet[2684]: I1213 13:22:19.241783 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d"} err="failed to get container status \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"dab7475116f22f4d898024e91e9ec90a7a04ec69cdf1c7c9a253be2316f0db1d\": not found" Dec 13 13:22:19.241901 kubelet[2684]: I1213 13:22:19.241809 2684 scope.go:117] "RemoveContainer" containerID="ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad" Dec 13 13:22:19.243431 containerd[1500]: time="2024-12-13T13:22:19.243371411Z" level=info msg="RemoveContainer for \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\"" Dec 13 13:22:19.247147 containerd[1500]: time="2024-12-13T13:22:19.247095975Z" level=info msg="RemoveContainer for \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\" returns successfully" Dec 13 13:22:19.247478 kubelet[2684]: I1213 13:22:19.247349 2684 scope.go:117] "RemoveContainer" containerID="593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655" Dec 13 13:22:19.248737 containerd[1500]: time="2024-12-13T13:22:19.248692267Z" level=info msg="RemoveContainer for \"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655\"" Dec 13 13:22:19.252650 containerd[1500]: time="2024-12-13T13:22:19.252608145Z" level=info msg="RemoveContainer for \"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655\" returns successfully" Dec 13 13:22:19.252829 kubelet[2684]: I1213 13:22:19.252805 2684 scope.go:117] "RemoveContainer" containerID="4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce" Dec 13 13:22:19.253772 containerd[1500]: time="2024-12-13T13:22:19.253738983Z" level=info 
msg="RemoveContainer for \"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce\"" Dec 13 13:22:19.260210 containerd[1500]: time="2024-12-13T13:22:19.260166372Z" level=info msg="RemoveContainer for \"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce\" returns successfully" Dec 13 13:22:19.260457 kubelet[2684]: I1213 13:22:19.260425 2684 scope.go:117] "RemoveContainer" containerID="301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e" Dec 13 13:22:19.261626 containerd[1500]: time="2024-12-13T13:22:19.261588413Z" level=info msg="RemoveContainer for \"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e\"" Dec 13 13:22:19.265393 containerd[1500]: time="2024-12-13T13:22:19.265319830Z" level=info msg="RemoveContainer for \"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e\" returns successfully" Dec 13 13:22:19.265610 kubelet[2684]: I1213 13:22:19.265561 2684 scope.go:117] "RemoveContainer" containerID="76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af" Dec 13 13:22:19.266674 containerd[1500]: time="2024-12-13T13:22:19.266591575Z" level=info msg="RemoveContainer for \"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af\"" Dec 13 13:22:19.270534 containerd[1500]: time="2024-12-13T13:22:19.270473730Z" level=info msg="RemoveContainer for \"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af\" returns successfully" Dec 13 13:22:19.270830 kubelet[2684]: I1213 13:22:19.270735 2684 scope.go:117] "RemoveContainer" containerID="ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad" Dec 13 13:22:19.270983 containerd[1500]: time="2024-12-13T13:22:19.270924856Z" level=error msg="ContainerStatus for \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\": not found" Dec 13 13:22:19.271084 kubelet[2684]: E1213 13:22:19.271066 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\": not found" containerID="ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad" Dec 13 13:22:19.271120 kubelet[2684]: I1213 13:22:19.271113 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad"} err="failed to get container status \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea867c277d96336b64646aa6837787eafda11a8f3d057ea130a329f2207491ad\": not found" Dec 13 13:22:19.271157 kubelet[2684]: I1213 13:22:19.271126 2684 scope.go:117] "RemoveContainer" containerID="593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655" Dec 13 13:22:19.271278 containerd[1500]: time="2024-12-13T13:22:19.271248862Z" level=error msg="ContainerStatus for \"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655\": not found" Dec 13 13:22:19.271407 kubelet[2684]: E1213 13:22:19.271372 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655\": not found" containerID="593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655" Dec 13 13:22:19.271463 kubelet[2684]: I1213 13:22:19.271422 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655"} err="failed to get container status \"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655\": rpc error: code = NotFound desc = an error occurred when try to find container \"593b8ea69db8807b4a0a37380132e884bf44cd4225c1e543fbd89e93e8de8655\": not found" Dec 13 13:22:19.271463 kubelet[2684]: I1213 13:22:19.271439 2684 scope.go:117] "RemoveContainer" containerID="4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce" Dec 13 13:22:19.271641 containerd[1500]: time="2024-12-13T13:22:19.271619396Z" level=error msg="ContainerStatus for \"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce\": not found" Dec 13 13:22:19.271763 kubelet[2684]: E1213 13:22:19.271745 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce\": not found" containerID="4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce" Dec 13 13:22:19.271805 kubelet[2684]: I1213 13:22:19.271770 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce"} err="failed to get container status \"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d833e8dc1c6dc863e8daf9e20cd450a00f50208598caa075877462efd4b5bce\": not found" Dec 13 13:22:19.271805 kubelet[2684]: I1213 13:22:19.271782 2684 scope.go:117] "RemoveContainer" containerID="301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e" Dec 13 13:22:19.271925 containerd[1500]: time="2024-12-13T13:22:19.271900340Z" level=error msg="ContainerStatus for \"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e\": not found" Dec 13 13:22:19.272033 kubelet[2684]: E1213 13:22:19.271998 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e\": not found" containerID="301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e" Dec 13 13:22:19.272095 kubelet[2684]: I1213 13:22:19.272033 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e"} err="failed to get container status \"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e\": rpc error: code = NotFound desc = an error occurred when try to find container \"301702bceb36bfe939a7b7907fcf8e1e1c42a374736d4f49a17a2b17f2c4636e\": not found" Dec 13 13:22:19.272095 
kubelet[2684]: I1213 13:22:19.272050 2684 scope.go:117] "RemoveContainer" containerID="76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af" Dec 13 13:22:19.272192 containerd[1500]: time="2024-12-13T13:22:19.272170202Z" level=error msg="ContainerStatus for \"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af\": not found" Dec 13 13:22:19.272288 kubelet[2684]: E1213 13:22:19.272270 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af\": not found" containerID="76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af" Dec 13 13:22:19.272352 kubelet[2684]: I1213 13:22:19.272293 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af"} err="failed to get container status \"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af\": rpc error: code = NotFound desc = an error occurred when try to find container \"76238b422bfe27e2387f476b52140cd535c826a3eb2dc2837fcef5b8a7d125af\": not found" Dec 13 13:22:19.376235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a-rootfs.mount: Deactivated successfully. Dec 13 13:22:19.376352 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3485e1854de0c6b9e6a2234e995efed1a46fb7949d6e6cdb99ea6a8d01a4b20a-shm.mount: Deactivated successfully. Dec 13 13:22:19.376434 systemd[1]: var-lib-kubelet-pods-837f3459\x2db455\x2d4ddf\x2da7db\x2d4c5ec4e40f22-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4rnc4.mount: Deactivated successfully. Dec 13 13:22:19.376534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-104f1e33bde7ecb09bb01c8438a12fe20c44ff1738fb173d98abd4a1a34ec50f-rootfs.mount: Deactivated successfully. Dec 13 13:22:19.376617 systemd[1]: var-lib-kubelet-pods-c4784b1c\x2d6dbd\x2d4df5\x2db83f\x2d51e119f0a2b5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9s2lc.mount: Deactivated successfully. Dec 13 13:22:19.376692 systemd[1]: var-lib-kubelet-pods-837f3459\x2db455\x2d4ddf\x2da7db\x2d4c5ec4e40f22-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 13:22:19.376770 systemd[1]: var-lib-kubelet-pods-837f3459\x2db455\x2d4ddf\x2da7db\x2d4c5ec4e40f22-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 13:22:19.826060 kubelet[2684]: I1213 13:22:19.826014 2684 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="837f3459-b455-4ddf-a7db-4c5ec4e40f22" path="/var/lib/kubelet/pods/837f3459-b455-4ddf-a7db-4c5ec4e40f22/volumes" Dec 13 13:22:19.826922 kubelet[2684]: I1213 13:22:19.826897 2684 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4784b1c-6dbd-4df5-b83f-51e119f0a2b5" path="/var/lib/kubelet/pods/c4784b1c-6dbd-4df5-b83f-51e119f0a2b5/volumes" Dec 13 13:22:20.324988 sshd[4345]: Connection closed by 10.0.0.1 port 38938 Dec 13 13:22:20.325605 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Dec 13 13:22:20.339067 systemd[1]: sshd@26-10.0.0.28:22-10.0.0.1:38938.service: Deactivated successfully. 
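Each RemoveContainer above is followed by a ContainerStatus probe that fails with gRPC NotFound, and the "DeleteContainer returned error" lines that result are effectively benign: NotFound after a successful removal means the container is already gone, which is the desired end state. A sketch of that idempotent-deletion check, assuming the google.golang.org/grpc module is available:

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ensureRemoved treats a gRPC NotFound from a post-removal status query as
// success: the container no longer exists, so deletion has converged.
func ensureRemoved(statusErr error) error {
	if statusErr == nil {
		return errors.New("container still exists")
	}
	if s, ok := status.FromError(statusErr); ok && s.Code() == codes.NotFound {
		return nil // desired state reached: the container is gone
	}
	return statusErr // a real failure, surface it
}

func main() {
	// Message shape copied from the log; container ID truncated for brevity.
	notFound := status.Error(codes.NotFound,
		`an error occurred when try to find container "dab74751": not found`)
	fmt.Println("after NotFound:", ensureRemoved(notFound)) // <nil>
}
```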
Dec 13 13:22:20.341022 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 13:22:20.342747 systemd-logind[1478]: Session 27 logged out. Waiting for processes to exit. Dec 13 13:22:20.353877 systemd[1]: Started sshd@27-10.0.0.28:22-10.0.0.1:38946.service - OpenSSH per-connection server daemon (10.0.0.1:38946). Dec 13 13:22:20.355112 systemd-logind[1478]: Removed session 27. Dec 13 13:22:20.397786 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 38946 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:22:20.399388 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:22:20.404456 systemd-logind[1478]: New session 28 of user core. Dec 13 13:22:20.419689 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 13:22:20.820210 kubelet[2684]: E1213 13:22:20.820165 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:22:21.014751 sshd[4509]: Connection closed by 10.0.0.1 port 38946 Dec 13 13:22:21.015481 sshd-session[4506]: pam_unix(sshd:session): session closed for user core Dec 13 13:22:21.030363 kubelet[2684]: I1213 13:22:21.029764 2684 topology_manager.go:215] "Topology Admit Handler" podUID="a281b80b-e07d-4f13-a268-0a4e13e645aa" podNamespace="kube-system" podName="cilium-7wxd2" Dec 13 13:22:21.033576 kubelet[2684]: E1213 13:22:21.031638 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="837f3459-b455-4ddf-a7db-4c5ec4e40f22" containerName="clean-cilium-state" Dec 13 13:22:21.033576 kubelet[2684]: E1213 13:22:21.031672 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="837f3459-b455-4ddf-a7db-4c5ec4e40f22" containerName="cilium-agent" Dec 13 13:22:21.033576 kubelet[2684]: E1213 13:22:21.031681 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4784b1c-6dbd-4df5-b83f-51e119f0a2b5" containerName="cilium-operator" Dec 13 13:22:21.033576 kubelet[2684]: E1213 13:22:21.031688 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="837f3459-b455-4ddf-a7db-4c5ec4e40f22" containerName="apply-sysctl-overwrites" Dec 13 13:22:21.033576 kubelet[2684]: E1213 13:22:21.031695 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="837f3459-b455-4ddf-a7db-4c5ec4e40f22" containerName="mount-bpf-fs" Dec 13 13:22:21.033576 kubelet[2684]: E1213 13:22:21.031701 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="837f3459-b455-4ddf-a7db-4c5ec4e40f22" containerName="mount-cgroup" Dec 13 13:22:21.033576 kubelet[2684]: I1213 13:22:21.031741 2684 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4784b1c-6dbd-4df5-b83f-51e119f0a2b5" containerName="cilium-operator" Dec 13 13:22:21.033576 kubelet[2684]: I1213 13:22:21.031751 2684 memory_manager.go:354] "RemoveStaleState removing state" podUID="837f3459-b455-4ddf-a7db-4c5ec4e40f22" containerName="cilium-agent" Dec 13 13:22:21.031547 systemd[1]: sshd@27-10.0.0.28:22-10.0.0.1:38946.service: Deactivated successfully. Dec 13 13:22:21.038205 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 13:22:21.041079 systemd-logind[1478]: Session 28 logged out. Waiting for processes to exit. Dec 13 13:22:21.043739 systemd-logind[1478]: Removed session 28. Dec 13 13:22:21.054031 systemd[1]: Started sshd@28-10.0.0.28:22-10.0.0.1:38950.service - OpenSSH per-connection server daemon (10.0.0.1:38950). 
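When cilium-7wxd2 is admitted, the cpu_manager and memory_manager sweep out per-container state still keyed to the two deleted pod UIDs, which is what the RemoveStaleState lines above record. A toy version of that sweep, with the data layout assumed rather than taken from kubelet:

```go
package main

import "fmt"

// removeStaleState purges per-container resource state whose owning pod UID
// is no longer active, printing one line per removal in the style of the
// cpu_manager.go:395 messages above.
func removeStaleState(state map[string][]string, activePods map[string]bool) {
	for podUID, containers := range state {
		if !activePods[podUID] {
			for _, c := range containers {
				fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
					podUID, c)
			}
			delete(state, podUID)
		}
	}
}

func main() {
	state := map[string][]string{
		"837f3459-b455-4ddf-a7db-4c5ec4e40f22": {"mount-cgroup", "cilium-agent"},
		"c4784b1c-6dbd-4df5-b83f-51e119f0a2b5": {"cilium-operator"},
	}
	// Only the newly admitted cilium-7wxd2 pod is active now.
	removeStaleState(state, map[string]bool{
		"a281b80b-e07d-4f13-a268-0a4e13e645aa": true,
	})
}
```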
Dec 13 13:22:21.063078 systemd[1]: Created slice kubepods-burstable-poda281b80b_e07d_4f13_a268_0a4e13e645aa.slice - libcontainer container kubepods-burstable-poda281b80b_e07d_4f13_a268_0a4e13e645aa.slice.
Dec 13 13:22:21.090979 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 38950 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:22:21.092884 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:22:21.097862 systemd-logind[1478]: New session 29 of user core.
Dec 13 13:22:21.112755 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 13:22:21.135813 kubelet[2684]: I1213 13:22:21.135747 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a281b80b-e07d-4f13-a268-0a4e13e645aa-cilium-cgroup\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.135813 kubelet[2684]: I1213 13:22:21.135813 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a281b80b-e07d-4f13-a268-0a4e13e645aa-clustermesh-secrets\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136014 kubelet[2684]: I1213 13:22:21.135842 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a281b80b-e07d-4f13-a268-0a4e13e645aa-cilium-config-path\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136014 kubelet[2684]: I1213 13:22:21.135891 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a281b80b-e07d-4f13-a268-0a4e13e645aa-hubble-tls\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136014 kubelet[2684]: I1213 13:22:21.135919 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a281b80b-e07d-4f13-a268-0a4e13e645aa-cilium-run\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136088 kubelet[2684]: I1213 13:22:21.136017 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a281b80b-e07d-4f13-a268-0a4e13e645aa-host-proc-sys-kernel\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136228 kubelet[2684]: I1213 13:22:21.136161 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a281b80b-e07d-4f13-a268-0a4e13e645aa-lib-modules\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136228 kubelet[2684]: I1213 13:22:21.136226 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a281b80b-e07d-4f13-a268-0a4e13e645aa-xtables-lock\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136323 kubelet[2684]: I1213 13:22:21.136249 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a281b80b-e07d-4f13-a268-0a4e13e645aa-hostproc\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136323 kubelet[2684]: I1213 13:22:21.136267 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a281b80b-e07d-4f13-a268-0a4e13e645aa-etc-cni-netd\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136323 kubelet[2684]: I1213 13:22:21.136289 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a281b80b-e07d-4f13-a268-0a4e13e645aa-host-proc-sys-net\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136408 kubelet[2684]: I1213 13:22:21.136350 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a281b80b-e07d-4f13-a268-0a4e13e645aa-cni-path\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136408 kubelet[2684]: I1213 13:22:21.136379 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a281b80b-e07d-4f13-a268-0a4e13e645aa-bpf-maps\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136408 kubelet[2684]: I1213 13:22:21.136399 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a281b80b-e07d-4f13-a268-0a4e13e645aa-cilium-ipsec-secrets\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.136493 kubelet[2684]: I1213 13:22:21.136432 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt9cp\" (UniqueName: \"kubernetes.io/projected/a281b80b-e07d-4f13-a268-0a4e13e645aa-kube-api-access-tt9cp\") pod \"cilium-7wxd2\" (UID: \"a281b80b-e07d-4f13-a268-0a4e13e645aa\") " pod="kube-system/cilium-7wxd2"
Dec 13 13:22:21.163110 sshd[4524]: Connection closed by 10.0.0.1 port 38950
Dec 13 13:22:21.163474 sshd-session[4521]: pam_unix(sshd:session): session closed for user core
Dec 13 13:22:21.174991 systemd[1]: sshd@28-10.0.0.28:22-10.0.0.1:38950.service: Deactivated successfully.
Dec 13 13:22:21.176985 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 13:22:21.178467 systemd-logind[1478]: Session 29 logged out. Waiting for processes to exit.
Dec 13 13:22:21.184851 systemd[1]: Started sshd@29-10.0.0.28:22-10.0.0.1:38958.service - OpenSSH per-connection server daemon (10.0.0.1:38958).
Dec 13 13:22:21.186068 systemd-logind[1478]: Removed session 29.
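
The fifteen VerifyControllerAttachedVolume entries for cilium-7wxd2 are easier to audit as a (volume, plugin) table, and both fields can be pulled straight out of the journal text. A small Python sketch, assuming the raw journal is available as a string; the regex matches the escaped \" quoting exactly as logged above:

    import re

    VOLUME_RE = re.compile(
        r'volume \\"(?P<name>[^\\]+)\\" \(UniqueName: \\"kubernetes\.io/(?P<plugin>[^/]+)/')

    def volume_table(journal: str) -> list[tuple[str, str]]:
        return [(m["name"], m["plugin"]) for m in VOLUME_RE.finditer(journal)]

    sample = (r'... started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io'
              r'/host-path/a281b80b-e07d-4f13-a268-0a4e13e645aa-cilium-cgroup\") ...')
    print(volume_table(sample))  # [('cilium-cgroup', 'host-path')]
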
Dec 13 13:22:21.222061 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 38958 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:22:21.223738 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:22:21.227849 systemd-logind[1478]: New session 30 of user core.
Dec 13 13:22:21.235642 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 13:22:21.366937 kubelet[2684]: E1213 13:22:21.366360 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:21.367788 containerd[1500]: time="2024-12-13T13:22:21.367748169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7wxd2,Uid:a281b80b-e07d-4f13-a268-0a4e13e645aa,Namespace:kube-system,Attempt:0,}"
Dec 13 13:22:21.623457 containerd[1500]: time="2024-12-13T13:22:21.623293563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:22:21.623457 containerd[1500]: time="2024-12-13T13:22:21.623354809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:22:21.623457 containerd[1500]: time="2024-12-13T13:22:21.623368274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:22:21.623611 containerd[1500]: time="2024-12-13T13:22:21.623445671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:22:21.645654 systemd[1]: Started cri-containerd-30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d.scope - libcontainer container 30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d.
Dec 13 13:22:21.666186 containerd[1500]: time="2024-12-13T13:22:21.666148496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7wxd2,Uid:a281b80b-e07d-4f13-a268-0a4e13e645aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d\""
Dec 13 13:22:21.666793 kubelet[2684]: E1213 13:22:21.666776 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:21.668454 containerd[1500]: time="2024-12-13T13:22:21.668429656Z" level=info msg="CreateContainer within sandbox \"30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 13:22:21.687840 containerd[1500]: time="2024-12-13T13:22:21.687777986Z" level=info msg="CreateContainer within sandbox \"30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c04e5dba21b9ab5d1f609e6247d422d0c23e5f6a4159cb4561af4fdf77e00632\""
Dec 13 13:22:21.688345 containerd[1500]: time="2024-12-13T13:22:21.688319384Z" level=info msg="StartContainer for \"c04e5dba21b9ab5d1f609e6247d422d0c23e5f6a4159cb4561af4fdf77e00632\""
Dec 13 13:22:21.719690 systemd[1]: Started cri-containerd-c04e5dba21b9ab5d1f609e6247d422d0c23e5f6a4159cb4561af4fdf77e00632.scope - libcontainer container c04e5dba21b9ab5d1f609e6247d422d0c23e5f6a4159cb4561af4fdf77e00632.
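
This is the CRI pod-start flow: RunPodSandbox creates one pause sandbox (30645b29…) and CreateContainer/StartContainer then run the first init container (c04e5dba…) inside it. The same objects can be inspected from the node with crictl, the CRI command-line client commonly used with containerd; a sketch, assuming crictl is installed and configured for the containerd socket (the JSON field names follow what crictl inspect currently prints and may vary by version):

    import json
    import subprocess

    def crictl(*args: str) -> str:
        return subprocess.run(["crictl", *args], capture_output=True,
                              text=True, check=True).stdout

    # Sandbox ID for the pod, then the state of each container inside it.
    sandbox = crictl("pods", "--name", "cilium-7wxd2", "-q").split()[0]
    for cid in crictl("ps", "-a", "--pod", sandbox, "-q").split():
        status = json.loads(crictl("inspect", cid))["status"]
        print(cid[:12], status["metadata"]["name"], status["state"])
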
Dec 13 13:22:21.745161 containerd[1500]: time="2024-12-13T13:22:21.745116765Z" level=info msg="StartContainer for \"c04e5dba21b9ab5d1f609e6247d422d0c23e5f6a4159cb4561af4fdf77e00632\" returns successfully"
Dec 13 13:22:21.752233 systemd[1]: cri-containerd-c04e5dba21b9ab5d1f609e6247d422d0c23e5f6a4159cb4561af4fdf77e00632.scope: Deactivated successfully.
Dec 13 13:22:21.784329 containerd[1500]: time="2024-12-13T13:22:21.784260673Z" level=info msg="shim disconnected" id=c04e5dba21b9ab5d1f609e6247d422d0c23e5f6a4159cb4561af4fdf77e00632 namespace=k8s.io
Dec 13 13:22:21.784329 containerd[1500]: time="2024-12-13T13:22:21.784318574Z" level=warning msg="cleaning up after shim disconnected" id=c04e5dba21b9ab5d1f609e6247d422d0c23e5f6a4159cb4561af4fdf77e00632 namespace=k8s.io
Dec 13 13:22:21.784329 containerd[1500]: time="2024-12-13T13:22:21.784326759Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:22:21.876883 kubelet[2684]: E1213 13:22:21.876751 2684 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 13:22:22.098409 kubelet[2684]: E1213 13:22:22.098376 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:22.101162 containerd[1500]: time="2024-12-13T13:22:22.101105233Z" level=info msg="CreateContainer within sandbox \"30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 13:22:22.380331 containerd[1500]: time="2024-12-13T13:22:22.380268607Z" level=info msg="CreateContainer within sandbox \"30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1939aa98e1e8504cea1f611881a09b5ca6524c69066118eb6a25be7d7906a2a2\""
Dec 13 13:22:22.380872 containerd[1500]: time="2024-12-13T13:22:22.380840813Z" level=info msg="StartContainer for \"1939aa98e1e8504cea1f611881a09b5ca6524c69066118eb6a25be7d7906a2a2\""
Dec 13 13:22:22.413672 systemd[1]: Started cri-containerd-1939aa98e1e8504cea1f611881a09b5ca6524c69066118eb6a25be7d7906a2a2.scope - libcontainer container 1939aa98e1e8504cea1f611881a09b5ca6524c69066118eb6a25be7d7906a2a2.
Dec 13 13:22:22.444333 systemd[1]: cri-containerd-1939aa98e1e8504cea1f611881a09b5ca6524c69066118eb6a25be7d7906a2a2.scope: Deactivated successfully.
Dec 13 13:22:22.660857 containerd[1500]: time="2024-12-13T13:22:22.660265654Z" level=info msg="StartContainer for \"1939aa98e1e8504cea1f611881a09b5ca6524c69066118eb6a25be7d7906a2a2\" returns successfully"
Dec 13 13:22:22.678650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1939aa98e1e8504cea1f611881a09b5ca6524c69066118eb6a25be7d7906a2a2-rootfs.mount: Deactivated successfully.
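
Each cilium init container in this sequence is meant to be short-lived: its cri-containerd-….scope starts, StartContainer returns, systemd reports the scope deactivated within milliseconds, and the "shim disconnected" messages that follow are routine teardown rather than failures. Diffing the journal timestamps for mount-cgroup (scope started 13:22:21.719690, deactivated 13:22:21.752233) puts its lifetime around 33 ms; a sketch of that arithmetic (the syslog prefix omits the year, so 2024 is assumed):

    from datetime import datetime

    def ts(stamp: str, year: int = 2024) -> datetime:
        return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

    started = ts("Dec 13 13:22:21.719690")  # scope started
    stopped = ts("Dec 13 13:22:21.752233")  # scope deactivated
    print(f"{(stopped - started).total_seconds() * 1000:.1f} ms")  # 32.5 ms
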
Dec 13 13:22:22.922148 containerd[1500]: time="2024-12-13T13:22:22.921990403Z" level=info msg="shim disconnected" id=1939aa98e1e8504cea1f611881a09b5ca6524c69066118eb6a25be7d7906a2a2 namespace=k8s.io
Dec 13 13:22:22.922148 containerd[1500]: time="2024-12-13T13:22:22.922053231Z" level=warning msg="cleaning up after shim disconnected" id=1939aa98e1e8504cea1f611881a09b5ca6524c69066118eb6a25be7d7906a2a2 namespace=k8s.io
Dec 13 13:22:22.922148 containerd[1500]: time="2024-12-13T13:22:22.922063651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:22:23.101153 kubelet[2684]: E1213 13:22:23.101123 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:23.103102 containerd[1500]: time="2024-12-13T13:22:23.103071683Z" level=info msg="CreateContainer within sandbox \"30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 13:22:23.218745 containerd[1500]: time="2024-12-13T13:22:23.218673507Z" level=info msg="CreateContainer within sandbox \"30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"833f314b8149380d4e50a20543565cd4af31a99a73219d956f99dffd1631e0b1\""
Dec 13 13:22:23.219719 containerd[1500]: time="2024-12-13T13:22:23.219668585Z" level=info msg="StartContainer for \"833f314b8149380d4e50a20543565cd4af31a99a73219d956f99dffd1631e0b1\""
Dec 13 13:22:23.261739 systemd[1]: Started cri-containerd-833f314b8149380d4e50a20543565cd4af31a99a73219d956f99dffd1631e0b1.scope - libcontainer container 833f314b8149380d4e50a20543565cd4af31a99a73219d956f99dffd1631e0b1.
Dec 13 13:22:23.295313 systemd[1]: cri-containerd-833f314b8149380d4e50a20543565cd4af31a99a73219d956f99dffd1631e0b1.scope: Deactivated successfully.
Dec 13 13:22:23.295735 containerd[1500]: time="2024-12-13T13:22:23.295697888Z" level=info msg="StartContainer for \"833f314b8149380d4e50a20543565cd4af31a99a73219d956f99dffd1631e0b1\" returns successfully"
Dec 13 13:22:23.318419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-833f314b8149380d4e50a20543565cd4af31a99a73219d956f99dffd1631e0b1-rootfs.mount: Deactivated successfully.
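
mount-bpf-fs, the third init container here, exists to guarantee the BPF filesystem is mounted at /sys/fs/bpf so Cilium's pinned eBPF maps survive agent restarts. A host-side Python sketch of the equivalent idempotent check, not Cilium's actual implementation (needs root; /sys/fs/bpf is Cilium's documented default mountpoint):

    import subprocess

    def ensure_bpffs(mountpoint: str = "/sys/fs/bpf") -> None:
        with open("/proc/mounts") as mounts:
            for entry in mounts:
                _, where, fstype, *_ = entry.split()
                if fstype == "bpf" and where == mountpoint:
                    return  # already mounted; nothing to do
        subprocess.run(["mount", "-t", "bpf", "bpf", mountpoint], check=True)

    ensure_bpffs()
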
Dec 13 13:22:23.327575 containerd[1500]: time="2024-12-13T13:22:23.327474799Z" level=info msg="shim disconnected" id=833f314b8149380d4e50a20543565cd4af31a99a73219d956f99dffd1631e0b1 namespace=k8s.io
Dec 13 13:22:23.327686 containerd[1500]: time="2024-12-13T13:22:23.327581371Z" level=warning msg="cleaning up after shim disconnected" id=833f314b8149380d4e50a20543565cd4af31a99a73219d956f99dffd1631e0b1 namespace=k8s.io
Dec 13 13:22:23.327686 containerd[1500]: time="2024-12-13T13:22:23.327594606Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:22:23.820161 kubelet[2684]: E1213 13:22:23.820121 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:23.950418 kubelet[2684]: I1213 13:22:23.950384 2684 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T13:22:23Z","lastTransitionTime":"2024-12-13T13:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 13:22:24.105253 kubelet[2684]: E1213 13:22:24.105116 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:24.106910 containerd[1500]: time="2024-12-13T13:22:24.106863453Z" level=info msg="CreateContainer within sandbox \"30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 13:22:24.121755 containerd[1500]: time="2024-12-13T13:22:24.121702691Z" level=info msg="CreateContainer within sandbox \"30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4307f62988eb98a75d3432a32b151cbcf430e27984d51564f4b0cb8a49942a59\""
Dec 13 13:22:24.122336 containerd[1500]: time="2024-12-13T13:22:24.122280798Z" level=info msg="StartContainer for \"4307f62988eb98a75d3432a32b151cbcf430e27984d51564f4b0cb8a49942a59\""
Dec 13 13:22:24.158664 systemd[1]: Started cri-containerd-4307f62988eb98a75d3432a32b151cbcf430e27984d51564f4b0cb8a49942a59.scope - libcontainer container 4307f62988eb98a75d3432a32b151cbcf430e27984d51564f4b0cb8a49942a59.
Dec 13 13:22:24.184138 systemd[1]: cri-containerd-4307f62988eb98a75d3432a32b151cbcf430e27984d51564f4b0cb8a49942a59.scope: Deactivated successfully.
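
setters.go:568 logs the node's new Ready condition verbatim as JSON, so it can be lifted out of the journal and inspected directly: the node went NotReady only because the CNI plugin (Cilium, still working through its init containers here) has not initialized yet. A sketch using the condition exactly as logged:

    import json

    condition = json.loads(
        '{"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2024-12-13T13:22:23Z",'
        '"lastTransitionTime":"2024-12-13T13:22:23Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false '
        'reason:NetworkPluginNotReady message:Network plugin returns error: '
        'cni plugin not initialized"}')
    assert condition["status"] == "False"
    print(condition["reason"], "-", condition["message"])
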
Dec 13 13:22:24.186765 containerd[1500]: time="2024-12-13T13:22:24.186719197Z" level=info msg="StartContainer for \"4307f62988eb98a75d3432a32b151cbcf430e27984d51564f4b0cb8a49942a59\" returns successfully"
Dec 13 13:22:24.210074 containerd[1500]: time="2024-12-13T13:22:24.209999777Z" level=info msg="shim disconnected" id=4307f62988eb98a75d3432a32b151cbcf430e27984d51564f4b0cb8a49942a59 namespace=k8s.io
Dec 13 13:22:24.210074 containerd[1500]: time="2024-12-13T13:22:24.210069298Z" level=warning msg="cleaning up after shim disconnected" id=4307f62988eb98a75d3432a32b151cbcf430e27984d51564f4b0cb8a49942a59 namespace=k8s.io
Dec 13 13:22:24.210340 containerd[1500]: time="2024-12-13T13:22:24.210084968Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:22:24.243457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4307f62988eb98a75d3432a32b151cbcf430e27984d51564f4b0cb8a49942a59-rootfs.mount: Deactivated successfully.
Dec 13 13:22:25.109066 kubelet[2684]: E1213 13:22:25.109033 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:25.111417 containerd[1500]: time="2024-12-13T13:22:25.111363921Z" level=info msg="CreateContainer within sandbox \"30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 13:22:25.394264 containerd[1500]: time="2024-12-13T13:22:25.394125261Z" level=info msg="CreateContainer within sandbox \"30645b29396204abc62ab765013cb6c6e47aaa3cbc2bf6f927cf44c14cf99d2d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bfd27df2159d827d79e09f3b57c151269781e6e2561a70c27f558260ee2231f0\""
Dec 13 13:22:25.394848 containerd[1500]: time="2024-12-13T13:22:25.394814588Z" level=info msg="StartContainer for \"bfd27df2159d827d79e09f3b57c151269781e6e2561a70c27f558260ee2231f0\""
Dec 13 13:22:25.428699 systemd[1]: Started cri-containerd-bfd27df2159d827d79e09f3b57c151269781e6e2561a70c27f558260ee2231f0.scope - libcontainer container bfd27df2159d827d79e09f3b57c151269781e6e2561a70c27f558260ee2231f0.
Dec 13 13:22:25.462362 containerd[1500]: time="2024-12-13T13:22:25.462315555Z" level=info msg="StartContainer for \"bfd27df2159d827d79e09f3b57c151269781e6e2561a70c27f558260ee2231f0\" returns successfully"
Dec 13 13:22:25.889567 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 13:22:26.115878 kubelet[2684]: E1213 13:22:26.115842 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:26.131735 kubelet[2684]: I1213 13:22:26.131675 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7wxd2" podStartSLOduration=5.131633642 podStartE2EDuration="5.131633642s" podCreationTimestamp="2024-12-13 13:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:22:26.131404287 +0000 UTC m=+94.411216903" watchObservedRunningTime="2024-12-13 13:22:26.131633642 +0000 UTC m=+94.411446259"
Dec 13 13:22:27.367748 kubelet[2684]: E1213 13:22:27.367716 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:28.983686 systemd-networkd[1410]: lxc_health: Link UP
Dec 13 13:22:28.993817 systemd-networkd[1410]: lxc_health: Gained carrier
Dec 13 13:22:29.370566 kubelet[2684]: E1213 13:22:29.370104 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:30.123284 kubelet[2684]: E1213 13:22:30.123242 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:30.263706 systemd-networkd[1410]: lxc_health: Gained IPv6LL
Dec 13 13:22:31.124596 kubelet[2684]: E1213 13:22:31.124548 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:22:36.334039 sshd[4538]: Connection closed by 10.0.0.1 port 38958
Dec 13 13:22:36.334572 sshd-session[4531]: pam_unix(sshd:session): session closed for user core
Dec 13 13:22:36.338890 systemd[1]: sshd@29-10.0.0.28:22-10.0.0.1:38958.service: Deactivated successfully.
Dec 13 13:22:36.341215 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 13:22:36.341908 systemd-logind[1478]: Session 30 logged out. Waiting for processes to exit.
Dec 13 13:22:36.342734 systemd-logind[1478]: Removed session 30.
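
The pod_startup_latency_tracker entry above reports podStartSLOduration=5.131633642 with zero-valued firstStartedPulling/lastFinishedPulling, meaning no image-pull time was excluded, so the figure should simply be watchObservedRunningTime minus podCreationTimestamp. A quick check, with the timestamps truncated to microseconds (all that Python's datetime stores):

    from datetime import datetime

    created = datetime(2024, 12, 13, 13, 22, 21)           # podCreationTimestamp
    observed = datetime(2024, 12, 13, 13, 22, 26, 131633)  # watchObservedRunningTime
    print((observed - created).total_seconds())  # 5.131633, matching the logged 5.131633642s
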