Nov 12 20:52:58.206291 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024 Nov 12 20:52:58.206323 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:52:58.206341 kernel: BIOS-provided physical RAM map: Nov 12 20:52:58.206351 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 12 20:52:58.206360 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 12 20:52:58.206369 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 12 20:52:58.206380 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Nov 12 20:52:58.206390 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Nov 12 20:52:58.206410 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 12 20:52:58.206424 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 12 20:52:58.206438 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 12 20:52:58.206447 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 12 20:52:58.206456 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 12 20:52:58.206465 kernel: NX (Execute Disable) protection: active Nov 12 20:52:58.206493 kernel: APIC: Static calls initialized Nov 12 20:52:58.206511 kernel: SMBIOS 2.8 present. 
Nov 12 20:52:58.206526 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Nov 12 20:52:58.206536 kernel: Hypervisor detected: KVM Nov 12 20:52:58.206545 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 12 20:52:58.206554 kernel: kvm-clock: using sched offset of 3164815948 cycles Nov 12 20:52:58.206565 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 12 20:52:58.206575 kernel: tsc: Detected 2794.744 MHz processor Nov 12 20:52:58.206585 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 20:52:58.206595 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 20:52:58.206610 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 12 20:52:58.206621 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 12 20:52:58.206632 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 20:52:58.206642 kernel: Using GB pages for direct mapping Nov 12 20:52:58.206653 kernel: ACPI: Early table checksum verification disabled Nov 12 20:52:58.206663 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Nov 12 20:52:58.206673 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:52:58.206683 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:52:58.206693 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:52:58.206708 kernel: ACPI: FACS 0x000000009CFE0000 000040 Nov 12 20:52:58.206718 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:52:58.206727 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:52:58.206737 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:52:58.206746 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Nov 12 20:52:58.206756 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Nov 12 20:52:58.206765 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Nov 12 20:52:58.206785 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Nov 12 20:52:58.206800 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Nov 12 20:52:58.206810 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Nov 12 20:52:58.206822 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Nov 12 20:52:58.206833 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Nov 12 20:52:58.206844 kernel: No NUMA configuration found Nov 12 20:52:58.206855 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Nov 12 20:52:58.206871 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Nov 12 20:52:58.206882 kernel: Zone ranges: Nov 12 20:52:58.206893 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 20:52:58.206903 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Nov 12 20:52:58.206913 kernel: Normal empty Nov 12 20:52:58.206924 kernel: Movable zone start for each node Nov 12 20:52:58.206934 kernel: Early memory node ranges Nov 12 20:52:58.206944 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 12 20:52:58.206955 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Nov 12 20:52:58.206966 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Nov 12 20:52:58.206987 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:52:58.206998 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 12 20:52:58.207009 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 12 20:52:58.207020 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 12 20:52:58.207030 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 12 20:52:58.207040 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 12 20:52:58.207051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 12 20:52:58.207061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 12 20:52:58.207071 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 20:52:58.207086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 12 20:52:58.207097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 12 20:52:58.207108 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 20:52:58.207119 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 12 20:52:58.207130 kernel: TSC deadline timer available Nov 12 20:52:58.207141 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 12 20:52:58.207151 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 12 20:52:58.207162 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 12 20:52:58.207173 kernel: kvm-guest: setup PV sched yield Nov 12 20:52:58.207194 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 12 20:52:58.207204 kernel: Booting paravirtualized kernel on KVM Nov 12 20:52:58.207215 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 20:52:58.207225 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 12 20:52:58.207235 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Nov 12 20:52:58.207244 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Nov 12 20:52:58.207254 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 12 20:52:58.207264 kernel: kvm-guest: PV spinlocks enabled Nov 12 20:52:58.207274 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 12 20:52:58.207290 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:52:58.207302 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 20:52:58.207313 kernel: random: crng init done Nov 12 20:52:58.207324 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 20:52:58.207336 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 20:52:58.207347 kernel: Fallback order for Node 0: 0 Nov 12 20:52:58.207358 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Nov 12 20:52:58.207368 kernel: Policy zone: DMA32 Nov 12 20:52:58.207384 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 20:52:58.207406 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 136900K reserved, 0K cma-reserved) Nov 12 20:52:58.207418 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 12 20:52:58.207461 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 20:52:58.207550 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 20:52:58.207567 kernel: Dynamic Preempt: voluntary Nov 12 20:52:58.207581 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 20:52:58.207595 kernel: rcu: RCU event tracing is enabled. Nov 12 20:52:58.207609 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 12 20:52:58.207630 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 20:52:58.207651 kernel: Rude variant of Tasks RCU enabled. Nov 12 20:52:58.207665 kernel: Tracing variant of Tasks RCU enabled. Nov 12 20:52:58.207682 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 12 20:52:58.207693 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 12 20:52:58.207704 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 12 20:52:58.207716 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 12 20:52:58.207726 kernel: Console: colour VGA+ 80x25 Nov 12 20:52:58.207736 kernel: printk: console [ttyS0] enabled Nov 12 20:52:58.207753 kernel: ACPI: Core revision 20230628 Nov 12 20:52:58.207765 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 12 20:52:58.207776 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 20:52:58.207786 kernel: x2apic enabled Nov 12 20:52:58.207797 kernel: APIC: Switched APIC routing to: physical x2apic Nov 12 20:52:58.207808 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 12 20:52:58.207818 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 12 20:52:58.207828 kernel: kvm-guest: setup PV IPIs Nov 12 20:52:58.207853 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 12 20:52:58.207863 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 12 20:52:58.207874 kernel: Calibrating delay loop (skipped) preset value.. 
5589.48 BogoMIPS (lpj=2794744) Nov 12 20:52:58.207885 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 12 20:52:58.207900 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 12 20:52:58.207912 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 12 20:52:58.207923 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 20:52:58.207935 kernel: Spectre V2 : Mitigation: Retpolines Nov 12 20:52:58.207947 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 20:52:58.207963 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 20:52:58.207975 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 12 20:52:58.207986 kernel: RETBleed: Mitigation: untrained return thunk Nov 12 20:52:58.208003 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 20:52:58.208015 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 20:52:58.208026 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 12 20:52:58.208037 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 12 20:52:58.208048 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 12 20:52:58.208065 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 20:52:58.208077 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 20:52:58.208088 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 20:52:58.208100 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 20:52:58.208111 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Nov 12 20:52:58.208122 kernel: Freeing SMP alternatives memory: 32K Nov 12 20:52:58.208133 kernel: pid_max: default: 32768 minimum: 301 Nov 12 20:52:58.208143 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 20:52:58.208154 kernel: landlock: Up and running. Nov 12 20:52:58.208169 kernel: SELinux: Initializing. Nov 12 20:52:58.208180 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 20:52:58.208191 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 20:52:58.208203 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 12 20:52:58.208214 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 20:52:58.208224 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 20:52:58.208240 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 20:52:58.208252 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 12 20:52:58.208268 kernel: ... version: 0 Nov 12 20:52:58.208280 kernel: ... bit width: 48 Nov 12 20:52:58.208291 kernel: ... generic registers: 6 Nov 12 20:52:58.208302 kernel: ... value mask: 0000ffffffffffff Nov 12 20:52:58.208313 kernel: ... max period: 00007fffffffffff Nov 12 20:52:58.208324 kernel: ... fixed-purpose events: 0 Nov 12 20:52:58.208334 kernel: ... event mask: 000000000000003f Nov 12 20:52:58.208345 kernel: signal: max sigframe size: 1776 Nov 12 20:52:58.208355 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:52:58.208366 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:52:58.208383 kernel: smp: Bringing up secondary CPUs ... Nov 12 20:52:58.208406 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:52:58.208419 kernel: .... 
node #0, CPUs: #1 #2 #3 Nov 12 20:52:58.208432 kernel: smp: Brought up 1 node, 4 CPUs Nov 12 20:52:58.208445 kernel: smpboot: Max logical packages: 1 Nov 12 20:52:58.208458 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS) Nov 12 20:52:58.208470 kernel: devtmpfs: initialized Nov 12 20:52:58.208502 kernel: x86/mm: Memory block size: 128MB Nov 12 20:52:58.208516 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:52:58.208553 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 12 20:52:58.208576 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:52:58.208587 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:52:58.208598 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:52:58.208608 kernel: audit: type=2000 audit(1731444776.601:1): state=initialized audit_enabled=0 res=1 Nov 12 20:52:58.208619 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:52:58.208629 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:52:58.208641 kernel: cpuidle: using governor menu Nov 12 20:52:58.208660 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:52:58.208676 kernel: dca service started, version 1.12.1 Nov 12 20:52:58.208688 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 12 20:52:58.208698 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 12 20:52:58.208710 kernel: PCI: Using configuration type 1 for base access Nov 12 20:52:58.208722 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 12 20:52:58.208732 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 20:52:58.208743 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 20:52:58.208755 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 20:52:58.208767 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 20:52:58.208783 kernel: ACPI: Added _OSI(Module Device) Nov 12 20:52:58.208795 kernel: ACPI: Added _OSI(Processor Device) Nov 12 20:52:58.208807 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 20:52:58.208819 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 20:52:58.208830 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 20:52:58.208842 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 20:52:58.208853 kernel: ACPI: Interpreter enabled Nov 12 20:52:58.208864 kernel: ACPI: PM: (supports S0 S3 S5) Nov 12 20:52:58.208875 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 20:52:58.208891 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 20:52:58.208902 kernel: PCI: Using E820 reservations for host bridge windows Nov 12 20:52:58.208914 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 12 20:52:58.208925 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 20:52:58.209221 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 20:52:58.209425 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 12 20:52:58.209655 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 12 20:52:58.209682 kernel: PCI host bridge to bus 0000:00 Nov 12 20:52:58.209885 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 12 20:52:58.210060 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 12 20:52:58.210222 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Nov 12 20:52:58.210404 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 12 20:52:58.210637 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 12 20:52:58.210814 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 12 20:52:58.210989 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 12 20:52:58.211222 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 12 20:52:58.211473 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 12 20:52:58.211687 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Nov 12 20:52:58.211868 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Nov 12 20:52:58.212053 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Nov 12 20:52:58.212235 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 12 20:52:58.212501 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 12 20:52:58.212694 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Nov 12 20:52:58.212881 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Nov 12 20:52:58.213068 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Nov 12 20:52:58.213268 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 12 20:52:58.213470 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Nov 12 20:52:58.213734 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Nov 12 20:52:58.213921 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Nov 12 20:52:58.214145 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 12 20:52:58.214326 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Nov 12 20:52:58.214554 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Nov 12 20:52:58.214712 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit 
pref] Nov 12 20:52:58.214891 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Nov 12 20:52:58.215105 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 12 20:52:58.215289 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 12 20:52:58.215535 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 12 20:52:58.215723 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Nov 12 20:52:58.215893 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Nov 12 20:52:58.216045 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 12 20:52:58.216172 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 12 20:52:58.216189 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 12 20:52:58.216198 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 12 20:52:58.216206 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 12 20:52:58.216216 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 12 20:52:58.216227 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 12 20:52:58.216237 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 12 20:52:58.216248 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 12 20:52:58.216260 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 12 20:52:58.216276 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 12 20:52:58.216288 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 12 20:52:58.216300 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 12 20:52:58.216311 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 12 20:52:58.216323 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 12 20:52:58.216334 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 12 20:52:58.216346 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 12 20:52:58.216358 kernel: ACPI: PCI: 
Interrupt link GSIH configured for IRQ 23 Nov 12 20:52:58.216369 kernel: iommu: Default domain type: Translated Nov 12 20:52:58.216380 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 20:52:58.216409 kernel: PCI: Using ACPI for IRQ routing Nov 12 20:52:58.216421 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 12 20:52:58.216433 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 12 20:52:58.216444 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Nov 12 20:52:58.216693 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 12 20:52:58.216879 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 12 20:52:58.217014 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 12 20:52:58.217028 kernel: vgaarb: loaded Nov 12 20:52:58.217047 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 12 20:52:58.217058 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 12 20:52:58.217069 kernel: clocksource: Switched to clocksource kvm-clock Nov 12 20:52:58.217081 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 20:52:58.217093 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 20:52:58.217104 kernel: pnp: PnP ACPI init Nov 12 20:52:58.217341 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 12 20:52:58.217362 kernel: pnp: PnP ACPI: found 6 devices Nov 12 20:52:58.217381 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 20:52:58.217404 kernel: NET: Registered PF_INET protocol family Nov 12 20:52:58.217417 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 20:52:58.217429 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 12 20:52:58.217440 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 20:52:58.217451 kernel: TCP established hash table entries: 32768 
(order: 6, 262144 bytes, linear) Nov 12 20:52:58.217463 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 12 20:52:58.217496 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 12 20:52:58.217506 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 20:52:58.217522 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 20:52:58.217532 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 20:52:58.217543 kernel: NET: Registered PF_XDP protocol family Nov 12 20:52:58.217709 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 12 20:52:58.217876 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 12 20:52:58.218042 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 12 20:52:58.218202 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 12 20:52:58.218334 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 12 20:52:58.218560 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 12 20:52:58.218579 kernel: PCI: CLS 0 bytes, default 64 Nov 12 20:52:58.218591 kernel: Initialise system trusted keyrings Nov 12 20:52:58.218602 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 12 20:52:58.218613 kernel: Key type asymmetric registered Nov 12 20:52:58.218625 kernel: Asymmetric key parser 'x509' registered Nov 12 20:52:58.218636 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 20:52:58.218647 kernel: io scheduler mq-deadline registered Nov 12 20:52:58.218658 kernel: io scheduler kyber registered Nov 12 20:52:58.218676 kernel: io scheduler bfq registered Nov 12 20:52:58.218687 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 20:52:58.218699 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 12 20:52:58.218710 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 12 20:52:58.218721 kernel: ACPI: 
\_SB_.GSIE: Enabled at IRQ 20 Nov 12 20:52:58.218731 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 20:52:58.218742 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 20:52:58.218753 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 12 20:52:58.218764 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 12 20:52:58.218779 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 12 20:52:58.218979 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 12 20:52:58.218994 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 12 20:52:58.219114 kernel: rtc_cmos 00:04: registered as rtc0 Nov 12 20:52:58.219286 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T20:52:57 UTC (1731444777) Nov 12 20:52:58.219469 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 12 20:52:58.219514 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 12 20:52:58.219527 kernel: NET: Registered PF_INET6 protocol family Nov 12 20:52:58.219544 kernel: Segment Routing with IPv6 Nov 12 20:52:58.219556 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 20:52:58.219566 kernel: NET: Registered PF_PACKET protocol family Nov 12 20:52:58.219578 kernel: Key type dns_resolver registered Nov 12 20:52:58.219588 kernel: IPI shorthand broadcast: enabled Nov 12 20:52:58.219599 kernel: sched_clock: Marking stable (805003714, 241207241)->(1088490433, -42279478) Nov 12 20:52:58.219608 kernel: registered taskstats version 1 Nov 12 20:52:58.219616 kernel: Loading compiled-in X.509 certificates Nov 12 20:52:58.219624 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 20:52:58.219636 kernel: Key type .fscrypt registered Nov 12 20:52:58.219644 kernel: Key type fscrypt-provisioning registered Nov 12 20:52:58.219651 kernel: ima: No TPM chip found, activating 
TPM-bypass! Nov 12 20:52:58.219659 kernel: ima: Allocated hash algorithm: sha1 Nov 12 20:52:58.219667 kernel: ima: No architecture policies found Nov 12 20:52:58.219675 kernel: clk: Disabling unused clocks Nov 12 20:52:58.219683 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 12 20:52:58.219692 kernel: Write protecting the kernel read-only data: 36864k Nov 12 20:52:58.219703 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 12 20:52:58.219717 kernel: Run /init as init process Nov 12 20:52:58.219727 kernel: with arguments: Nov 12 20:52:58.219738 kernel: /init Nov 12 20:52:58.219748 kernel: with environment: Nov 12 20:52:58.219759 kernel: HOME=/ Nov 12 20:52:58.219769 kernel: TERM=linux Nov 12 20:52:58.219778 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 20:52:58.219788 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:52:58.219801 systemd[1]: Detected virtualization kvm. Nov 12 20:52:58.219810 systemd[1]: Detected architecture x86-64. Nov 12 20:52:58.219818 systemd[1]: Running in initrd. Nov 12 20:52:58.219826 systemd[1]: No hostname configured, using default hostname. Nov 12 20:52:58.219834 systemd[1]: Hostname set to . Nov 12 20:52:58.219842 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:52:58.219850 systemd[1]: Queued start job for default target initrd.target. Nov 12 20:52:58.219859 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:52:58.219870 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 12 20:52:58.219879 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 20:52:58.219899 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:52:58.219911 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 20:52:58.219920 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 20:52:58.219937 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 20:52:58.219949 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 20:52:58.219962 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:52:58.219975 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:52:58.219988 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:52:58.220000 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:52:58.220013 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:52:58.220025 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:52:58.220040 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:52:58.220052 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:52:58.220063 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 20:52:58.220075 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 20:52:58.220088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:52:58.220102 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:52:58.220115 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 12 20:52:58.220127 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:52:58.220139 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:52:58.220155 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:52:58.220167 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:52:58.220179 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:52:58.220190 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:52:58.220201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:52:58.220209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:52:58.220218 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:52:58.220226 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:52:58.220238 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:52:58.220247 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:52:58.220281 systemd-journald[193]: Collecting audit messages is disabled.
Nov 12 20:52:58.220303 systemd-journald[193]: Journal started
Nov 12 20:52:58.220324 systemd-journald[193]: Runtime Journal (/run/log/journal/9a7908eff68945acbedb9425cbc4678f) is 6.0M, max 48.4M, 42.3M free.
Nov 12 20:52:58.206147 systemd-modules-load[194]: Inserted module 'overlay'
Nov 12 20:52:58.243080 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:52:58.243117 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:52:58.243136 kernel: Bridge firewalling registered
Nov 12 20:52:58.237257 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 12 20:52:58.242774 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:52:58.245652 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:52:58.260776 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:52:58.262557 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:52:58.270073 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:52:58.280050 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:52:58.291690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:52:58.292158 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:52:58.292765 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:52:58.294703 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:52:58.299922 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:52:58.303497 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:52:58.305666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:52:58.322833 dracut-cmdline[228]: dracut-dracut-053
Nov 12 20:52:58.326822 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:52:58.342037 systemd-resolved[229]: Positive Trust Anchors:
Nov 12 20:52:58.342056 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:52:58.342098 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:52:58.345433 systemd-resolved[229]: Defaulting to hostname 'linux'.
Nov 12 20:52:58.346997 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:52:58.353240 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:52:58.419524 kernel: SCSI subsystem initialized
Nov 12 20:52:58.430512 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:52:58.441505 kernel: iscsi: registered transport (tcp)
Nov 12 20:52:58.463511 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:52:58.463560 kernel: QLogic iSCSI HBA Driver
Nov 12 20:52:58.516371 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:52:58.535682 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:52:58.561806 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:52:58.561879 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:52:58.562896 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:52:58.631518 kernel: raid6: avx2x4 gen() 27811 MB/s
Nov 12 20:52:58.648503 kernel: raid6: avx2x2 gen() 28256 MB/s
Nov 12 20:52:58.665592 kernel: raid6: avx2x1 gen() 25914 MB/s
Nov 12 20:52:58.665611 kernel: raid6: using algorithm avx2x2 gen() 28256 MB/s
Nov 12 20:52:58.683604 kernel: raid6: .... xor() 19907 MB/s, rmw enabled
Nov 12 20:52:58.683646 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:52:58.705522 kernel: xor: automatically using best checksumming function avx
Nov 12 20:52:58.870521 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:52:58.885939 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:52:58.920676 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:52:58.936252 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Nov 12 20:52:58.941243 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:52:58.958791 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:52:58.977677 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Nov 12 20:52:59.024587 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:52:59.042692 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:52:59.118335 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:52:59.130146 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:52:59.147631 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:52:59.150970 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:52:59.151109 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:52:59.151515 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:52:59.161414 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:52:59.170500 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 12 20:52:59.195142 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 20:52:59.195366 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:52:59.195393 kernel: GPT:9289727 != 19775487
Nov 12 20:52:59.195408 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:52:59.195423 kernel: GPT:9289727 != 19775487
Nov 12 20:52:59.195436 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:52:59.195450 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:52:59.195464 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:52:59.172077 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:52:59.196794 kernel: libata version 3.00 loaded.
Nov 12 20:52:59.201820 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:52:59.201990 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:52:59.206707 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:52:59.216439 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:52:59.216468 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:52:59.212073 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:52:59.212291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:52:59.212421 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:52:59.221865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:52:59.234568 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (463)
Nov 12 20:52:59.242676 kernel: ahci 0000:00:1f.2: version 3.0
Nov 12 20:52:59.258688 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 12 20:52:59.258713 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 12 20:52:59.258936 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 12 20:52:59.259127 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (456)
Nov 12 20:52:59.259143 kernel: scsi host0: ahci
Nov 12 20:52:59.259363 kernel: scsi host1: ahci
Nov 12 20:52:59.259651 kernel: scsi host2: ahci
Nov 12 20:52:59.259876 kernel: scsi host3: ahci
Nov 12 20:52:59.260119 kernel: scsi host4: ahci
Nov 12 20:52:59.260332 kernel: scsi host5: ahci
Nov 12 20:52:59.260598 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Nov 12 20:52:59.260616 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Nov 12 20:52:59.260630 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Nov 12 20:52:59.260651 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Nov 12 20:52:59.260667 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Nov 12 20:52:59.260681 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Nov 12 20:52:59.257210 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 20:52:59.300716 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:52:59.310595 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 20:52:59.321944 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 20:52:59.323364 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 20:52:59.328120 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:52:59.343746 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:52:59.345939 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:52:59.353549 disk-uuid[552]: Primary Header is updated.
Nov 12 20:52:59.353549 disk-uuid[552]: Secondary Entries is updated.
Nov 12 20:52:59.353549 disk-uuid[552]: Secondary Header is updated.
Nov 12 20:52:59.357894 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:52:59.360504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:52:59.376168 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:52:59.575393 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 12 20:52:59.575493 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 12 20:52:59.575508 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 12 20:52:59.575521 kernel: ata3.00: applying bridge limits
Nov 12 20:52:59.575533 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 12 20:52:59.575546 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 12 20:52:59.576507 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 12 20:52:59.577502 kernel: ata3.00: configured for UDMA/100
Nov 12 20:52:59.578525 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 12 20:52:59.578627 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 12 20:52:59.624541 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 12 20:52:59.638396 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:52:59.638416 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 12 20:53:00.391400 disk-uuid[553]: The operation has completed successfully.
Nov 12 20:53:00.392614 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:53:00.417578 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:53:00.417775 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:53:00.450663 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:53:00.454045 sh[590]: Success
Nov 12 20:53:00.466511 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 12 20:53:00.503786 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:53:00.519255 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:53:00.522509 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:53:00.535968 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:53:00.536015 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:00.536039 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:53:00.537172 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:53:00.537912 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:53:00.542703 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:53:00.544265 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:53:00.554647 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:53:00.571398 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:53:00.578586 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:00.578620 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:00.578635 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:53:00.582512 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:53:00.591942 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:53:00.593751 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:00.691689 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:53:00.694136 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:53:00.706653 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:53:00.708570 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:53:00.781607 systemd-networkd[771]: lo: Link UP
Nov 12 20:53:00.781620 systemd-networkd[771]: lo: Gained carrier
Nov 12 20:53:00.783612 systemd-networkd[771]: Enumeration completed
Nov 12 20:53:00.784047 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:00.784051 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:53:00.784825 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:53:00.784976 systemd-networkd[771]: eth0: Link UP
Nov 12 20:53:00.784980 systemd-networkd[771]: eth0: Gained carrier
Nov 12 20:53:00.784987 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:00.793828 systemd[1]: Reached target network.target - Network.
Nov 12 20:53:00.834796 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:53:00.904096 ignition[770]: Ignition 2.19.0
Nov 12 20:53:00.904110 ignition[770]: Stage: fetch-offline
Nov 12 20:53:00.904159 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:00.904170 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:00.904365 ignition[770]: parsed url from cmdline: ""
Nov 12 20:53:00.904370 ignition[770]: no config URL provided
Nov 12 20:53:00.904376 ignition[770]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:53:00.904387 ignition[770]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:53:00.904438 ignition[770]: op(1): [started] loading QEMU firmware config module
Nov 12 20:53:00.904450 ignition[770]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 20:53:00.913070 ignition[770]: op(1): [finished] loading QEMU firmware config module
Nov 12 20:53:00.955235 ignition[770]: parsing config with SHA512: 3d5a50d16a96ea225000165e3ef58dca716d730d5ef12c01969796e90b2404ceb406ed3c0a0afe10a0634ca8ab3946d498f75f64d35761c972c8f45cbbdd7ce8
Nov 12 20:53:00.966559 unknown[770]: fetched base config from "system"
Nov 12 20:53:00.966573 unknown[770]: fetched user config from "qemu"
Nov 12 20:53:00.967512 ignition[770]: fetch-offline: fetch-offline passed
Nov 12 20:53:00.967907 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.134
Nov 12 20:53:00.967618 ignition[770]: Ignition finished successfully
Nov 12 20:53:00.967919 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Nov 12 20:53:00.970791 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:53:00.972579 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 20:53:00.981793 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:53:01.003287 ignition[782]: Ignition 2.19.0
Nov 12 20:53:01.003306 ignition[782]: Stage: kargs
Nov 12 20:53:01.003547 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:01.003560 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:01.004377 ignition[782]: kargs: kargs passed
Nov 12 20:53:01.004426 ignition[782]: Ignition finished successfully
Nov 12 20:53:01.008874 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:53:01.025737 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:53:01.101838 ignition[790]: Ignition 2.19.0
Nov 12 20:53:01.101851 ignition[790]: Stage: disks
Nov 12 20:53:01.102137 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:01.102150 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:01.103346 ignition[790]: disks: disks passed
Nov 12 20:53:01.105859 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:53:01.103398 ignition[790]: Ignition finished successfully
Nov 12 20:53:01.107781 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:53:01.109648 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:53:01.111705 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:53:01.113859 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:53:01.116107 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:53:01.127690 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:53:01.145052 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:53:01.152561 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:53:01.163925 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:53:01.285509 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:53:01.285623 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:53:01.286353 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:53:01.302626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:01.304858 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:53:01.305256 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:53:01.311183 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810)
Nov 12 20:53:01.305309 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:53:01.316132 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:01.316166 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:01.316184 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:53:01.305355 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:53:01.318503 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:53:01.320861 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:53:01.328013 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:53:01.330026 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:53:01.373068 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:53:01.378793 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:53:01.384434 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:53:01.388916 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:53:01.485574 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:53:01.502627 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:53:01.505905 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:53:01.514501 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:01.535738 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:53:01.536868 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:53:01.595049 ignition[928]: INFO : Ignition 2.19.0
Nov 12 20:53:01.595049 ignition[928]: INFO : Stage: mount
Nov 12 20:53:01.596897 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:01.596897 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:01.596897 ignition[928]: INFO : mount: mount passed
Nov 12 20:53:01.596897 ignition[928]: INFO : Ignition finished successfully
Nov 12 20:53:01.603787 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:53:01.620621 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:53:01.631685 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:01.647126 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939) Nov 12 20:53:01.647160 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:53:01.647176 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:53:01.648092 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:53:01.651519 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:53:01.653885 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 20:53:01.695461 ignition[956]: INFO : Ignition 2.19.0 Nov 12 20:53:01.695461 ignition[956]: INFO : Stage: files Nov 12 20:53:01.697368 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:53:01.697368 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:53:01.697368 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:53:01.701330 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:53:01.701330 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:53:01.701330 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:53:01.701330 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:53:01.707499 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:53:01.707499 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:53:01.707499 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:53:01.701467 unknown[956]: wrote ssh authorized keys file for user: core Nov 12 20:53:01.771051 
ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 20:53:01.849778 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:53:01.852077 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 20:53:01.852077 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 12 20:53:01.869819 systemd-networkd[771]: eth0: Gained IPv6LL Nov 12 20:53:02.230856 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 20:53:02.360693 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 20:53:02.360693 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:53:02.364889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Nov 12 20:53:02.771456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 12 20:53:03.054440 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:53:03.054440 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 12 20:53:03.058917 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:53:03.058917 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:53:03.058917 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 12 20:53:03.058917 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 12 20:53:03.058917 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:53:03.058917 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:53:03.058917 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 12 20:53:03.058917 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 20:53:03.085139 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:53:03.090702 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:53:03.092561 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 20:53:03.092561 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 12 20:53:03.092561 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:53:03.092561 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:53:03.092561 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:53:03.092561 ignition[956]: INFO : files: files passed Nov 12 20:53:03.092561 ignition[956]: INFO : Ignition finished successfully Nov 12 
20:53:03.094978 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:53:03.105694 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:53:03.109838 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 20:53:03.113166 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:53:03.114453 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:53:03.122258 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 20:53:03.126745 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:53:03.126745 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:53:03.130251 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:53:03.134615 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:53:03.134990 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:53:03.158048 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:53:03.189950 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:53:03.191087 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:53:03.193916 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:53:03.196056 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:53:03.198227 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:53:03.209790 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
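The Ignition `files` stage recorded above (files written, a sysext link created, the kubernetes image downloaded, `prepare-helm.service` preset to enabled and `coreos-metadata.service` preset to disabled) corresponds to a provisioning config along these lines. This is a hedged Butane-style reconstruction from the log messages only: file contents and unit bodies are not recorded in the log, and the exact `variant`/`version` values are assumptions.

```yaml
# Hedged reconstruction -- not the actual config used on this host.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /home/core/nfs-pvc.yaml
      # contents not recorded in the log
    - path: /etc/flatcar/update.conf
      # contents not recorded in the log
    - path: /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
      contents:
        # URL taken verbatim from the op(b) GET line above
        source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
      # unit body not recorded in the log
    - name: coreos-metadata.service
      enabled: false
```

Note that Ignition runs in the initramfs against `/sysroot`, which is why every path in the log carries the `/sysroot` prefix that the config itself would omit.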
Nov 12 20:53:03.227864 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:53:03.241925 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:53:03.258367 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:03.261614 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:03.264734 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:53:03.267164 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:53:03.268513 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:53:03.271615 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:53:03.273829 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:53:03.275822 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:53:03.278253 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:53:03.281139 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:53:03.283649 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:53:03.285887 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:53:03.288446 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:53:03.290619 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:53:03.292695 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:53:03.294367 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:53:03.295433 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:53:03.297808 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:03.300075 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:03.302548 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:53:03.303517 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:03.306121 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:53:03.307201 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:53:03.309623 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:53:03.310722 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:53:03.313277 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:53:03.315192 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:53:03.315442 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:03.319188 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:53:03.320278 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:53:03.322213 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:53:03.322367 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:53:03.324109 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:53:03.324261 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:53:03.325085 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:53:03.325251 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:53:03.328081 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:53:03.328233 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:53:03.338685 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:53:03.339666 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:53:03.341788 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:53:03.342021 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:03.343153 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:53:03.343543 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:53:03.352405 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:53:03.352574 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:53:03.376820 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:53:03.412328 ignition[1011]: INFO : Ignition 2.19.0
Nov 12 20:53:03.412328 ignition[1011]: INFO : Stage: umount
Nov 12 20:53:03.414542 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:03.414542 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:03.417577 ignition[1011]: INFO : umount: umount passed
Nov 12 20:53:03.418665 ignition[1011]: INFO : Ignition finished successfully
Nov 12 20:53:03.421298 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:53:03.421438 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:53:03.423651 systemd[1]: Stopped target network.target - Network.
Nov 12 20:53:03.425360 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:53:03.425442 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:53:03.427622 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:53:03.427692 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:53:03.429661 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:53:03.429728 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:53:03.431827 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:53:03.431892 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:53:03.434118 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:53:03.436166 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:03.437416 systemd-networkd[771]: eth0: DHCPv6 lease lost
Nov 12 20:53:03.441659 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:53:03.441812 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:53:03.443692 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:53:03.443834 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:03.447384 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:53:03.447455 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:03.461693 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:53:03.463026 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:53:03.463113 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:53:03.465963 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:53:03.466042 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:03.468617 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:53:03.468687 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:03.468819 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:53:03.468879 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:03.469382 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:03.480208 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:53:03.480410 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:53:03.493986 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:53:03.494286 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:03.497150 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:53:03.497219 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:03.499626 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:53:03.499689 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:03.502092 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:53:03.502185 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:53:03.505227 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:53:03.505325 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:53:03.507220 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:53:03.507296 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:03.517715 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:53:03.519136 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:53:03.519224 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:03.522210 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 12 20:53:03.522301 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:03.525040 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:53:03.525109 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:03.526735 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:03.526800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:03.530221 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:53:03.530389 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:53:03.638461 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:53:03.638620 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:53:03.641211 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:53:03.642521 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:53:03.642587 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:53:03.654817 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:53:03.664388 systemd[1]: Switching root.
Nov 12 20:53:03.698710 systemd-journald[193]: Journal stopped
Nov 12 20:53:05.039358 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:53:05.039463 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:53:05.039512 kernel: SELinux: policy capability open_perms=1
Nov 12 20:53:05.039531 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:53:05.039546 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:53:05.039565 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:53:05.039581 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:53:05.039597 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:53:05.039613 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:53:05.039633 kernel: audit: type=1403 audit(1731444784.235:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:53:05.039650 systemd[1]: Successfully loaded SELinux policy in 43.070ms.
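The gap between "Journal stopped" in the initrd and the first entry recorded after switch-root can be measured by parsing the syslog-style timestamp prefixes in this console log. A minimal Python sketch (the year is an assumption, since syslog-style stamps omit it; 2024 is taken from the kernel build date at the top of this log):

```python
from datetime import datetime

def parse_ts(entry, year=2024):
    """Parse the 'Nov 12 20:53:03.698710' prefix of a console log entry.

    Syslog-style timestamps carry no year, so one must be supplied.
    """
    stamp = " ".join(entry.split()[:3])  # e.g. "Nov 12 20:53:03.698710"
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

# Gap between the journal stopping in the initrd and the first
# message recorded after switch-root (both lines appear above):
stopped = parse_ts("Nov 12 20:53:03.698710 systemd-journald[193]: Journal stopped")
resumed = parse_ts("Nov 12 20:53:05.039358 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).")
gap = (resumed - stopped).total_seconds()  # ~1.34 s spent in the root switch
```

The same helper works for any pair of entries in the log, e.g. to time the Ignition `files` stage or the systemd reloads further down.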
Nov 12 20:53:05.039684 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.378ms.
Nov 12 20:53:05.039702 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:53:05.039720 systemd[1]: Detected virtualization kvm.
Nov 12 20:53:05.039736 systemd[1]: Detected architecture x86-64.
Nov 12 20:53:05.039753 systemd[1]: Detected first boot.
Nov 12 20:53:05.039769 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:53:05.039786 zram_generator::config[1055]: No configuration found.
Nov 12 20:53:05.039808 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:53:05.039825 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 20:53:05.039842 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 20:53:05.039867 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:53:05.039885 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:53:05.039902 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:53:05.039920 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:53:05.039936 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:53:05.039966 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:53:05.039984 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:53:05.040002 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:53:05.040018 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:53:05.040035 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:05.040052 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:05.040069 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:53:05.040085 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:53:05.040101 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:53:05.040128 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:53:05.040145 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:53:05.040161 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:05.040177 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 20:53:05.040192 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 20:53:05.040209 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:53:05.040233 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:53:05.040253 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:05.040270 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:53:05.040285 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:53:05.040300 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:53:05.040315 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:53:05.040333 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:53:05.040348 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:05.040364 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:05.040379 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:05.040395 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:53:05.040414 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:53:05.040430 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:53:05.040446 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:53:05.040461 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:05.040494 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:53:05.040511 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:53:05.040537 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:53:05.040559 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 20:53:05.040586 systemd[1]: Reached target machines.target - Containers.
Nov 12 20:53:05.040601 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:53:05.040616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:05.040632 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:53:05.040647 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:53:05.040662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:05.040677 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:53:05.040692 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:05.040711 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:53:05.040731 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:05.040747 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:53:05.040764 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 20:53:05.040780 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 20:53:05.040796 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 20:53:05.040812 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 20:53:05.040827 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:53:05.040843 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:53:05.040859 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:53:05.040878 kernel: loop: module loaded
Nov 12 20:53:05.040893 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:53:05.040909 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:53:05.040925 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 20:53:05.040941 systemd[1]: Stopped verity-setup.service.
Nov 12 20:53:05.040959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:05.040975 kernel: fuse: init (API version 7.39)
Nov 12 20:53:05.040991 kernel: ACPI: bus type drm_connector registered
Nov 12 20:53:05.041031 systemd-journald[1125]: Collecting audit messages is disabled.
Nov 12 20:53:05.041065 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:53:05.041082 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 20:53:05.041101 systemd-journald[1125]: Journal started
Nov 12 20:53:05.041130 systemd-journald[1125]: Runtime Journal (/run/log/journal/9a7908eff68945acbedb9425cbc4678f) is 6.0M, max 48.4M, 42.3M free.
Nov 12 20:53:04.773541 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 20:53:04.793057 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 12 20:53:04.793595 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 20:53:05.045690 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:53:05.046961 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 20:53:05.048696 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 20:53:05.050215 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 20:53:05.051695 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 20:53:05.053304 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 20:53:05.055190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:05.057645 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 20:53:05.057922 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 20:53:05.060018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:05.060270 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:05.062187 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:53:05.062395 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:53:05.064263 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:05.064522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:05.158674 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 20:53:05.158943 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 20:53:05.160931 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:05.161164 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:05.163263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:05.165239 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 20:53:05.167340 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 20:53:05.169757 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:05.194090 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 20:53:05.216810 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 20:53:05.220536 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 20:53:05.222019 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 20:53:05.222068 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:53:05.224818 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 20:53:05.228155 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 20:53:05.231114 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 20:53:05.232580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:53:05.236182 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 20:53:05.239804 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 20:53:05.241618 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:53:05.243961 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 20:53:05.244109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:53:05.248731 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:53:05.254748 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 20:53:05.261132 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:53:05.267654 systemd-journald[1125]: Time spent on flushing to /var/log/journal/9a7908eff68945acbedb9425cbc4678f is 31.386ms for 959 entries.
Nov 12 20:53:05.267654 systemd-journald[1125]: System Journal (/var/log/journal/9a7908eff68945acbedb9425cbc4678f) is 8.0M, max 195.6M, 187.6M free.
Nov 12 20:53:05.358202 systemd-journald[1125]: Received client request to flush runtime journal.
Nov 12 20:53:05.358260 kernel: loop0: detected capacity change from 0 to 205544
Nov 12 20:53:05.358282 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 20:53:05.267794 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 20:53:05.274450 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 20:53:05.276093 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 20:53:05.277850 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 20:53:05.292119 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 12 20:53:05.292618 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 20:53:05.319361 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 20:53:05.338466 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 20:53:05.340731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:05.344604 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Nov 12 20:53:05.344618 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Nov 12 20:53:05.352368 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:05.412618 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 20:53:05.414921 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 20:53:05.433256 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 20:53:05.436992 kernel: loop1: detected capacity change from 0 to 140768
Nov 12 20:53:05.434383 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 20:53:05.450388 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 20:53:05.458764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:53:05.475049 kernel: loop2: detected capacity change from 0 to 142488
Nov 12 20:53:05.498102 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Nov 12 20:53:05.498129 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Nov 12 20:53:05.528118 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:05.550512 kernel: loop3: detected capacity change from 0 to 205544
Nov 12 20:53:05.566526 kernel: loop4: detected capacity change from 0 to 140768
Nov 12 20:53:05.579536 kernel: loop5: detected capacity change from 0 to 142488
Nov 12 20:53:05.628629 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 12 20:53:05.629384 (sd-merge)[1196]: Merged extensions into '/usr'.
Nov 12 20:53:05.636872 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 20:53:05.636895 systemd[1]: Reloading...
Nov 12 20:53:05.762510 zram_generator::config[1228]: No configuration found.
Nov 12 20:53:05.842534 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 20:53:05.886171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:53:05.953916 systemd[1]: Reloading finished in 316 ms.
Nov 12 20:53:05.986841 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 20:53:05.988496 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 20:53:06.004813 systemd[1]: Starting ensure-sysext.service...
Nov 12 20:53:06.007434 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:53:06.014808 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Nov 12 20:53:06.014829 systemd[1]: Reloading...
Nov 12 20:53:06.090332 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 20:53:06.090722 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 20:53:06.091783 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 20:53:06.092086 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Nov 12 20:53:06.092168 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Nov 12 20:53:06.097178 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:53:06.097211 systemd-tmpfiles[1260]: Skipping /boot
Nov 12 20:53:06.115740 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:53:06.115757 systemd-tmpfiles[1260]: Skipping /boot
Nov 12 20:53:06.117524 zram_generator::config[1289]: No configuration found.
Nov 12 20:53:06.257140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:53:06.324347 systemd[1]: Reloading finished in 309 ms.
Nov 12 20:53:06.347768 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 20:53:06.366030 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:06.377707 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:53:06.381057 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 20:53:06.383899 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 20:53:06.388795 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:06.392330 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:06.404941 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 20:53:06.430591 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:06.430968 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:06.432456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:06.435777 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:06.441018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:06.442615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:53:06.444648 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 20:53:06.445983 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:06.447289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:06.447604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:06.454120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:06.454427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:06.458515 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 20:53:06.460767 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:06.460992 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:06.461429 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Nov 12 20:53:06.470413 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:06.470616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:06.474220 augenrules[1354]: No rules
Nov 12 20:53:06.476893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:06.480510 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:06.484468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:06.485800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:53:06.489864 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 20:53:06.491074 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:06.492798 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:53:06.495172 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 20:53:06.497185 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 20:53:06.499239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:06.508889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:06.509352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:06.512328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:06.512945 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:06.516272 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:06.516457 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:06.521374 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 20:53:06.540250 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 20:53:06.547447 systemd[1]: Finished ensure-sysext.service.
Nov 12 20:53:06.555444 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 20:53:06.556284 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:06.556511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:06.564828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:06.571814 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:53:06.580509 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1370)
Nov 12 20:53:06.583513 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1370)
Nov 12 20:53:06.587064 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:06.588540 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1373)
Nov 12 20:53:06.590945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:06.592443 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:53:06.595325 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:53:06.604304 systemd-resolved[1329]: Positive Trust Anchors:
Nov 12 20:53:06.604322 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:53:06.604354 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:53:06.604675 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 20:53:06.606065 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:53:06.606111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:06.606858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:06.607077 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:06.608890 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:53:06.609107 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:53:06.610833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:06.611044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:06.612898 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:06.613093 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:06.613512 systemd-resolved[1329]: Defaulting to hostname 'linux'.
Nov 12 20:53:06.616926 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:06.636589 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:06.652592 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:53:06.652700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:53:06.685506 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 12 20:53:06.691517 kernel: ACPI: button: Power Button [PWRF]
Nov 12 20:53:06.710576 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:53:06.738517 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 12 20:53:06.736721 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 20:53:06.753045 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 20:53:06.755172 systemd-networkd[1405]: lo: Link UP
Nov 12 20:53:06.755195 systemd-networkd[1405]: lo: Gained carrier
Nov 12 20:53:06.757581 systemd-networkd[1405]: Enumeration completed
Nov 12 20:53:06.757708 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:53:06.759072 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 20:53:06.759249 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:06.759261 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:53:06.760623 systemd[1]: Reached target network.target - Network.
Nov 12 20:53:06.761572 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 20:53:06.762043 systemd-networkd[1405]: eth0: Link UP
Nov 12 20:53:06.762056 systemd-networkd[1405]: eth0: Gained carrier
Nov 12 20:53:06.762069 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:06.773647 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 20:53:06.793735 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 12 20:53:06.794629 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 12 20:53:06.794819 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 12 20:53:06.783557 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:53:06.784637 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection.
Nov 12 20:53:06.806492 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 12 20:53:06.806596 systemd-timesyncd[1406]: Initial clock synchronization to Tue 2024-11-12 20:53:06.830415 UTC.
Nov 12 20:53:06.857795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:06.892654 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 20:53:06.903888 kernel: kvm_amd: TSC scaling supported
Nov 12 20:53:06.903954 kernel: kvm_amd: Nested Virtualization enabled
Nov 12 20:53:06.904039 kernel: kvm_amd: Nested Paging enabled
Nov 12 20:53:06.904059 kernel: kvm_amd: LBR virtualization supported
Nov 12 20:53:06.904502 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 12 20:53:06.904531 kernel: kvm_amd: Virtual GIF supported
Nov 12 20:53:06.924518 kernel: EDAC MC: Ver: 3.0.0
Nov 12 20:53:06.971374 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 20:53:07.003698 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 20:53:07.005413 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:07.016711 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:53:07.063171 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 20:53:07.064807 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:07.065953 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:53:07.067149 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 20:53:07.068430 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 20:53:07.069900 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 20:53:07.071138 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 20:53:07.072398 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 20:53:07.073656 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 20:53:07.073685 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:53:07.074661 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:53:07.076497 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 20:53:07.079700 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 20:53:07.089590 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 20:53:07.092769 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 20:53:07.094933 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 20:53:07.096440 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:53:07.097814 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:53:07.099058 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:53:07.099083 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:53:07.108684 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 20:53:07.111408 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 20:53:07.113795 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 20:53:07.115966 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:53:07.117205 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 20:53:07.120157 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 20:53:07.122926 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 20:53:07.128641 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 20:53:07.132611 jq[1438]: false
Nov 12 20:53:07.134930 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 20:53:07.147165 extend-filesystems[1439]: Found loop3
Nov 12 20:53:07.147165 extend-filesystems[1439]: Found loop4
Nov 12 20:53:07.164120 extend-filesystems[1439]: Found loop5
Nov 12 20:53:07.164120 extend-filesystems[1439]: Found sr0
Nov 12 20:53:07.164120 extend-filesystems[1439]: Found vda
Nov 12 20:53:07.164120 extend-filesystems[1439]: Found vda1
Nov 12 20:53:07.164120 extend-filesystems[1439]: Found vda2
Nov 12 20:53:07.164120 extend-filesystems[1439]: Found vda3
Nov 12 20:53:07.164120 extend-filesystems[1439]: Found usr
Nov 12 20:53:07.164120 extend-filesystems[1439]: Found vda4
Nov 12 20:53:07.164120 extend-filesystems[1439]: Found vda6
Nov 12 20:53:07.164120 extend-filesystems[1439]: Found vda7
Nov 12 20:53:07.164120 extend-filesystems[1439]: Found vda9
Nov 12 20:53:07.164120 extend-filesystems[1439]: Checking size of /dev/vda9
Nov 12 20:53:07.150884 dbus-daemon[1437]: [system] SELinux support is enabled
Nov 12 20:53:07.148724 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 20:53:07.155656 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 20:53:07.157737 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 20:53:07.158461 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 20:53:07.160750 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 20:53:07.165063 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 20:53:07.183902 update_engine[1453]: I20241112 20:53:07.183532 1453 main.cc:92] Flatcar Update Engine starting
Nov 12 20:53:07.174108 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:53:07.181280 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:53:07.186225 update_engine[1453]: I20241112 20:53:07.186183 1453 update_check_scheduler.cc:74] Next update check in 5m59s
Nov 12 20:53:07.188297 extend-filesystems[1439]: Resized partition /dev/vda9
Nov 12 20:53:07.190500 jq[1455]: true
Nov 12 20:53:07.198538 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:53:07.199029 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024)
Nov 12 20:53:07.206619 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1377)
Nov 12 20:53:07.199053 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:53:07.199705 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:53:07.199994 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:53:07.205171 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:53:07.205462 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:53:07.229062 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:53:07.233681 jq[1464]: true
Nov 12 20:53:07.248189 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 12 20:53:07.258022 tar[1462]: linux-amd64/helm
Nov 12 20:53:07.261847 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:53:07.263649 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 20:53:07.263684 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 20:53:07.278976 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 20:53:07.279008 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 20:53:07.299855 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 20:53:07.302188 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 12 20:53:07.302221 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 20:53:07.302592 systemd-logind[1451]: New seat seat0.
Nov 12 20:53:07.303815 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 20:53:07.483946 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 20:53:07.527906 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 20:53:07.532821 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 20:53:07.572524 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 12 20:53:07.573936 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 20:53:07.585538 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 20:53:07.585819 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 20:53:07.589546 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 20:53:07.617397 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 20:53:07.661034 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 20:53:07.668290 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 20:53:07.669757 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 20:53:07.755530 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 12 20:53:07.755530 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 12 20:53:07.755530 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 12 20:53:07.761151 extend-filesystems[1439]: Resized filesystem in /dev/vda9
Nov 12 20:53:07.755677 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 20:53:07.758526 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 20:53:07.764946 bash[1491]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:53:07.767256 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 20:53:07.769613 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 12 20:53:07.847741 containerd[1465]: time="2024-11-12T20:53:07.847628614Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 20:53:07.872535 containerd[1465]: time="2024-11-12T20:53:07.872429668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:07.874654 containerd[1465]: time="2024-11-12T20:53:07.874605498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:07.874654 containerd[1465]: time="2024-11-12T20:53:07.874640065Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 20:53:07.874726 containerd[1465]: time="2024-11-12T20:53:07.874657995Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 20:53:07.874938 containerd[1465]: time="2024-11-12T20:53:07.874910911Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 20:53:07.874963 containerd[1465]: time="2024-11-12T20:53:07.874935655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:07.875043 containerd[1465]: time="2024-11-12T20:53:07.875022950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:07.875064 containerd[1465]: time="2024-11-12T20:53:07.875041984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:07.875329 containerd[1465]: time="2024-11-12T20:53:07.875298291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:07.875329 containerd[1465]: time="2024-11-12T20:53:07.875321811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:07.875379 containerd[1465]: time="2024-11-12T20:53:07.875338287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:07.875379 containerd[1465]: time="2024-11-12T20:53:07.875350699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:07.875502 containerd[1465]: time="2024-11-12T20:53:07.875456407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:07.875838 containerd[1465]: time="2024-11-12T20:53:07.875806771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:07.875980 containerd[1465]: time="2024-11-12T20:53:07.875953337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:07.875980 containerd[1465]: time="2024-11-12T20:53:07.875973355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 20:53:07.876109 containerd[1465]: time="2024-11-12T20:53:07.876084983Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 20:53:07.876176 containerd[1465]: time="2024-11-12T20:53:07.876153223Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 20:53:07.976342 tar[1462]: linux-amd64/LICENSE
Nov 12 20:53:07.976523 tar[1462]: linux-amd64/README.md
Nov 12 20:53:08.002122 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 12 20:53:08.186665 containerd[1465]: time="2024-11-12T20:53:08.186571762Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 20:53:08.186665 containerd[1465]: time="2024-11-12T20:53:08.186661166Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 20:53:08.186665 containerd[1465]: time="2024-11-12T20:53:08.186678173Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 20:53:08.186869 containerd[1465]: time="2024-11-12T20:53:08.186695409Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 20:53:08.186869 containerd[1465]: time="2024-11-12T20:53:08.186713087Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 20:53:08.187008 containerd[1465]: time="2024-11-12T20:53:08.186965025Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 20:53:08.187259 containerd[1465]: time="2024-11-12T20:53:08.187226656Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 20:53:08.187388 containerd[1465]: time="2024-11-12T20:53:08.187348677Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 20:53:08.187388 containerd[1465]: time="2024-11-12T20:53:08.187376538Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 20:53:08.187440 containerd[1465]: time="2024-11-12T20:53:08.187390936Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 20:53:08.187440 containerd[1465]: time="2024-11-12T20:53:08.187404571Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 20:53:08.187440 containerd[1465]: time="2024-11-12T20:53:08.187429553Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 20:53:08.187524 containerd[1465]: time="2024-11-12T20:53:08.187443299Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 20:53:08.187524 containerd[1465]: time="2024-11-12T20:53:08.187456311Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 20:53:08.187561 containerd[1465]: time="2024-11-12T20:53:08.187469695Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 20:53:08.187561 containerd[1465]: time="2024-11-12T20:53:08.187538933Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 20:53:08.187561 containerd[1465]: time="2024-11-12T20:53:08.187551775Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 20:53:08.187622 containerd[1465]: time="2024-11-12T20:53:08.187563794Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 20:53:08.187622 containerd[1465]: time="2024-11-12T20:53:08.187583690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187622 containerd[1465]: time="2024-11-12T20:53:08.187601398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187681 containerd[1465]: time="2024-11-12T20:53:08.187614642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187681 containerd[1465]: time="2024-11-12T20:53:08.187635259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187681 containerd[1465]: time="2024-11-12T20:53:08.187647209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187681 containerd[1465]: time="2024-11-12T20:53:08.187660392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187681 containerd[1465]: time="2024-11-12T20:53:08.187674067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187770 containerd[1465]: time="2024-11-12T20:53:08.187686649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187770 containerd[1465]: time="2024-11-12T20:53:08.187699531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187770 containerd[1465]: time="2024-11-12T20:53:08.187712434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187770 containerd[1465]: time="2024-11-12T20:53:08.187723690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187770 containerd[1465]: time="2024-11-12T20:53:08.187736051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187770 containerd[1465]: time="2024-11-12T20:53:08.187748613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187770 containerd[1465]: time="2024-11-12T20:53:08.187762669Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 20:53:08.187912 containerd[1465]: time="2024-11-12T20:53:08.187785664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187912 containerd[1465]: time="2024-11-12T20:53:08.187797985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.187912 containerd[1465]: time="2024-11-12T20:53:08.187808198Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 20:53:08.188817 containerd[1465]: time="2024-11-12T20:53:08.188757962Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 20:53:08.188856 containerd[1465]: time="2024-11-12T20:53:08.188831112Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 20:53:08.188856 containerd[1465]: time="2024-11-12T20:53:08.188848630Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 20:53:08.188896 containerd[1465]: time="2024-11-12T20:53:08.188865114Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 20:53:08.188896 containerd[1465]: time="2024-11-12T20:53:08.188877766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.188957 containerd[1465]: time="2024-11-12T20:53:08.188899206Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 12 20:53:08.188957 containerd[1465]: time="2024-11-12T20:53:08.188918851Z" level=info msg="NRI interface is disabled by configuration."
Nov 12 20:53:08.188957 containerd[1465]: time="2024-11-12T20:53:08.188940973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 12 20:53:08.189316 containerd[1465]: time="2024-11-12T20:53:08.189254163Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:53:08.189316 containerd[1465]: time="2024-11-12T20:53:08.189314392Z" level=info msg="Connect containerd service" Nov 12 20:53:08.189469 containerd[1465]: time="2024-11-12T20:53:08.189372041Z" level=info msg="using legacy CRI server" Nov 12 20:53:08.189469 containerd[1465]: time="2024-11-12T20:53:08.189381131Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:53:08.189522 containerd[1465]: time="2024-11-12T20:53:08.189506725Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:53:08.190307 containerd[1465]: time="2024-11-12T20:53:08.190270044Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:53:08.190507 containerd[1465]: time="2024-11-12T20:53:08.190451652Z" level=info msg="Start subscribing containerd event" Nov 12 20:53:08.190551 containerd[1465]: time="2024-11-12T20:53:08.190525936Z" level=info msg="Start recovering state" Nov 12 20:53:08.190617 containerd[1465]: time="2024-11-12T20:53:08.190601605Z" level=info msg="Start event monitor" Nov 12 20:53:08.190640 containerd[1465]: time="2024-11-12T20:53:08.190622454Z" level=info msg="Start 
snapshots syncer" Nov 12 20:53:08.190640 containerd[1465]: time="2024-11-12T20:53:08.190636370Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:53:08.190677 containerd[1465]: time="2024-11-12T20:53:08.190647436Z" level=info msg="Start streaming server" Nov 12 20:53:08.190746 containerd[1465]: time="2024-11-12T20:53:08.190724309Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:53:08.190810 containerd[1465]: time="2024-11-12T20:53:08.190794158Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:53:08.190906 containerd[1465]: time="2024-11-12T20:53:08.190880743Z" level=info msg="containerd successfully booted in 0.344696s" Nov 12 20:53:08.191016 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:53:08.589836 systemd-networkd[1405]: eth0: Gained IPv6LL Nov 12 20:53:08.593210 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 20:53:08.595300 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:53:08.606804 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 20:53:08.609878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:53:08.612563 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:53:08.639909 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:53:08.642258 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 20:53:08.642471 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 20:53:08.646083 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:53:09.282647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:53:09.284819 systemd[1]: Reached target multi-user.target - Multi-User System. 
Nov 12 20:53:09.286454 systemd[1]: Startup finished in 972ms (kernel) + 6.499s (initrd) + 5.092s (userspace) = 12.563s.
Nov 12 20:53:09.287721 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:53:09.849192 kubelet[1549]: E1112 20:53:09.849024 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:53:09.853268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:53:09.853540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:53:09.853867 systemd[1]: kubelet.service: Consumed 1.097s CPU time.
Nov 12 20:53:13.256992 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 12 20:53:13.258775 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:54444.service - OpenSSH per-connection server daemon (10.0.0.1:54444).
Nov 12 20:53:14.022345 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 54444 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:14.024837 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:14.034561 systemd-logind[1451]: New session 1 of user core.
Nov 12 20:53:14.036176 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 12 20:53:14.053713 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 12 20:53:14.067420 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 12 20:53:14.070427 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 12 20:53:14.090532 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 12 20:53:14.251651 systemd[1566]: Queued start job for default target default.target.
Nov 12 20:53:14.266133 systemd[1566]: Created slice app.slice - User Application Slice.
Nov 12 20:53:14.266166 systemd[1566]: Reached target paths.target - Paths.
Nov 12 20:53:14.266180 systemd[1566]: Reached target timers.target - Timers.
Nov 12 20:53:14.268009 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 12 20:53:14.280754 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 12 20:53:14.280919 systemd[1566]: Reached target sockets.target - Sockets.
Nov 12 20:53:14.280940 systemd[1566]: Reached target basic.target - Basic System.
Nov 12 20:53:14.280980 systemd[1566]: Reached target default.target - Main User Target.
Nov 12 20:53:14.281019 systemd[1566]: Startup finished in 182ms.
Nov 12 20:53:14.281661 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 12 20:53:14.283720 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 12 20:53:14.345587 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:54460.service - OpenSSH per-connection server daemon (10.0.0.1:54460).
Nov 12 20:53:14.386446 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 54460 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:14.387958 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:14.391985 systemd-logind[1451]: New session 2 of user core.
Nov 12 20:53:14.407659 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 12 20:53:14.462790 sshd[1577]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:14.470947 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:54460.service: Deactivated successfully.
Nov 12 20:53:14.473477 systemd[1]: session-2.scope: Deactivated successfully.
Nov 12 20:53:14.475338 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit.
Nov 12 20:53:14.488967 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:54472.service - OpenSSH per-connection server daemon (10.0.0.1:54472).
Nov 12 20:53:14.490214 systemd-logind[1451]: Removed session 2.
Nov 12 20:53:14.523923 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 54472 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:14.526068 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:14.530738 systemd-logind[1451]: New session 3 of user core.
Nov 12 20:53:14.540691 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 12 20:53:14.592781 sshd[1584]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:14.604294 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:54472.service: Deactivated successfully.
Nov 12 20:53:14.606143 systemd[1]: session-3.scope: Deactivated successfully.
Nov 12 20:53:14.607652 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit.
Nov 12 20:53:14.618073 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:54482.service - OpenSSH per-connection server daemon (10.0.0.1:54482).
Nov 12 20:53:14.619543 systemd-logind[1451]: Removed session 3.
Nov 12 20:53:14.655126 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 54482 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:14.657256 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:14.661868 systemd-logind[1451]: New session 4 of user core.
Nov 12 20:53:14.677638 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 12 20:53:14.734504 sshd[1591]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:14.745514 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:54482.service: Deactivated successfully.
Nov 12 20:53:14.747589 systemd[1]: session-4.scope: Deactivated successfully.
Nov 12 20:53:14.749415 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit.
Nov 12 20:53:14.756910 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:54498.service - OpenSSH per-connection server daemon (10.0.0.1:54498).
Nov 12 20:53:14.758168 systemd-logind[1451]: Removed session 4.
Nov 12 20:53:14.793411 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 54498 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:14.795344 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:14.799847 systemd-logind[1451]: New session 5 of user core.
Nov 12 20:53:14.814673 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 12 20:53:14.875588 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 12 20:53:14.876091 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:53:14.895071 sudo[1601]: pam_unix(sudo:session): session closed for user root
Nov 12 20:53:14.897575 sshd[1598]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:14.909450 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:54498.service: Deactivated successfully.
Nov 12 20:53:14.911332 systemd[1]: session-5.scope: Deactivated successfully.
Nov 12 20:53:14.913203 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit.
Nov 12 20:53:14.924956 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:54500.service - OpenSSH per-connection server daemon (10.0.0.1:54500).
Nov 12 20:53:14.926069 systemd-logind[1451]: Removed session 5.
Nov 12 20:53:14.963773 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 54500 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:14.965626 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:14.969848 systemd-logind[1451]: New session 6 of user core.
Nov 12 20:53:14.979631 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 12 20:53:15.035925 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 12 20:53:15.036319 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:53:15.041017 sudo[1610]: pam_unix(sudo:session): session closed for user root
Nov 12 20:53:15.048659 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 12 20:53:15.049074 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:53:15.065724 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 12 20:53:15.069237 auditctl[1613]: No rules
Nov 12 20:53:15.070700 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 12 20:53:15.071021 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 12 20:53:15.073075 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:53:15.112016 augenrules[1631]: No rules
Nov 12 20:53:15.114108 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:53:15.115440 sudo[1609]: pam_unix(sudo:session): session closed for user root
Nov 12 20:53:15.117672 sshd[1606]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:15.130853 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:54500.service: Deactivated successfully.
Nov 12 20:53:15.133167 systemd[1]: session-6.scope: Deactivated successfully.
Nov 12 20:53:15.134814 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit.
Nov 12 20:53:15.145119 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:54512.service - OpenSSH per-connection server daemon (10.0.0.1:54512).
Nov 12 20:53:15.146443 systemd-logind[1451]: Removed session 6.
Nov 12 20:53:15.180117 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 54512 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:53:15.182301 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:15.187580 systemd-logind[1451]: New session 7 of user core.
Nov 12 20:53:15.203835 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 12 20:53:15.259856 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 12 20:53:15.260312 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:53:15.877762 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 12 20:53:15.881867 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 12 20:53:16.652213 dockerd[1660]: time="2024-11-12T20:53:16.652099886Z" level=info msg="Starting up"
Nov 12 20:53:17.268317 dockerd[1660]: time="2024-11-12T20:53:17.268256268Z" level=info msg="Loading containers: start."
Nov 12 20:53:17.391512 kernel: Initializing XFRM netlink socket
Nov 12 20:53:17.482173 systemd-networkd[1405]: docker0: Link UP
Nov 12 20:53:17.511856 dockerd[1660]: time="2024-11-12T20:53:17.511750162Z" level=info msg="Loading containers: done."
Nov 12 20:53:17.588417 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2834849177-merged.mount: Deactivated successfully.
Nov 12 20:53:17.623927 dockerd[1660]: time="2024-11-12T20:53:17.623859334Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 12 20:53:17.624095 dockerd[1660]: time="2024-11-12T20:53:17.624019661Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 12 20:53:17.624233 dockerd[1660]: time="2024-11-12T20:53:17.624203230Z" level=info msg="Daemon has completed initialization"
Nov 12 20:53:17.932470 dockerd[1660]: time="2024-11-12T20:53:17.932270350Z" level=info msg="API listen on /run/docker.sock"
Nov 12 20:53:17.933850 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 12 20:53:18.431828 containerd[1465]: time="2024-11-12T20:53:18.431672703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\""
Nov 12 20:53:19.573717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681406785.mount: Deactivated successfully.
Nov 12 20:53:20.103864 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:53:20.116687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:20.271645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:20.278794 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:53:20.853514 kubelet[1839]: E1112 20:53:20.853424 1839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:53:20.860336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:53:20.860585 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:53:21.331068 containerd[1465]: time="2024-11-12T20:53:21.330912562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:21.334061 containerd[1465]: time="2024-11-12T20:53:21.334014979Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.2: active requests=0, bytes read=27975588"
Nov 12 20:53:21.335720 containerd[1465]: time="2024-11-12T20:53:21.335680502Z" level=info msg="ImageCreate event name:\"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:21.341190 containerd[1465]: time="2024-11-12T20:53:21.341157234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:21.342634 containerd[1465]: time="2024-11-12T20:53:21.342595131Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.2\" with image id \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\", size \"27972388\" in 2.910871686s"
Nov 12 20:53:21.342714 containerd[1465]: time="2024-11-12T20:53:21.342641537Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\" returns image reference \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\""
Nov 12 20:53:21.344019 containerd[1465]: time="2024-11-12T20:53:21.343992417Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\""
Nov 12 20:53:23.201650 containerd[1465]: time="2024-11-12T20:53:23.201572619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:23.391992 containerd[1465]: time="2024-11-12T20:53:23.391905366Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.2: active requests=0, bytes read=24701922"
Nov 12 20:53:23.395070 containerd[1465]: time="2024-11-12T20:53:23.394992397Z" level=info msg="ImageCreate event name:\"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:23.403151 containerd[1465]: time="2024-11-12T20:53:23.403094190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:23.404359 containerd[1465]: time="2024-11-12T20:53:23.404289165Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.2\" with image id \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\", size \"26147288\" in 2.060253421s"
Nov 12 20:53:23.404406 containerd[1465]: time="2024-11-12T20:53:23.404361549Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\" returns image reference \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\""
Nov 12 20:53:23.405018 containerd[1465]: time="2024-11-12T20:53:23.404985294Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\""
Nov 12 20:53:25.822459 containerd[1465]: time="2024-11-12T20:53:25.822374790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:25.823183 containerd[1465]: time="2024-11-12T20:53:25.823087585Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.2: active requests=0, bytes read=18657606"
Nov 12 20:53:25.824509 containerd[1465]: time="2024-11-12T20:53:25.824460359Z" level=info msg="ImageCreate event name:\"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:25.827439 containerd[1465]: time="2024-11-12T20:53:25.827368730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:25.828912 containerd[1465]: time="2024-11-12T20:53:25.828853256Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.2\" with image id \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\", size \"20102990\" in 2.423826964s"
Nov 12 20:53:25.828912 containerd[1465]: time="2024-11-12T20:53:25.828903524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\" returns image reference \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\""
Nov 12 20:53:25.829501 containerd[1465]: time="2024-11-12T20:53:25.829454619Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\""
Nov 12 20:53:27.419972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647974436.mount: Deactivated successfully.
Nov 12 20:53:28.926567 containerd[1465]: time="2024-11-12T20:53:28.926447826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:28.933705 containerd[1465]: time="2024-11-12T20:53:28.933666341Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.2: active requests=0, bytes read=30226814"
Nov 12 20:53:28.945182 containerd[1465]: time="2024-11-12T20:53:28.945126575Z" level=info msg="ImageCreate event name:\"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:28.957525 containerd[1465]: time="2024-11-12T20:53:28.957315409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:28.958147 containerd[1465]: time="2024-11-12T20:53:28.958077976Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.2\" with image id \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\", repo tag \"registry.k8s.io/kube-proxy:v1.31.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\", size \"30225833\" in 3.128549696s"
Nov 12 20:53:28.958147 containerd[1465]: time="2024-11-12T20:53:28.958125575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\" returns image reference \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\""
Nov 12 20:53:28.958882 containerd[1465]: time="2024-11-12T20:53:28.958815266Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Nov 12 20:53:29.690389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183307641.mount: Deactivated successfully.
Nov 12 20:53:30.773011 containerd[1465]: time="2024-11-12T20:53:30.772928754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:30.776553 containerd[1465]: time="2024-11-12T20:53:30.776494926Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Nov 12 20:53:30.778836 containerd[1465]: time="2024-11-12T20:53:30.778769542Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:30.782886 containerd[1465]: time="2024-11-12T20:53:30.782842267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:30.784201 containerd[1465]: time="2024-11-12T20:53:30.784167988Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.825318615s"
Nov 12 20:53:30.784238 containerd[1465]: time="2024-11-12T20:53:30.784201964Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Nov 12 20:53:30.784846 containerd[1465]: time="2024-11-12T20:53:30.784795260Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 12 20:53:31.111039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 12 20:53:31.121670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:31.307969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:31.433851 (kubelet)[1950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:53:31.613074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719085275.mount: Deactivated successfully.
Nov 12 20:53:31.618431 kubelet[1950]: E1112 20:53:31.618337 1950 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:53:31.623369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:53:31.623627 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:53:32.246145 containerd[1465]: time="2024-11-12T20:53:32.246052874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:32.247110 containerd[1465]: time="2024-11-12T20:53:32.247033359Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 12 20:53:32.248701 containerd[1465]: time="2024-11-12T20:53:32.248645679Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:32.251399 containerd[1465]: time="2024-11-12T20:53:32.251350518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:32.252086 containerd[1465]: time="2024-11-12T20:53:32.252044128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.46720884s"
Nov 12 20:53:32.252086 containerd[1465]: time="2024-11-12T20:53:32.252077010Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 12 20:53:32.252623 containerd[1465]: time="2024-11-12T20:53:32.252588012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Nov 12 20:53:37.069241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2400497003.mount: Deactivated successfully.
Nov 12 20:53:41.091744 containerd[1465]: time="2024-11-12T20:53:41.091662174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:41.140004 containerd[1465]: time="2024-11-12T20:53:41.139885223Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779650" Nov 12 20:53:41.240648 containerd[1465]: time="2024-11-12T20:53:41.240568121Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:41.275050 containerd[1465]: time="2024-11-12T20:53:41.274960367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:41.276886 containerd[1465]: time="2024-11-12T20:53:41.276749897Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 9.024118714s" Nov 12 20:53:41.276947 containerd[1465]: time="2024-11-12T20:53:41.276889842Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Nov 12 20:53:41.687633 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 12 20:53:41.705880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:53:41.866425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
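The containerd "Pulled image" line above reports both the image size and the elapsed wall time, so a back-of-the-envelope throughput check is straightforward. A minimal sketch (the helper name is ours, not containerd's), using the etcd pull figures from the log:

```python
def pull_rate_mib_s(size_bytes: int, seconds: float) -> float:
    # Average download rate in MiB/s for a completed image pull.
    return size_bytes / seconds / (1024 * 1024)

# etcd:3.5.15-0 above: 56909194 bytes in 9.024118714 s -> roughly 6 MiB/s
print(round(pull_rate_mib_s(56_909_194, 9.024118714), 1))
```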
Nov 12 20:53:41.871451 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:53:41.928426 kubelet[2045]: E1112 20:53:41.928329 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:53:41.932826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:53:41.933047 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:53:44.267302 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:53:44.277894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:53:44.308886 systemd[1]: Reloading requested from client PID 2061 ('systemctl') (unit session-7.scope)... Nov 12 20:53:44.308902 systemd[1]: Reloading... Nov 12 20:53:44.418540 zram_generator::config[2109]: No configuration found. Nov 12 20:53:45.058363 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:53:45.136016 systemd[1]: Reloading finished in 826 ms. Nov 12 20:53:45.188217 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:53:45.192160 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:53:45.192451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:53:45.206819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:53:45.360918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
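When the kubelet finally comes up for good below, it registers /etc/kubernetes/manifests as its static pod path; pods created from those manifests get the node name appended as a suffix, which is why the sandboxes later in this log are named kube-apiserver-localhost, kube-controller-manager-localhost, and kube-scheduler-localhost on the node "localhost". A trivial sketch of that naming convention (the helper is hypothetical):

```python
def static_pod_name(manifest_pod_name: str, node_name: str) -> str:
    # Static/mirror pods are surfaced as "<manifest pod name>-<node name>".
    return f"{manifest_pod_name}-{node_name}"

print(static_pod_name("kube-apiserver", "localhost"))  # kube-apiserver-localhost
```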
Nov 12 20:53:45.367335 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:53:45.407005 kubelet[2150]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:53:45.407005 kubelet[2150]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:53:45.407005 kubelet[2150]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:53:45.407522 kubelet[2150]: I1112 20:53:45.407040 2150 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:53:45.879749 kubelet[2150]: I1112 20:53:45.879534 2150 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:53:45.879749 kubelet[2150]: I1112 20:53:45.879597 2150 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:53:45.880667 kubelet[2150]: I1112 20:53:45.880612 2150 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:53:45.973043 kubelet[2150]: I1112 20:53:45.972975 2150 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:53:45.985443 kubelet[2150]: E1112 20:53:45.985398 2150 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:45.992324 kubelet[2150]: E1112 20:53:45.992298 2150 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:53:45.992324 kubelet[2150]: I1112 20:53:45.992323 2150 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 20:53:46.000334 kubelet[2150]: I1112 20:53:46.000297 2150 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 20:53:46.002092 kubelet[2150]: I1112 20:53:46.002062 2150 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:53:46.002242 kubelet[2150]: I1112 20:53:46.002207 2150 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:53:46.002380 kubelet[2150]: I1112 20:53:46.002234 2150 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 20:53:46.002470 kubelet[2150]: I1112 20:53:46.002388 2150 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:53:46.002470 kubelet[2150]: I1112 20:53:46.002397 2150 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:53:46.002547 kubelet[2150]: I1112 20:53:46.002533 2150 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:53:46.004035 kubelet[2150]: I1112 20:53:46.004009 2150 kubelet.go:408] "Attempting 
to sync node with API server" Nov 12 20:53:46.004035 kubelet[2150]: I1112 20:53:46.004031 2150 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:53:46.004095 kubelet[2150]: I1112 20:53:46.004071 2150 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:53:46.004095 kubelet[2150]: I1112 20:53:46.004085 2150 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:53:46.011554 kubelet[2150]: W1112 20:53:46.011362 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Nov 12 20:53:46.011554 kubelet[2150]: E1112 20:53:46.011418 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:46.011554 kubelet[2150]: W1112 20:53:46.011504 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Nov 12 20:53:46.011554 kubelet[2150]: E1112 20:53:46.011530 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:46.015254 kubelet[2150]: I1112 20:53:46.015224 2150 kuberuntime_manager.go:262] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:53:46.029458 kubelet[2150]: I1112 20:53:46.029430 2150 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:53:46.030225 kubelet[2150]: W1112 20:53:46.030197 2150 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:53:46.031147 kubelet[2150]: I1112 20:53:46.030917 2150 server.go:1269] "Started kubelet" Nov 12 20:53:46.031303 kubelet[2150]: I1112 20:53:46.031253 2150 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:53:46.031513 kubelet[2150]: I1112 20:53:46.031385 2150 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:53:46.031862 kubelet[2150]: I1112 20:53:46.031842 2150 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:53:46.032493 kubelet[2150]: I1112 20:53:46.032459 2150 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:53:46.032591 kubelet[2150]: I1112 20:53:46.032570 2150 server.go:460] "Adding debug handlers to kubelet server" Nov 12 20:53:46.032662 kubelet[2150]: I1112 20:53:46.032640 2150 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 20:53:46.033944 kubelet[2150]: I1112 20:53:46.033504 2150 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 12 20:53:46.033944 kubelet[2150]: I1112 20:53:46.033603 2150 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 20:53:46.033944 kubelet[2150]: I1112 20:53:46.033670 2150 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:53:46.034067 kubelet[2150]: W1112 20:53:46.034024 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: 
Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Nov 12 20:53:46.034129 kubelet[2150]: E1112 20:53:46.034078 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:46.034393 kubelet[2150]: E1112 20:53:46.034358 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:46.034451 kubelet[2150]: E1112 20:53:46.034436 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Nov 12 20:53:46.034592 kubelet[2150]: I1112 20:53:46.034568 2150 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:53:46.034687 kubelet[2150]: I1112 20:53:46.034669 2150 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:53:46.035052 kubelet[2150]: E1112 20:53:46.035026 2150 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:53:46.035560 kubelet[2150]: I1112 20:53:46.035537 2150 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:53:46.049739 kubelet[2150]: I1112 20:53:46.049682 2150 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 12 20:53:46.051019 kubelet[2150]: I1112 20:53:46.050984 2150 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:53:46.051019 kubelet[2150]: I1112 20:53:46.051009 2150 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:53:46.051113 kubelet[2150]: I1112 20:53:46.051032 2150 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 20:53:46.051113 kubelet[2150]: E1112 20:53:46.051082 2150 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:53:46.056970 kubelet[2150]: W1112 20:53:46.056884 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Nov 12 20:53:46.056970 kubelet[2150]: E1112 20:53:46.056944 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:46.063008 kubelet[2150]: I1112 20:53:46.062972 2150 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:53:46.063008 kubelet[2150]: I1112 20:53:46.062988 2150 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:53:46.063008 kubelet[2150]: I1112 20:53:46.063003 2150 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:53:46.135426 kubelet[2150]: E1112 20:53:46.135310 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:46.151160 kubelet[2150]: E1112 20:53:46.151130 2150 kubelet.go:2345] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" Nov 12 20:53:46.235598 kubelet[2150]: E1112 20:53:46.235527 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:46.235928 kubelet[2150]: E1112 20:53:46.235889 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Nov 12 20:53:46.336498 kubelet[2150]: E1112 20:53:46.336403 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:46.351619 kubelet[2150]: E1112 20:53:46.351585 2150 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:53:46.437143 kubelet[2150]: E1112 20:53:46.437046 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:46.538025 kubelet[2150]: E1112 20:53:46.537989 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:46.636882 kubelet[2150]: E1112 20:53:46.636823 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Nov 12 20:53:46.638947 kubelet[2150]: E1112 20:53:46.638915 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:46.739460 kubelet[2150]: E1112 20:53:46.739362 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:46.752566 kubelet[2150]: E1112 20:53:46.752527 2150 kubelet.go:2345] "Skipping 
pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:53:46.839937 kubelet[2150]: E1112 20:53:46.839897 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:46.852059 kubelet[2150]: E1112 20:53:46.849930 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.180753e54bc25e11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:53:46.030890513 +0000 UTC m=+0.658957164,LastTimestamp:2024-11-12 20:53:46.030890513 +0000 UTC m=+0.658957164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:53:46.865306 kubelet[2150]: I1112 20:53:46.865270 2150 policy_none.go:49] "None policy: Start" Nov 12 20:53:46.865960 kubelet[2150]: I1112 20:53:46.865927 2150 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:53:46.865960 kubelet[2150]: I1112 20:53:46.865952 2150 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:53:46.940585 kubelet[2150]: E1112 20:53:46.940524 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:46.949230 kubelet[2150]: W1112 20:53:46.949198 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Nov 12 20:53:46.949287 
kubelet[2150]: E1112 20:53:46.949243 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:46.999828 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:53:47.025380 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:53:47.028847 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 12 20:53:47.039551 kubelet[2150]: I1112 20:53:47.039506 2150 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:53:47.039796 kubelet[2150]: I1112 20:53:47.039778 2150 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 20:53:47.039872 kubelet[2150]: I1112 20:53:47.039797 2150 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:53:47.040310 kubelet[2150]: I1112 20:53:47.040032 2150 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:53:47.041212 kubelet[2150]: E1112 20:53:47.041178 2150 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 20:53:47.141324 kubelet[2150]: I1112 20:53:47.141277 2150 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:53:47.141693 kubelet[2150]: E1112 20:53:47.141660 2150 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Nov 12 20:53:47.148198 kubelet[2150]: W1112 20:53:47.148170 2150 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Nov 12 20:53:47.148263 kubelet[2150]: E1112 20:53:47.148214 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:47.343634 kubelet[2150]: I1112 20:53:47.343475 2150 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:53:47.344000 kubelet[2150]: E1112 20:53:47.343891 2150 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Nov 12 20:53:47.384394 kubelet[2150]: W1112 20:53:47.384360 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Nov 12 20:53:47.384539 kubelet[2150]: E1112 20:53:47.384401 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:47.419995 kubelet[2150]: W1112 20:53:47.419944 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": 
dial tcp 10.0.0.134:6443: connect: connection refused Nov 12 20:53:47.420042 kubelet[2150]: E1112 20:53:47.419997 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:47.437762 kubelet[2150]: E1112 20:53:47.437729 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="1.6s" Nov 12 20:53:47.561842 systemd[1]: Created slice kubepods-burstable-pod19669745251020b21e8802350b4127f3.slice - libcontainer container kubepods-burstable-pod19669745251020b21e8802350b4127f3.slice. Nov 12 20:53:47.579189 systemd[1]: Created slice kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice - libcontainer container kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice. Nov 12 20:53:47.583390 systemd[1]: Created slice kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice - libcontainer container kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice. 
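The "Failed to ensure lease exists, will retry" messages above double their retry interval on each consecutive failure: 200 ms, then 400 ms, 800 ms, and 1.6 s, since the API server at 10.0.0.134:6443 is not up yet. A sketch of that doubling pattern; the cap value here is our assumption, not something this log shows:

```python
def lease_backoff_ms(initial_ms: int = 200, cap_ms: int = 7000, steps: int = 4) -> list[int]:
    # Double the retry interval after every failed attempt, clamped at a cap.
    intervals, d = [], initial_ms
    for _ in range(steps):
        intervals.append(min(d, cap_ms))
        d *= 2
    return intervals

print(lease_backoff_ms())  # [200, 400, 800, 1600] -- matches the intervals logged above
```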
Nov 12 20:53:47.642442 kubelet[2150]: I1112 20:53:47.642255 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19669745251020b21e8802350b4127f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"19669745251020b21e8802350b4127f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:53:47.642442 kubelet[2150]: I1112 20:53:47.642306 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19669745251020b21e8802350b4127f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"19669745251020b21e8802350b4127f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:53:47.642442 kubelet[2150]: I1112 20:53:47.642330 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:47.642442 kubelet[2150]: I1112 20:53:47.642352 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:47.642442 kubelet[2150]: I1112 20:53:47.642381 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:47.642787 
kubelet[2150]: I1112 20:53:47.642401 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:47.642787 kubelet[2150]: I1112 20:53:47.642422 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19669745251020b21e8802350b4127f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"19669745251020b21e8802350b4127f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:53:47.642787 kubelet[2150]: I1112 20:53:47.642440 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:47.642787 kubelet[2150]: I1112 20:53:47.642458 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:53:47.746177 kubelet[2150]: I1112 20:53:47.746107 2150 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:53:47.746612 kubelet[2150]: E1112 20:53:47.746561 2150 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" 
node="localhost" Nov 12 20:53:47.877497 kubelet[2150]: E1112 20:53:47.877419 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:47.878280 containerd[1465]: time="2024-11-12T20:53:47.878224722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:19669745251020b21e8802350b4127f3,Namespace:kube-system,Attempt:0,}" Nov 12 20:53:47.881814 kubelet[2150]: E1112 20:53:47.881644 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:47.882120 containerd[1465]: time="2024-11-12T20:53:47.882081180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,}" Nov 12 20:53:47.885532 kubelet[2150]: E1112 20:53:47.885492 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:47.885856 containerd[1465]: time="2024-11-12T20:53:47.885818370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,}" Nov 12 20:53:47.998017 kubelet[2150]: E1112 20:53:47.997850 2150 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:48.438221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221790993.mount: Deactivated 
successfully. Nov 12 20:53:48.448985 containerd[1465]: time="2024-11-12T20:53:48.448910442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:53:48.452351 containerd[1465]: time="2024-11-12T20:53:48.452265779Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:53:48.453141 containerd[1465]: time="2024-11-12T20:53:48.453096296Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:53:48.454180 containerd[1465]: time="2024-11-12T20:53:48.454077322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:53:48.455058 containerd[1465]: time="2024-11-12T20:53:48.455018028Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:53:48.456003 containerd[1465]: time="2024-11-12T20:53:48.455963003Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:53:48.457099 containerd[1465]: time="2024-11-12T20:53:48.457068616Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:53:48.461884 containerd[1465]: time="2024-11-12T20:53:48.461833458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 
20:53:48.463008 containerd[1465]: time="2024-11-12T20:53:48.462973079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.644871ms" Nov 12 20:53:48.466467 containerd[1465]: time="2024-11-12T20:53:48.466285029Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 580.397752ms" Nov 12 20:53:48.467147 containerd[1465]: time="2024-11-12T20:53:48.467110356Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.958154ms" Nov 12 20:53:48.548650 kubelet[2150]: I1112 20:53:48.548597 2150 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:53:48.549225 kubelet[2150]: E1112 20:53:48.549008 2150 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Nov 12 20:53:48.597337 containerd[1465]: time="2024-11-12T20:53:48.597193788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:53:48.597337 containerd[1465]: time="2024-11-12T20:53:48.597328536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:53:48.597337 containerd[1465]: time="2024-11-12T20:53:48.597358265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:48.598649 containerd[1465]: time="2024-11-12T20:53:48.598467435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:48.602515 containerd[1465]: time="2024-11-12T20:53:48.602407040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:53:48.602573 containerd[1465]: time="2024-11-12T20:53:48.602537890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:53:48.602611 containerd[1465]: time="2024-11-12T20:53:48.602565565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:48.602732 containerd[1465]: time="2024-11-12T20:53:48.602602578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:53:48.603148 containerd[1465]: time="2024-11-12T20:53:48.602799779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:48.603924 containerd[1465]: time="2024-11-12T20:53:48.602834639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:53:48.603924 containerd[1465]: time="2024-11-12T20:53:48.603519627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:48.603924 containerd[1465]: time="2024-11-12T20:53:48.603608384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:48.624647 systemd[1]: Started cri-containerd-bf6137ea740f290e1b6dbaf724bd9fa5984cb2464443f0753964c118c37785b9.scope - libcontainer container bf6137ea740f290e1b6dbaf724bd9fa5984cb2464443f0753964c118c37785b9. Nov 12 20:53:48.630183 systemd[1]: Started cri-containerd-39f3cf6880310b42a58ce8f05ce8a73f5f01bf19f5cd26dddcfd744bc8016bf4.scope - libcontainer container 39f3cf6880310b42a58ce8f05ce8a73f5f01bf19f5cd26dddcfd744bc8016bf4. Nov 12 20:53:48.632764 systemd[1]: Started cri-containerd-e565415e0c6e66e8754538c0d283946136e4a783bbf50de32941a256e55ec0b1.scope - libcontainer container e565415e0c6e66e8754538c0d283946136e4a783bbf50de32941a256e55ec0b1. Nov 12 20:53:48.671614 containerd[1465]: time="2024-11-12T20:53:48.671452238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:19669745251020b21e8802350b4127f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf6137ea740f290e1b6dbaf724bd9fa5984cb2464443f0753964c118c37785b9\"" Nov 12 20:53:48.672573 kubelet[2150]: E1112 20:53:48.672417 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:48.676306 containerd[1465]: time="2024-11-12T20:53:48.676195188Z" level=info msg="CreateContainer within sandbox \"bf6137ea740f290e1b6dbaf724bd9fa5984cb2464443f0753964c118c37785b9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:53:48.683192 containerd[1465]: time="2024-11-12T20:53:48.683140155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"e565415e0c6e66e8754538c0d283946136e4a783bbf50de32941a256e55ec0b1\"" Nov 12 20:53:48.683319 containerd[1465]: time="2024-11-12T20:53:48.683260103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,} returns sandbox id \"39f3cf6880310b42a58ce8f05ce8a73f5f01bf19f5cd26dddcfd744bc8016bf4\"" Nov 12 20:53:48.684015 kubelet[2150]: E1112 20:53:48.683989 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:48.684263 kubelet[2150]: E1112 20:53:48.684173 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:48.685830 containerd[1465]: time="2024-11-12T20:53:48.685792186Z" level=info msg="CreateContainer within sandbox \"39f3cf6880310b42a58ce8f05ce8a73f5f01bf19f5cd26dddcfd744bc8016bf4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:53:48.686007 containerd[1465]: time="2024-11-12T20:53:48.685978575Z" level=info msg="CreateContainer within sandbox \"e565415e0c6e66e8754538c0d283946136e4a783bbf50de32941a256e55ec0b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:53:48.842623 kubelet[2150]: W1112 20:53:48.842406 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Nov 12 20:53:48.842623 kubelet[2150]: E1112 20:53:48.842536 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:53:49.039244 kubelet[2150]: E1112 20:53:49.039171 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="3.2s" Nov 12 20:53:49.116823 containerd[1465]: time="2024-11-12T20:53:49.116648196Z" level=info msg="CreateContainer within sandbox \"bf6137ea740f290e1b6dbaf724bd9fa5984cb2464443f0753964c118c37785b9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"61650ce4b26ba3135ec2fc5b3574add31f25f29e0ad2da32ba3b57523d423f68\"" Nov 12 20:53:49.117657 containerd[1465]: time="2024-11-12T20:53:49.117602943Z" level=info msg="StartContainer for \"61650ce4b26ba3135ec2fc5b3574add31f25f29e0ad2da32ba3b57523d423f68\"" Nov 12 20:53:49.127171 containerd[1465]: time="2024-11-12T20:53:49.127110049Z" level=info msg="CreateContainer within sandbox \"39f3cf6880310b42a58ce8f05ce8a73f5f01bf19f5cd26dddcfd744bc8016bf4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f08cbbee3e57213c5282753556571d87b7c1d2421f25c6ead312e3d7e5c6dd0a\"" Nov 12 20:53:49.127749 containerd[1465]: time="2024-11-12T20:53:49.127696168Z" level=info msg="StartContainer for \"f08cbbee3e57213c5282753556571d87b7c1d2421f25c6ead312e3d7e5c6dd0a\"" Nov 12 20:53:49.128048 containerd[1465]: time="2024-11-12T20:53:49.127956823Z" level=info msg="CreateContainer within sandbox \"e565415e0c6e66e8754538c0d283946136e4a783bbf50de32941a256e55ec0b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6078185f78c47fc658a2c0a1a87d921b9bac4598a6aeb1da689a491f0eabc6a9\"" Nov 12 20:53:49.128742 containerd[1465]: time="2024-11-12T20:53:49.128315182Z" level=info 
msg="StartContainer for \"6078185f78c47fc658a2c0a1a87d921b9bac4598a6aeb1da689a491f0eabc6a9\"" Nov 12 20:53:49.152063 systemd[1]: Started cri-containerd-61650ce4b26ba3135ec2fc5b3574add31f25f29e0ad2da32ba3b57523d423f68.scope - libcontainer container 61650ce4b26ba3135ec2fc5b3574add31f25f29e0ad2da32ba3b57523d423f68. Nov 12 20:53:49.170858 systemd[1]: Started cri-containerd-f08cbbee3e57213c5282753556571d87b7c1d2421f25c6ead312e3d7e5c6dd0a.scope - libcontainer container f08cbbee3e57213c5282753556571d87b7c1d2421f25c6ead312e3d7e5c6dd0a. Nov 12 20:53:49.180723 systemd[1]: Started cri-containerd-6078185f78c47fc658a2c0a1a87d921b9bac4598a6aeb1da689a491f0eabc6a9.scope - libcontainer container 6078185f78c47fc658a2c0a1a87d921b9bac4598a6aeb1da689a491f0eabc6a9. Nov 12 20:53:49.439065 containerd[1465]: time="2024-11-12T20:53:49.438893296Z" level=info msg="StartContainer for \"61650ce4b26ba3135ec2fc5b3574add31f25f29e0ad2da32ba3b57523d423f68\" returns successfully" Nov 12 20:53:49.439200 containerd[1465]: time="2024-11-12T20:53:49.439150575Z" level=info msg="StartContainer for \"f08cbbee3e57213c5282753556571d87b7c1d2421f25c6ead312e3d7e5c6dd0a\" returns successfully" Nov 12 20:53:49.439200 containerd[1465]: time="2024-11-12T20:53:49.439192096Z" level=info msg="StartContainer for \"6078185f78c47fc658a2c0a1a87d921b9bac4598a6aeb1da689a491f0eabc6a9\" returns successfully" Nov 12 20:53:50.070943 kubelet[2150]: E1112 20:53:50.070905 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:50.073134 kubelet[2150]: E1112 20:53:50.073115 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:50.074340 kubelet[2150]: E1112 20:53:50.074313 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:50.150902 kubelet[2150]: I1112 20:53:50.150874 2150 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:53:50.672933 kubelet[2150]: I1112 20:53:50.672213 2150 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Nov 12 20:53:50.672933 kubelet[2150]: E1112 20:53:50.672252 2150 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 12 20:53:50.686834 kubelet[2150]: E1112 20:53:50.686775 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:50.787695 kubelet[2150]: E1112 20:53:50.787607 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:50.888777 kubelet[2150]: E1112 20:53:50.888705 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:50.989864 kubelet[2150]: E1112 20:53:50.989647 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:51.077357 kubelet[2150]: E1112 20:53:51.077302 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:51.077357 kubelet[2150]: E1112 20:53:51.077313 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:51.077873 kubelet[2150]: E1112 20:53:51.077577 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:51.090765 
kubelet[2150]: E1112 20:53:51.090679 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:51.191542 kubelet[2150]: E1112 20:53:51.191407 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:51.292353 kubelet[2150]: E1112 20:53:51.292174 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:51.392801 kubelet[2150]: E1112 20:53:51.392749 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:51.493466 kubelet[2150]: E1112 20:53:51.493413 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:51.594260 kubelet[2150]: E1112 20:53:51.594093 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:51.694318 kubelet[2150]: E1112 20:53:51.694231 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:51.795075 kubelet[2150]: E1112 20:53:51.795014 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:51.895287 kubelet[2150]: E1112 20:53:51.895119 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:51.995574 kubelet[2150]: E1112 20:53:51.995463 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:52.078054 kubelet[2150]: E1112 20:53:52.078014 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:52.095719 kubelet[2150]: E1112 
20:53:52.095662 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:52.196466 kubelet[2150]: E1112 20:53:52.196313 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:52.296977 kubelet[2150]: E1112 20:53:52.296903 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:52.397917 kubelet[2150]: E1112 20:53:52.397815 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:52.499034 kubelet[2150]: E1112 20:53:52.498958 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:52.570771 systemd[1]: Reloading requested from client PID 2428 ('systemctl') (unit session-7.scope)... Nov 12 20:53:52.570788 systemd[1]: Reloading... Nov 12 20:53:52.686048 zram_generator::config[2467]: No configuration found. Nov 12 20:53:52.726712 update_engine[1453]: I20241112 20:53:52.726581 1453 update_attempter.cc:509] Updating boot flags... Nov 12 20:53:52.768535 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2505) Nov 12 20:53:52.878528 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2504) Nov 12 20:53:52.886330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:53:52.892626 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2504) Nov 12 20:53:53.008238 systemd[1]: Reloading finished in 436 ms. 
Nov 12 20:53:53.014882 kubelet[2150]: I1112 20:53:53.014827 2150 apiserver.go:52] "Watching apiserver" Nov 12 20:53:53.034228 kubelet[2150]: I1112 20:53:53.033989 2150 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 20:53:53.111647 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:53:53.129185 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:53:53.129557 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:53:53.129615 systemd[1]: kubelet.service: Consumed 1.074s CPU time, 117.5M memory peak, 0B memory swap peak. Nov 12 20:53:53.138732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:53:53.320271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:53:53.330889 (kubelet)[2527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:53:53.391209 kubelet[2527]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:53:53.391209 kubelet[2527]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:53:53.391209 kubelet[2527]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:53:53.391678 kubelet[2527]: I1112 20:53:53.391608 2527 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:53:53.398412 kubelet[2527]: I1112 20:53:53.398348 2527 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:53:53.398412 kubelet[2527]: I1112 20:53:53.398399 2527 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:53:53.398754 kubelet[2527]: I1112 20:53:53.398727 2527 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:53:53.400235 kubelet[2527]: I1112 20:53:53.400210 2527 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:53:53.402524 kubelet[2527]: I1112 20:53:53.402496 2527 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:53:53.405504 kubelet[2527]: E1112 20:53:53.405453 2527 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:53:53.405504 kubelet[2527]: I1112 20:53:53.405498 2527 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 20:53:53.411332 kubelet[2527]: I1112 20:53:53.411300 2527 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:53:53.411457 kubelet[2527]: I1112 20:53:53.411427 2527 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:53:53.411685 kubelet[2527]: I1112 20:53:53.411638 2527 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:53:53.411875 kubelet[2527]: I1112 20:53:53.411677 2527 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Nov 12 20:53:53.411952 kubelet[2527]: I1112 20:53:53.411881 2527 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:53:53.411952 kubelet[2527]: I1112 20:53:53.411892 2527 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:53:53.411952 kubelet[2527]: I1112 20:53:53.411930 2527 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:53:53.412103 kubelet[2527]: I1112 20:53:53.412083 2527 kubelet.go:408] "Attempting to sync node with API server" Nov 12 20:53:53.412103 kubelet[2527]: I1112 20:53:53.412103 2527 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:53:53.412206 kubelet[2527]: I1112 20:53:53.412146 2527 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:53:53.412206 kubelet[2527]: I1112 20:53:53.412181 2527 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:53:53.412995 kubelet[2527]: I1112 20:53:53.412939 2527 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:53:53.414922 kubelet[2527]: I1112 20:53:53.413320 2527 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:53:53.414922 kubelet[2527]: I1112 20:53:53.413818 2527 server.go:1269] "Started kubelet" Nov 12 20:53:53.414922 kubelet[2527]: I1112 20:53:53.414281 2527 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:53:53.414922 kubelet[2527]: I1112 20:53:53.414579 2527 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:53:53.414922 kubelet[2527]: I1112 20:53:53.414630 2527 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:53:53.418254 kubelet[2527]: I1112 20:53:53.415812 2527 server.go:460] "Adding debug handlers to kubelet server" Nov 12 20:53:53.418254 
kubelet[2527]: I1112 20:53:53.415887 2527 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:53:53.418254 kubelet[2527]: I1112 20:53:53.417278 2527 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 20:53:53.418254 kubelet[2527]: E1112 20:53:53.418135 2527 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:53:53.418254 kubelet[2527]: I1112 20:53:53.418163 2527 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 12 20:53:53.418254 kubelet[2527]: I1112 20:53:53.418235 2527 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 20:53:53.418423 kubelet[2527]: I1112 20:53:53.418357 2527 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:53:53.422517 kubelet[2527]: I1112 20:53:53.422469 2527 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:53:53.423108 kubelet[2527]: I1112 20:53:53.423073 2527 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:53:53.424570 kubelet[2527]: E1112 20:53:53.424217 2527 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:53:53.425428 kubelet[2527]: I1112 20:53:53.425274 2527 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:53:53.434619 kubelet[2527]: I1112 20:53:53.434574 2527 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:53:53.436636 kubelet[2527]: I1112 20:53:53.436576 2527 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:53:53.436636 kubelet[2527]: I1112 20:53:53.436613 2527 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:53:53.436636 kubelet[2527]: I1112 20:53:53.436633 2527 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 20:53:53.436737 kubelet[2527]: E1112 20:53:53.436685 2527 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:53:53.461215 kubelet[2527]: I1112 20:53:53.461167 2527 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:53:53.461215 kubelet[2527]: I1112 20:53:53.461186 2527 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:53:53.461215 kubelet[2527]: I1112 20:53:53.461204 2527 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:53:53.461421 kubelet[2527]: I1112 20:53:53.461342 2527 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:53:53.461421 kubelet[2527]: I1112 20:53:53.461353 2527 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:53:53.461421 kubelet[2527]: I1112 20:53:53.461370 2527 policy_none.go:49] "None policy: Start" Nov 12 20:53:53.462025 kubelet[2527]: I1112 20:53:53.462000 2527 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:53:53.462025 kubelet[2527]: I1112 20:53:53.462019 2527 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:53:53.462218 kubelet[2527]: I1112 20:53:53.462142 2527 state_mem.go:75] "Updated machine memory state" Nov 12 20:53:53.466217 kubelet[2527]: I1112 20:53:53.466173 2527 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:53:53.466445 kubelet[2527]: I1112 20:53:53.466379 2527 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 20:53:53.466445 kubelet[2527]: I1112 20:53:53.466395 2527 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:53:53.466570 kubelet[2527]: I1112 20:53:53.466557 2527 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:53:53.574222 kubelet[2527]: I1112 20:53:53.574183 2527 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Nov 12 20:53:53.719973 kubelet[2527]: I1112 20:53:53.719806 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19669745251020b21e8802350b4127f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"19669745251020b21e8802350b4127f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:53:53.720252 kubelet[2527]: I1112 20:53:53.719856 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19669745251020b21e8802350b4127f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"19669745251020b21e8802350b4127f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:53:53.720409 kubelet[2527]: I1112 20:53:53.720301 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:53.720409 kubelet[2527]: I1112 20:53:53.720341 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:53.720409 kubelet[2527]: I1112 20:53:53.720363 2527 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19669745251020b21e8802350b4127f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"19669745251020b21e8802350b4127f3\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:53:53.720409 kubelet[2527]: I1112 20:53:53.720385 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:53.720409 kubelet[2527]: I1112 20:53:53.720405 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:53.720628 kubelet[2527]: I1112 20:53:53.720424 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:53:53.720628 kubelet[2527]: I1112 20:53:53.720443 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:53:53.720628 kubelet[2527]: I1112 20:53:53.719875 2527 
kubelet_node_status.go:111] "Node was previously registered" node="localhost" Nov 12 20:53:53.720628 kubelet[2527]: I1112 20:53:53.720578 2527 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Nov 12 20:53:53.773507 sudo[2563]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 12 20:53:53.773986 sudo[2563]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 12 20:53:54.013597 kubelet[2527]: E1112 20:53:54.013412 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:54.020972 kubelet[2527]: E1112 20:53:54.020845 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:54.021170 kubelet[2527]: E1112 20:53:54.021105 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:54.413549 kubelet[2527]: I1112 20:53:54.413290 2527 apiserver.go:52] "Watching apiserver" Nov 12 20:53:54.418659 kubelet[2527]: I1112 20:53:54.418587 2527 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 20:53:54.445858 kubelet[2527]: E1112 20:53:54.445759 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:54.481880 sudo[2563]: pam_unix(sudo:session): session closed for user root Nov 12 20:53:54.819108 kubelet[2527]: E1112 20:53:54.819001 2527 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 12 
20:53:54.819108 kubelet[2527]: E1112 20:53:54.819075 2527 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 20:53:54.820348 kubelet[2527]: E1112 20:53:54.819241 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:54.820348 kubelet[2527]: E1112 20:53:54.819289 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:55.040315 kubelet[2527]: I1112 20:53:55.040175 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.040154405 podStartE2EDuration="2.040154405s" podCreationTimestamp="2024-11-12 20:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:53:54.818821707 +0000 UTC m=+1.482784430" watchObservedRunningTime="2024-11-12 20:53:55.040154405 +0000 UTC m=+1.704117128" Nov 12 20:53:55.051608 kubelet[2527]: I1112 20:53:55.051512 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.051452618 podStartE2EDuration="2.051452618s" podCreationTimestamp="2024-11-12 20:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:53:55.040508013 +0000 UTC m=+1.704470756" watchObservedRunningTime="2024-11-12 20:53:55.051452618 +0000 UTC m=+1.715415341" Nov 12 20:53:55.051912 kubelet[2527]: I1112 20:53:55.051690 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=2.051674741 podStartE2EDuration="2.051674741s" podCreationTimestamp="2024-11-12 20:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:53:55.050132811 +0000 UTC m=+1.714095534" watchObservedRunningTime="2024-11-12 20:53:55.051674741 +0000 UTC m=+1.715637464" Nov 12 20:53:55.446984 kubelet[2527]: E1112 20:53:55.446945 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:55.447537 kubelet[2527]: E1112 20:53:55.447094 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:55.447537 kubelet[2527]: E1112 20:53:55.447111 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:56.374990 sudo[1642]: pam_unix(sudo:session): session closed for user root Nov 12 20:53:56.377601 sshd[1639]: pam_unix(sshd:session): session closed for user core Nov 12 20:53:56.382734 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:54512.service: Deactivated successfully. Nov 12 20:53:56.385035 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:53:56.385258 systemd[1]: session-7.scope: Consumed 6.064s CPU time, 155.5M memory peak, 0B memory swap peak. Nov 12 20:53:56.385798 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:53:56.387170 systemd-logind[1451]: Removed session 7. 
Nov 12 20:53:57.684016 kubelet[2527]: I1112 20:53:57.683974 2527 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:53:57.684464 kubelet[2527]: I1112 20:53:57.684441 2527 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:53:57.684515 containerd[1465]: time="2024-11-12T20:53:57.684264377Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:53:58.605314 systemd[1]: Created slice kubepods-besteffort-pode8be1b98_ada4_4949_ab2e_878748f44718.slice - libcontainer container kubepods-besteffort-pode8be1b98_ada4_4949_ab2e_878748f44718.slice. Nov 12 20:53:58.622805 systemd[1]: Created slice kubepods-burstable-pod93bd25fa_a172_4014_badf_9010674439c3.slice - libcontainer container kubepods-burstable-pod93bd25fa_a172_4014_badf_9010674439c3.slice. Nov 12 20:53:58.648371 kubelet[2527]: I1112 20:53:58.648286 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-lib-modules\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.648371 kubelet[2527]: I1112 20:53:58.648348 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8be1b98-ada4-4949-ab2e-878748f44718-kube-proxy\") pod \"kube-proxy-rpnfz\" (UID: \"e8be1b98-ada4-4949-ab2e-878748f44718\") " pod="kube-system/kube-proxy-rpnfz" Nov 12 20:53:58.648371 kubelet[2527]: I1112 20:53:58.648373 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cilium-cgroup\") pod \"cilium-gpjr8\" (UID: 
\"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.648664 kubelet[2527]: I1112 20:53:58.648398 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93bd25fa-a172-4014-badf-9010674439c3-clustermesh-secrets\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.648664 kubelet[2527]: I1112 20:53:58.648421 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-host-proc-sys-net\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.648664 kubelet[2527]: I1112 20:53:58.648461 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-host-proc-sys-kernel\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.648664 kubelet[2527]: I1112 20:53:58.648568 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8be1b98-ada4-4949-ab2e-878748f44718-lib-modules\") pod \"kube-proxy-rpnfz\" (UID: \"e8be1b98-ada4-4949-ab2e-878748f44718\") " pod="kube-system/kube-proxy-rpnfz" Nov 12 20:53:58.648664 kubelet[2527]: I1112 20:53:58.648617 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-bpf-maps\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.648664 
kubelet[2527]: I1112 20:53:58.648636 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93bd25fa-a172-4014-badf-9010674439c3-hubble-tls\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.648859 kubelet[2527]: I1112 20:53:58.648652 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8be1b98-ada4-4949-ab2e-878748f44718-xtables-lock\") pod \"kube-proxy-rpnfz\" (UID: \"e8be1b98-ada4-4949-ab2e-878748f44718\") " pod="kube-system/kube-proxy-rpnfz" Nov 12 20:53:58.648859 kubelet[2527]: I1112 20:53:58.648671 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93bd25fa-a172-4014-badf-9010674439c3-cilium-config-path\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.648859 kubelet[2527]: I1112 20:53:58.648685 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cni-path\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.648859 kubelet[2527]: I1112 20:53:58.648714 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-xtables-lock\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.648859 kubelet[2527]: I1112 20:53:58.648735 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cilium-run\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.648859 kubelet[2527]: I1112 20:53:58.648753 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xt2m\" (UniqueName: \"kubernetes.io/projected/93bd25fa-a172-4014-badf-9010674439c3-kube-api-access-9xt2m\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.649044 kubelet[2527]: I1112 20:53:58.648775 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmrh6\" (UniqueName: \"kubernetes.io/projected/e8be1b98-ada4-4949-ab2e-878748f44718-kube-api-access-rmrh6\") pod \"kube-proxy-rpnfz\" (UID: \"e8be1b98-ada4-4949-ab2e-878748f44718\") " pod="kube-system/kube-proxy-rpnfz" Nov 12 20:53:58.649044 kubelet[2527]: I1112 20:53:58.648795 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-hostproc\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.649044 kubelet[2527]: I1112 20:53:58.648812 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-etc-cni-netd\") pod \"cilium-gpjr8\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") " pod="kube-system/cilium-gpjr8" Nov 12 20:53:58.708939 systemd[1]: Created slice kubepods-besteffort-pod7bfa9b3c_019f_40bc_88f8_32f7332c08fe.slice - libcontainer container kubepods-besteffort-pod7bfa9b3c_019f_40bc_88f8_32f7332c08fe.slice. 
Nov 12 20:53:58.749459 kubelet[2527]: I1112 20:53:58.749397 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7v56\" (UniqueName: \"kubernetes.io/projected/7bfa9b3c-019f-40bc-88f8-32f7332c08fe-kube-api-access-q7v56\") pod \"cilium-operator-5d85765b45-rkdc4\" (UID: \"7bfa9b3c-019f-40bc-88f8-32f7332c08fe\") " pod="kube-system/cilium-operator-5d85765b45-rkdc4" Nov 12 20:53:58.749949 kubelet[2527]: I1112 20:53:58.749522 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bfa9b3c-019f-40bc-88f8-32f7332c08fe-cilium-config-path\") pod \"cilium-operator-5d85765b45-rkdc4\" (UID: \"7bfa9b3c-019f-40bc-88f8-32f7332c08fe\") " pod="kube-system/cilium-operator-5d85765b45-rkdc4" Nov 12 20:53:58.919995 kubelet[2527]: E1112 20:53:58.919836 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:58.921547 containerd[1465]: time="2024-11-12T20:53:58.921500857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpnfz,Uid:e8be1b98-ada4-4949-ab2e-878748f44718,Namespace:kube-system,Attempt:0,}" Nov 12 20:53:58.927751 kubelet[2527]: E1112 20:53:58.927709 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:58.928797 containerd[1465]: time="2024-11-12T20:53:58.928760389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gpjr8,Uid:93bd25fa-a172-4014-badf-9010674439c3,Namespace:kube-system,Attempt:0,}" Nov 12 20:53:59.012284 kubelet[2527]: E1112 20:53:59.012225 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:59.012893 containerd[1465]: time="2024-11-12T20:53:59.012831515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rkdc4,Uid:7bfa9b3c-019f-40bc-88f8-32f7332c08fe,Namespace:kube-system,Attempt:0,}" Nov 12 20:53:59.217928 kubelet[2527]: E1112 20:53:59.217885 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:59.455856 kubelet[2527]: E1112 20:53:59.455818 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:59.606560 containerd[1465]: time="2024-11-12T20:53:59.606273068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:53:59.606560 containerd[1465]: time="2024-11-12T20:53:59.606360848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:53:59.606560 containerd[1465]: time="2024-11-12T20:53:59.606378923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:59.606847 containerd[1465]: time="2024-11-12T20:53:59.606522710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:59.619665 containerd[1465]: time="2024-11-12T20:53:59.619433901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:53:59.620381 containerd[1465]: time="2024-11-12T20:53:59.619671088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:53:59.620381 containerd[1465]: time="2024-11-12T20:53:59.619689855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:59.620381 containerd[1465]: time="2024-11-12T20:53:59.620126577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:59.635435 containerd[1465]: time="2024-11-12T20:53:59.635050922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:53:59.635435 containerd[1465]: time="2024-11-12T20:53:59.635174661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:53:59.635435 containerd[1465]: time="2024-11-12T20:53:59.635194870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:59.635718 systemd[1]: Started cri-containerd-70bedf9488f0447e6c32ff2d53d7828dd9377833e5b44319d36a8666efd11056.scope - libcontainer container 70bedf9488f0447e6c32ff2d53d7828dd9377833e5b44319d36a8666efd11056. Nov 12 20:53:59.636885 containerd[1465]: time="2024-11-12T20:53:59.636215088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:59.645398 systemd[1]: Started cri-containerd-e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747.scope - libcontainer container e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747. Nov 12 20:53:59.667740 systemd[1]: Started cri-containerd-bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171.scope - libcontainer container bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171. 
Nov 12 20:53:59.671428 containerd[1465]: time="2024-11-12T20:53:59.671370696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpnfz,Uid:e8be1b98-ada4-4949-ab2e-878748f44718,Namespace:kube-system,Attempt:0,} returns sandbox id \"70bedf9488f0447e6c32ff2d53d7828dd9377833e5b44319d36a8666efd11056\"" Nov 12 20:53:59.672275 kubelet[2527]: E1112 20:53:59.672204 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:59.676630 containerd[1465]: time="2024-11-12T20:53:59.676550410Z" level=info msg="CreateContainer within sandbox \"70bedf9488f0447e6c32ff2d53d7828dd9377833e5b44319d36a8666efd11056\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:53:59.698788 containerd[1465]: time="2024-11-12T20:53:59.698727148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gpjr8,Uid:93bd25fa-a172-4014-badf-9010674439c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\"" Nov 12 20:53:59.699365 kubelet[2527]: E1112 20:53:59.699334 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:59.704207 containerd[1465]: time="2024-11-12T20:53:59.704169679Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 20:53:59.708471 containerd[1465]: time="2024-11-12T20:53:59.708422263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rkdc4,Uid:7bfa9b3c-019f-40bc-88f8-32f7332c08fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\"" Nov 12 20:53:59.709100 kubelet[2527]: E1112 20:53:59.709061 2527 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:53:59.746814 containerd[1465]: time="2024-11-12T20:53:59.746755433Z" level=info msg="CreateContainer within sandbox \"70bedf9488f0447e6c32ff2d53d7828dd9377833e5b44319d36a8666efd11056\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"39c1acd00a77bc84f0a485da047a17b0f7d35c5ebeadac4dee5f266abe604bed\"" Nov 12 20:53:59.747379 containerd[1465]: time="2024-11-12T20:53:59.747336063Z" level=info msg="StartContainer for \"39c1acd00a77bc84f0a485da047a17b0f7d35c5ebeadac4dee5f266abe604bed\"" Nov 12 20:53:59.785662 systemd[1]: Started cri-containerd-39c1acd00a77bc84f0a485da047a17b0f7d35c5ebeadac4dee5f266abe604bed.scope - libcontainer container 39c1acd00a77bc84f0a485da047a17b0f7d35c5ebeadac4dee5f266abe604bed. Nov 12 20:53:59.935027 containerd[1465]: time="2024-11-12T20:53:59.934887160Z" level=info msg="StartContainer for \"39c1acd00a77bc84f0a485da047a17b0f7d35c5ebeadac4dee5f266abe604bed\" returns successfully" Nov 12 20:54:00.460653 kubelet[2527]: E1112 20:54:00.460527 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:04.630903 kubelet[2527]: E1112 20:54:04.630861 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:04.641690 kubelet[2527]: I1112 20:54:04.641626 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rpnfz" podStartSLOduration=6.641606536 podStartE2EDuration="6.641606536s" podCreationTimestamp="2024-11-12 20:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-11-12 20:54:00.468466629 +0000 UTC m=+7.132429353" watchObservedRunningTime="2024-11-12 20:54:04.641606536 +0000 UTC m=+11.305569259" Nov 12 20:54:04.971405 kubelet[2527]: E1112 20:54:04.971318 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:12.293074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515266047.mount: Deactivated successfully. Nov 12 20:54:15.990616 containerd[1465]: time="2024-11-12T20:54:15.990541107Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:15.991376 containerd[1465]: time="2024-11-12T20:54:15.991332005Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735363" Nov 12 20:54:15.992612 containerd[1465]: time="2024-11-12T20:54:15.992584875Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:15.994196 containerd[1465]: time="2024-11-12T20:54:15.994161009Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.289946236s" Nov 12 20:54:15.994249 containerd[1465]: time="2024-11-12T20:54:15.994200168Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image 
reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 12 20:54:15.995114 containerd[1465]: time="2024-11-12T20:54:15.995086526Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 20:54:15.996078 containerd[1465]: time="2024-11-12T20:54:15.996050672Z" level=info msg="CreateContainer within sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 20:54:16.010909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365880437.mount: Deactivated successfully. Nov 12 20:54:16.013962 containerd[1465]: time="2024-11-12T20:54:16.013921819Z" level=info msg="CreateContainer within sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d\"" Nov 12 20:54:16.015134 containerd[1465]: time="2024-11-12T20:54:16.015105419Z" level=info msg="StartContainer for \"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d\"" Nov 12 20:54:16.047629 systemd[1]: Started cri-containerd-9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d.scope - libcontainer container 9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d. Nov 12 20:54:16.074756 containerd[1465]: time="2024-11-12T20:54:16.074686814Z" level=info msg="StartContainer for \"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d\" returns successfully" Nov 12 20:54:16.086199 systemd[1]: cri-containerd-9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d.scope: Deactivated successfully. 
Nov 12 20:54:16.432507 containerd[1465]: time="2024-11-12T20:54:16.432323529Z" level=info msg="shim disconnected" id=9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d namespace=k8s.io Nov 12 20:54:16.432507 containerd[1465]: time="2024-11-12T20:54:16.432389888Z" level=warning msg="cleaning up after shim disconnected" id=9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d namespace=k8s.io Nov 12 20:54:16.432507 containerd[1465]: time="2024-11-12T20:54:16.432400867Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:54:16.976611 kubelet[2527]: E1112 20:54:16.976553 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:16.979135 containerd[1465]: time="2024-11-12T20:54:16.979083758Z" level=info msg="CreateContainer within sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 20:54:16.994626 containerd[1465]: time="2024-11-12T20:54:16.994572291Z" level=info msg="CreateContainer within sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa\"" Nov 12 20:54:16.995297 containerd[1465]: time="2024-11-12T20:54:16.995141335Z" level=info msg="StartContainer for \"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa\"" Nov 12 20:54:17.008845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d-rootfs.mount: Deactivated successfully. 
Nov 12 20:54:17.021709 systemd[1]: Started cri-containerd-072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa.scope - libcontainer container 072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa. Nov 12 20:54:17.047734 containerd[1465]: time="2024-11-12T20:54:17.047686791Z" level=info msg="StartContainer for \"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa\" returns successfully" Nov 12 20:54:17.060026 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:54:17.060276 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:54:17.060352 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:54:17.065792 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:54:17.065994 systemd[1]: cri-containerd-072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa.scope: Deactivated successfully. Nov 12 20:54:17.084438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa-rootfs.mount: Deactivated successfully. Nov 12 20:54:17.092257 containerd[1465]: time="2024-11-12T20:54:17.092187305Z" level=info msg="shim disconnected" id=072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa namespace=k8s.io Nov 12 20:54:17.092257 containerd[1465]: time="2024-11-12T20:54:17.092249305Z" level=warning msg="cleaning up after shim disconnected" id=072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa namespace=k8s.io Nov 12 20:54:17.092257 containerd[1465]: time="2024-11-12T20:54:17.092258702Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:54:17.094349 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 12 20:54:17.981360 kubelet[2527]: E1112 20:54:17.981320 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:17.983549 containerd[1465]: time="2024-11-12T20:54:17.983506769Z" level=info msg="CreateContainer within sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 20:54:18.233807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551805572.mount: Deactivated successfully. Nov 12 20:54:18.482604 containerd[1465]: time="2024-11-12T20:54:18.482536321Z" level=info msg="CreateContainer within sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f\"" Nov 12 20:54:18.483123 containerd[1465]: time="2024-11-12T20:54:18.483081475Z" level=info msg="StartContainer for \"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f\"" Nov 12 20:54:18.508590 systemd[1]: run-containerd-runc-k8s.io-68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f-runc.sJvoQd.mount: Deactivated successfully. Nov 12 20:54:18.526726 systemd[1]: Started cri-containerd-68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f.scope - libcontainer container 68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f. Nov 12 20:54:18.557715 systemd[1]: cri-containerd-68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f.scope: Deactivated successfully. 
Nov 12 20:54:18.616175 containerd[1465]: time="2024-11-12T20:54:18.616089335Z" level=info msg="StartContainer for \"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f\" returns successfully" Nov 12 20:54:18.646447 containerd[1465]: time="2024-11-12T20:54:18.646354937Z" level=info msg="shim disconnected" id=68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f namespace=k8s.io Nov 12 20:54:18.646447 containerd[1465]: time="2024-11-12T20:54:18.646431333Z" level=warning msg="cleaning up after shim disconnected" id=68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f namespace=k8s.io Nov 12 20:54:18.646447 containerd[1465]: time="2024-11-12T20:54:18.646441432Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:54:18.984938 kubelet[2527]: E1112 20:54:18.984904 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:18.986974 containerd[1465]: time="2024-11-12T20:54:18.986934692Z" level=info msg="CreateContainer within sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 20:54:19.006746 containerd[1465]: time="2024-11-12T20:54:19.006692047Z" level=info msg="CreateContainer within sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8\"" Nov 12 20:54:19.007327 containerd[1465]: time="2024-11-12T20:54:19.007303380Z" level=info msg="StartContainer for \"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8\""
Nov 12 20:54:19.036648 systemd[1]: Started cri-containerd-87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8.scope - libcontainer container 87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8. Nov 12 20:54:19.064002 systemd[1]: cri-containerd-87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8.scope: Deactivated successfully. Nov 12 20:54:19.066914 containerd[1465]: time="2024-11-12T20:54:19.066870457Z" level=info msg="StartContainer for \"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8\" returns successfully" Nov 12 20:54:19.093130 containerd[1465]: time="2024-11-12T20:54:19.093058425Z" level=info msg="shim disconnected" id=87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8 namespace=k8s.io Nov 12 20:54:19.093130 containerd[1465]: time="2024-11-12T20:54:19.093122349Z" level=warning msg="cleaning up after shim disconnected" id=87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8 namespace=k8s.io Nov 12 20:54:19.093130 containerd[1465]: time="2024-11-12T20:54:19.093131596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:54:19.230390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f-rootfs.mount: Deactivated successfully.
Nov 12 20:54:19.989809 kubelet[2527]: E1112 20:54:19.989706 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:19.992403 containerd[1465]: time="2024-11-12T20:54:19.992349179Z" level=info msg="CreateContainer within sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 20:54:20.013728 containerd[1465]: time="2024-11-12T20:54:20.013658375Z" level=info msg="CreateContainer within sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\"" Nov 12 20:54:20.014372 containerd[1465]: time="2024-11-12T20:54:20.014340837Z" level=info msg="StartContainer for \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\"" Nov 12 20:54:20.053734 systemd[1]: Started cri-containerd-3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb.scope - libcontainer container 3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb. Nov 12 20:54:20.090348 containerd[1465]: time="2024-11-12T20:54:20.090286434Z" level=info msg="StartContainer for \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\" returns successfully" Nov 12 20:54:20.306803 kubelet[2527]: I1112 20:54:20.306618 2527 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Nov 12 20:54:20.527142 systemd[1]: Created slice kubepods-burstable-podb3630ea2_9ffc_4d57_94c0_56b059e53398.slice - libcontainer container kubepods-burstable-podb3630ea2_9ffc_4d57_94c0_56b059e53398.slice. Nov 12 20:54:20.536139 systemd[1]: Created slice kubepods-burstable-pod71f16fe8_2c95_45fa_a7ea_3a5d28730535.slice - libcontainer container kubepods-burstable-pod71f16fe8_2c95_45fa_a7ea_3a5d28730535.slice. 
Nov 12 20:54:20.585576 kubelet[2527]: I1112 20:54:20.585409 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71f16fe8-2c95-45fa-a7ea-3a5d28730535-config-volume\") pod \"coredns-6f6b679f8f-hslfb\" (UID: \"71f16fe8-2c95-45fa-a7ea-3a5d28730535\") " pod="kube-system/coredns-6f6b679f8f-hslfb" Nov 12 20:54:20.585576 kubelet[2527]: I1112 20:54:20.585452 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2vr\" (UniqueName: \"kubernetes.io/projected/71f16fe8-2c95-45fa-a7ea-3a5d28730535-kube-api-access-bw2vr\") pod \"coredns-6f6b679f8f-hslfb\" (UID: \"71f16fe8-2c95-45fa-a7ea-3a5d28730535\") " pod="kube-system/coredns-6f6b679f8f-hslfb" Nov 12 20:54:20.585576 kubelet[2527]: I1112 20:54:20.585491 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3630ea2-9ffc-4d57-94c0-56b059e53398-config-volume\") pod \"coredns-6f6b679f8f-xq7z5\" (UID: \"b3630ea2-9ffc-4d57-94c0-56b059e53398\") " pod="kube-system/coredns-6f6b679f8f-xq7z5" Nov 12 20:54:20.585576 kubelet[2527]: I1112 20:54:20.585512 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4lv\" (UniqueName: \"kubernetes.io/projected/b3630ea2-9ffc-4d57-94c0-56b059e53398-kube-api-access-rh4lv\") pod \"coredns-6f6b679f8f-xq7z5\" (UID: \"b3630ea2-9ffc-4d57-94c0-56b059e53398\") " pod="kube-system/coredns-6f6b679f8f-xq7z5" Nov 12 20:54:20.831867 kubelet[2527]: E1112 20:54:20.831814 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:54:20.832951 containerd[1465]: time="2024-11-12T20:54:20.832906599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xq7z5,Uid:b3630ea2-9ffc-4d57-94c0-56b059e53398,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:20.841071 kubelet[2527]: E1112 20:54:20.840912 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:20.842179 containerd[1465]: time="2024-11-12T20:54:20.841753623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hslfb,Uid:71f16fe8-2c95-45fa-a7ea-3a5d28730535,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:20.994850 kubelet[2527]: E1112 20:54:20.994735 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:21.247708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860257265.mount: Deactivated successfully. Nov 12 20:54:21.762180 containerd[1465]: time="2024-11-12T20:54:21.762101358Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:21.763075 containerd[1465]: time="2024-11-12T20:54:21.762996585Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907253" Nov 12 20:54:21.764754 containerd[1465]: time="2024-11-12T20:54:21.764726298Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:54:21.766600 containerd[1465]: time="2024-11-12T20:54:21.766561521Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.771440914s" Nov 12 20:54:21.766686 containerd[1465]: time="2024-11-12T20:54:21.766603447Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 12 20:54:21.772498 containerd[1465]: time="2024-11-12T20:54:21.772434689Z" level=info msg="CreateContainer within sandbox \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 20:54:21.790567 containerd[1465]: time="2024-11-12T20:54:21.790501747Z" level=info msg="CreateContainer within sandbox \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\"" Nov 12 20:54:21.791276 containerd[1465]: time="2024-11-12T20:54:21.791236125Z" level=info msg="StartContainer for \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\"" Nov 12 20:54:21.823747 systemd[1]: Started cri-containerd-c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589.scope - libcontainer container c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589.
Nov 12 20:54:21.852338 containerd[1465]: time="2024-11-12T20:54:21.852176299Z" level=info msg="StartContainer for \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\" returns successfully" Nov 12 20:54:21.997147 kubelet[2527]: E1112 20:54:21.997109 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:21.997690 kubelet[2527]: E1112 20:54:21.997369 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:22.359840 kubelet[2527]: I1112 20:54:22.359745 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gpjr8" podStartSLOduration=8.067303772 podStartE2EDuration="24.359722626s" podCreationTimestamp="2024-11-12 20:53:58 +0000 UTC" firstStartedPulling="2024-11-12 20:53:59.702538081 +0000 UTC m=+6.366500804" lastFinishedPulling="2024-11-12 20:54:15.994956935 +0000 UTC m=+22.658919658" observedRunningTime="2024-11-12 20:54:21.013415172 +0000 UTC m=+27.677377915" watchObservedRunningTime="2024-11-12 20:54:22.359722626 +0000 UTC m=+29.023685350" Nov 12 20:54:22.999496 kubelet[2527]: E1112 20:54:22.999435 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:23.000123 kubelet[2527]: E1112 20:54:22.999607 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:25.556727 systemd-networkd[1405]: cilium_host: Link UP Nov 12 20:54:25.557035 systemd-networkd[1405]: cilium_net: Link UP Nov 12 20:54:25.557314 systemd-networkd[1405]: cilium_net: Gained carrier
Nov 12 20:54:25.557613 systemd-networkd[1405]: cilium_host: Gained carrier Nov 12 20:54:25.619602 systemd-networkd[1405]: cilium_net: Gained IPv6LL Nov 12 20:54:25.724670 systemd-networkd[1405]: cilium_vxlan: Link UP Nov 12 20:54:25.724682 systemd-networkd[1405]: cilium_vxlan: Gained carrier Nov 12 20:54:25.786788 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:49044.service - OpenSSH per-connection server daemon (10.0.0.1:49044). Nov 12 20:54:25.807670 systemd-networkd[1405]: cilium_host: Gained IPv6LL Nov 12 20:54:25.838062 sshd[3452]: Accepted publickey for core from 10.0.0.1 port 49044 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:25.840339 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:25.847302 systemd-logind[1451]: New session 8 of user core. Nov 12 20:54:25.857821 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:54:25.963562 kernel: NET: Registered PF_ALG protocol family Nov 12 20:54:26.008646 sshd[3452]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:26.012845 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:49044.service: Deactivated successfully. Nov 12 20:54:26.014978 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:54:26.015752 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:54:26.016823 systemd-logind[1451]: Removed session 8.
Nov 12 20:54:26.671527 systemd-networkd[1405]: lxc_health: Link UP Nov 12 20:54:26.683020 systemd-networkd[1405]: lxc_health: Gained carrier Nov 12 20:54:26.929524 kubelet[2527]: E1112 20:54:26.929329 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:26.943037 systemd-networkd[1405]: lxcd65df021e294: Link UP Nov 12 20:54:26.950770 kubelet[2527]: I1112 20:54:26.949977 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-rkdc4" podStartSLOduration=6.892205859 podStartE2EDuration="28.949960559s" podCreationTimestamp="2024-11-12 20:53:58 +0000 UTC" firstStartedPulling="2024-11-12 20:53:59.709513038 +0000 UTC m=+6.373475761" lastFinishedPulling="2024-11-12 20:54:21.767267738 +0000 UTC m=+28.431230461" observedRunningTime="2024-11-12 20:54:22.360458338 +0000 UTC m=+29.024421061" watchObservedRunningTime="2024-11-12 20:54:26.949960559 +0000 UTC m=+33.613923282" Nov 12 20:54:26.955870 systemd-networkd[1405]: lxc2077c7261331: Link UP Nov 12 20:54:26.963741 kernel: eth0: renamed from tmpfe81a Nov 12 20:54:26.969530 kernel: eth0: renamed from tmpee3dd Nov 12 20:54:26.976859 systemd-networkd[1405]: lxcd65df021e294: Gained carrier Nov 12 20:54:26.978876 systemd-networkd[1405]: lxc2077c7261331: Gained carrier Nov 12 20:54:27.053720 systemd-networkd[1405]: cilium_vxlan: Gained IPv6LL Nov 12 20:54:28.717939 systemd-networkd[1405]: lxc_health: Gained IPv6LL Nov 12 20:54:28.718385 systemd-networkd[1405]: lxc2077c7261331: Gained IPv6LL Nov 12 20:54:28.873426 kubelet[2527]: I1112 20:54:28.873330 2527 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:54:28.874286 kubelet[2527]: E1112 20:54:28.873932 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:54:28.909866 systemd-networkd[1405]: lxcd65df021e294: Gained IPv6LL Nov 12 20:54:29.011855 kubelet[2527]: E1112 20:54:29.011664 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:30.599689 containerd[1465]: time="2024-11-12T20:54:30.599595398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:30.599689 containerd[1465]: time="2024-11-12T20:54:30.599691713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:30.600648 containerd[1465]: time="2024-11-12T20:54:30.599722509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:30.600648 containerd[1465]: time="2024-11-12T20:54:30.599816740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:30.600878 containerd[1465]: time="2024-11-12T20:54:30.600746484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:30.600878 containerd[1465]: time="2024-11-12T20:54:30.600840925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:30.600878 containerd[1465]: time="2024-11-12T20:54:30.600859068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:30.601010 containerd[1465]: time="2024-11-12T20:54:30.600973005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:30.632644 systemd[1]: Started cri-containerd-ee3ddf174478cb248f1db1e2a1d3d64c066546afedd3561962e261cccbd4059f.scope - libcontainer container ee3ddf174478cb248f1db1e2a1d3d64c066546afedd3561962e261cccbd4059f. Nov 12 20:54:30.634697 systemd[1]: Started cri-containerd-fe81a8ced5f54c146f3193ca4f1707ad783037fac2f7c0873cd53a7aa2713e30.scope - libcontainer container fe81a8ced5f54c146f3193ca4f1707ad783037fac2f7c0873cd53a7aa2713e30. Nov 12 20:54:30.648279 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:54:30.650863 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:54:30.676185 containerd[1465]: time="2024-11-12T20:54:30.676106722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hslfb,Uid:71f16fe8-2c95-45fa-a7ea-3a5d28730535,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee3ddf174478cb248f1db1e2a1d3d64c066546afedd3561962e261cccbd4059f\"" Nov 12 20:54:30.676981 kubelet[2527]: E1112 20:54:30.676948 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:30.679537 containerd[1465]: time="2024-11-12T20:54:30.679461570Z" level=info msg="CreateContainer within sandbox \"ee3ddf174478cb248f1db1e2a1d3d64c066546afedd3561962e261cccbd4059f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:54:30.686389 containerd[1465]: time="2024-11-12T20:54:30.686310879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xq7z5,Uid:b3630ea2-9ffc-4d57-94c0-56b059e53398,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe81a8ced5f54c146f3193ca4f1707ad783037fac2f7c0873cd53a7aa2713e30\"" Nov 12 20:54:30.687301 kubelet[2527]: E1112 20:54:30.687223 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:54:30.689859 containerd[1465]: time="2024-11-12T20:54:30.689774145Z" level=info msg="CreateContainer within sandbox \"fe81a8ced5f54c146f3193ca4f1707ad783037fac2f7c0873cd53a7aa2713e30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:54:30.716271 containerd[1465]: time="2024-11-12T20:54:30.716204471Z" level=info msg="CreateContainer within sandbox \"fe81a8ced5f54c146f3193ca4f1707ad783037fac2f7c0873cd53a7aa2713e30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39087e321667aa2d79c17bedad37f4dd4a1f7776e74600517cc845c735fdac95\"" Nov 12 20:54:30.717044 containerd[1465]: time="2024-11-12T20:54:30.716917302Z" level=info msg="StartContainer for \"39087e321667aa2d79c17bedad37f4dd4a1f7776e74600517cc845c735fdac95\"" Nov 12 20:54:30.731972 containerd[1465]: time="2024-11-12T20:54:30.731899427Z" level=info msg="CreateContainer within sandbox \"ee3ddf174478cb248f1db1e2a1d3d64c066546afedd3561962e261cccbd4059f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4735e909389e7dc281d1c6ea394ba93f5129a8d995cb87e3888473090faa9b63\"" Nov 12 20:54:30.733073 containerd[1465]: time="2024-11-12T20:54:30.733002305Z" level=info msg="StartContainer for \"4735e909389e7dc281d1c6ea394ba93f5129a8d995cb87e3888473090faa9b63\"" Nov 12 20:54:30.755676 systemd[1]: Started cri-containerd-39087e321667aa2d79c17bedad37f4dd4a1f7776e74600517cc845c735fdac95.scope - libcontainer container 39087e321667aa2d79c17bedad37f4dd4a1f7776e74600517cc845c735fdac95. Nov 12 20:54:30.771873 systemd[1]: Started cri-containerd-4735e909389e7dc281d1c6ea394ba93f5129a8d995cb87e3888473090faa9b63.scope - libcontainer container 4735e909389e7dc281d1c6ea394ba93f5129a8d995cb87e3888473090faa9b63.
Nov 12 20:54:30.925957 containerd[1465]: time="2024-11-12T20:54:30.925793984Z" level=info msg="StartContainer for \"39087e321667aa2d79c17bedad37f4dd4a1f7776e74600517cc845c735fdac95\" returns successfully" Nov 12 20:54:30.925957 containerd[1465]: time="2024-11-12T20:54:30.925921515Z" level=info msg="StartContainer for \"4735e909389e7dc281d1c6ea394ba93f5129a8d995cb87e3888473090faa9b63\" returns successfully" Nov 12 20:54:31.029287 kubelet[2527]: E1112 20:54:31.029240 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:31.034787 kubelet[2527]: E1112 20:54:31.034738 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:31.041043 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:49056.service - OpenSSH per-connection server daemon (10.0.0.1:49056). Nov 12 20:54:31.080634 sshd[3910]: Accepted publickey for core from 10.0.0.1 port 49056 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:31.083032 sshd[3910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:31.087922 systemd-logind[1451]: New session 9 of user core. Nov 12 20:54:31.100637 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 12 20:54:31.148752 kubelet[2527]: I1112 20:54:31.148659 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hslfb" podStartSLOduration=33.148638179 podStartE2EDuration="33.148638179s" podCreationTimestamp="2024-11-12 20:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:31.14697954 +0000 UTC m=+37.810942274" watchObservedRunningTime="2024-11-12 20:54:31.148638179 +0000 UTC m=+37.812600902" Nov 12 20:54:31.294817 kubelet[2527]: I1112 20:54:31.294736 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xq7z5" podStartSLOduration=33.294697019 podStartE2EDuration="33.294697019s" podCreationTimestamp="2024-11-12 20:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:31.293725737 +0000 UTC m=+37.957688460" watchObservedRunningTime="2024-11-12 20:54:31.294697019 +0000 UTC m=+37.958659742" Nov 12 20:54:31.303446 sshd[3910]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:31.309699 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:49056.service: Deactivated successfully. Nov 12 20:54:31.312366 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:54:31.313165 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:54:31.314627 systemd-logind[1451]: Removed session 9. Nov 12 20:54:31.605448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount117619955.mount: Deactivated successfully. 
Nov 12 20:54:32.037079 kubelet[2527]: E1112 20:54:32.036790 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:32.037079 kubelet[2527]: E1112 20:54:32.036835 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:33.038553 kubelet[2527]: E1112 20:54:33.038513 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:33.039115 kubelet[2527]: E1112 20:54:33.038649 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:36.315768 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:48810.service - OpenSSH per-connection server daemon (10.0.0.1:48810). Nov 12 20:54:36.373996 sshd[3956]: Accepted publickey for core from 10.0.0.1 port 48810 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:36.376384 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:36.381853 systemd-logind[1451]: New session 10 of user core. Nov 12 20:54:36.392788 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:54:36.559810 sshd[3956]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:36.564546 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:48810.service: Deactivated successfully. Nov 12 20:54:36.567242 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:54:36.568169 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:54:36.569088 systemd-logind[1451]: Removed session 10. 
Nov 12 20:54:41.571642 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:48826.service - OpenSSH per-connection server daemon (10.0.0.1:48826). Nov 12 20:54:41.612091 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 48826 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:41.614067 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:41.618669 systemd-logind[1451]: New session 11 of user core. Nov 12 20:54:41.625657 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:54:41.739520 sshd[3972]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:41.744523 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:48826.service: Deactivated successfully. Nov 12 20:54:41.746518 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:54:41.747221 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:54:41.748319 systemd-logind[1451]: Removed session 11. Nov 12 20:54:46.755228 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:57954.service - OpenSSH per-connection server daemon (10.0.0.1:57954). Nov 12 20:54:46.797773 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 57954 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:46.799815 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:46.804351 systemd-logind[1451]: New session 12 of user core. Nov 12 20:54:46.811763 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:54:46.938607 sshd[3988]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:46.943957 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:57954.service: Deactivated successfully. Nov 12 20:54:46.947260 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:54:46.948088 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. 
Nov 12 20:54:46.949414 systemd-logind[1451]: Removed session 12. Nov 12 20:54:51.953091 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:57958.service - OpenSSH per-connection server daemon (10.0.0.1:57958). Nov 12 20:54:52.044227 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 57958 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:52.046384 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:52.051358 systemd-logind[1451]: New session 13 of user core. Nov 12 20:54:52.060775 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:54:52.211383 sshd[4003]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:52.226152 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:57958.service: Deactivated successfully. Nov 12 20:54:52.228307 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:54:52.229893 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:54:52.232286 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:57964.service - OpenSSH per-connection server daemon (10.0.0.1:57964). Nov 12 20:54:52.233287 systemd-logind[1451]: Removed session 13. Nov 12 20:54:52.272401 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 57964 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:52.274467 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:52.279040 systemd-logind[1451]: New session 14 of user core. Nov 12 20:54:52.290619 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:54:52.525002 sshd[4019]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:52.537534 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:57964.service: Deactivated successfully. Nov 12 20:54:52.539556 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:54:52.541534 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit.
Nov 12 20:54:52.558074 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:57980.service - OpenSSH per-connection server daemon (10.0.0.1:57980). Nov 12 20:54:52.560839 systemd-logind[1451]: Removed session 14. Nov 12 20:54:52.601934 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 57980 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:52.604064 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:52.609070 systemd-logind[1451]: New session 15 of user core. Nov 12 20:54:52.622723 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:54:52.753424 sshd[4031]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:52.757229 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:57980.service: Deactivated successfully. Nov 12 20:54:52.759146 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:54:52.759755 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:54:52.760599 systemd-logind[1451]: Removed session 15. Nov 12 20:54:57.767697 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:50396.service - OpenSSH per-connection server daemon (10.0.0.1:50396). Nov 12 20:54:57.811398 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 50396 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:54:57.813343 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:57.817948 systemd-logind[1451]: New session 16 of user core. Nov 12 20:54:57.824748 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:54:57.940594 sshd[4047]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:57.945037 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:50396.service: Deactivated successfully. Nov 12 20:54:57.947660 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 20:54:57.948778 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit.
Nov 12 20:54:57.950000 systemd-logind[1451]: Removed session 16.
Nov 12 20:55:02.952581 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:50408.service - OpenSSH per-connection server daemon (10.0.0.1:50408).
Nov 12 20:55:02.999087 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 50408 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:03.003467 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:03.009158 systemd-logind[1451]: New session 17 of user core.
Nov 12 20:55:03.018632 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 20:55:03.136992 sshd[4064]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:03.149297 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:50408.service: Deactivated successfully.
Nov 12 20:55:03.151084 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 20:55:03.152667 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit.
Nov 12 20:55:03.154121 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:50420.service - OpenSSH per-connection server daemon (10.0.0.1:50420).
Nov 12 20:55:03.155044 systemd-logind[1451]: Removed session 17.
Nov 12 20:55:03.194726 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 50420 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:03.196465 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:03.200757 systemd-logind[1451]: New session 18 of user core.
Nov 12 20:55:03.211619 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 20:55:03.647821 sshd[4079]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:03.656522 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:50420.service: Deactivated successfully.
Nov 12 20:55:03.658536 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 20:55:03.660225 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit.
Nov 12 20:55:03.667262 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:50432.service - OpenSSH per-connection server daemon (10.0.0.1:50432).
Nov 12 20:55:03.668791 systemd-logind[1451]: Removed session 18.
Nov 12 20:55:03.708785 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 50432 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:03.710975 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:03.716691 systemd-logind[1451]: New session 19 of user core.
Nov 12 20:55:03.730818 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 20:55:05.371791 sshd[4091]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:05.383759 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:50432.service: Deactivated successfully.
Nov 12 20:55:05.386394 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 20:55:05.389613 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit.
Nov 12 20:55:05.398953 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:50444.service - OpenSSH per-connection server daemon (10.0.0.1:50444).
Nov 12 20:55:05.400440 systemd-logind[1451]: Removed session 19.
Nov 12 20:55:05.440396 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 50444 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:05.443097 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:05.450143 systemd-logind[1451]: New session 20 of user core.
Nov 12 20:55:05.470874 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 20:55:05.734654 sshd[4110]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:05.742920 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:50444.service: Deactivated successfully.
Nov 12 20:55:05.746254 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 20:55:05.748247 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit.
Nov 12 20:55:05.757903 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:42490.service - OpenSSH per-connection server daemon (10.0.0.1:42490).
Nov 12 20:55:05.759068 systemd-logind[1451]: Removed session 20.
Nov 12 20:55:05.798850 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 42490 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:05.801342 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:05.806447 systemd-logind[1451]: New session 21 of user core.
Nov 12 20:55:05.815688 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 20:55:05.953946 sshd[4122]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:05.958389 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:42490.service: Deactivated successfully.
Nov 12 20:55:05.960919 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 20:55:05.961683 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit.
Nov 12 20:55:05.962812 systemd-logind[1451]: Removed session 21.
Nov 12 20:55:08.437852 kubelet[2527]: E1112 20:55:08.437782 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:09.438141 kubelet[2527]: E1112 20:55:09.438092 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:10.966622 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:42506.service - OpenSSH per-connection server daemon (10.0.0.1:42506).
Nov 12 20:55:11.008458 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 42506 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:11.011139 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:11.016204 systemd-logind[1451]: New session 22 of user core.
Nov 12 20:55:11.025832 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 20:55:11.142277 sshd[4136]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:11.146787 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:42506.service: Deactivated successfully.
Nov 12 20:55:11.149200 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 20:55:11.149896 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit.
Nov 12 20:55:11.151016 systemd-logind[1451]: Removed session 22.
Nov 12 20:55:16.167935 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:33208.service - OpenSSH per-connection server daemon (10.0.0.1:33208).
Nov 12 20:55:16.204084 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 33208 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:16.206030 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:16.210446 systemd-logind[1451]: New session 23 of user core.
Nov 12 20:55:16.220714 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:55:16.442871 sshd[4154]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:16.448681 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:33208.service: Deactivated successfully.
Nov 12 20:55:16.451776 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:55:16.452582 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:55:16.453836 systemd-logind[1451]: Removed session 23.
Nov 12 20:55:21.461761 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:33224.service - OpenSSH per-connection server daemon (10.0.0.1:33224).
Nov 12 20:55:21.502102 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 33224 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:21.504068 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:21.508804 systemd-logind[1451]: New session 24 of user core.
Nov 12 20:55:21.516633 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:55:21.629912 sshd[4168]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:21.634925 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:33224.service: Deactivated successfully.
Nov 12 20:55:21.637261 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:55:21.637957 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:55:21.639079 systemd-logind[1451]: Removed session 24.
Nov 12 20:55:24.438130 kubelet[2527]: E1112 20:55:24.437885 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:26.642421 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:33202.service - OpenSSH per-connection server daemon (10.0.0.1:33202).
Nov 12 20:55:26.684991 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 33202 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:26.687120 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:26.692614 systemd-logind[1451]: New session 25 of user core.
Nov 12 20:55:26.703807 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 20:55:26.816331 sshd[4183]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:26.819672 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:33202.service: Deactivated successfully.
Nov 12 20:55:26.822287 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 20:55:26.824563 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit.
Nov 12 20:55:26.826206 systemd-logind[1451]: Removed session 25.
Nov 12 20:55:30.438186 kubelet[2527]: E1112 20:55:30.438100 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:31.828540 systemd[1]: Started sshd@25-10.0.0.134:22-10.0.0.1:33204.service - OpenSSH per-connection server daemon (10.0.0.1:33204).
Nov 12 20:55:31.868750 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 33204 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:31.870733 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:31.875869 systemd-logind[1451]: New session 26 of user core.
Nov 12 20:55:31.891791 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 20:55:32.007625 sshd[4199]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:32.019466 systemd[1]: sshd@25-10.0.0.134:22-10.0.0.1:33204.service: Deactivated successfully.
Nov 12 20:55:32.021405 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 20:55:32.023118 systemd-logind[1451]: Session 26 logged out. Waiting for processes to exit.
Nov 12 20:55:32.030767 systemd[1]: Started sshd@26-10.0.0.134:22-10.0.0.1:33212.service - OpenSSH per-connection server daemon (10.0.0.1:33212).
Nov 12 20:55:32.031746 systemd-logind[1451]: Removed session 26.
Nov 12 20:55:32.065776 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 33212 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:32.067292 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:32.071411 systemd-logind[1451]: New session 27 of user core.
Nov 12 20:55:32.081601 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 12 20:55:32.437870 kubelet[2527]: E1112 20:55:32.437580 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:33.555444 containerd[1465]: time="2024-11-12T20:55:33.555260034Z" level=info msg="StopContainer for \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\" with timeout 30 (s)"
Nov 12 20:55:33.556297 containerd[1465]: time="2024-11-12T20:55:33.556267415Z" level=info msg="Stop container \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\" with signal terminated"
Nov 12 20:55:33.567326 systemd[1]: cri-containerd-c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589.scope: Deactivated successfully.
Nov 12 20:55:33.583608 containerd[1465]: time="2024-11-12T20:55:33.583546738Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:55:33.592373 containerd[1465]: time="2024-11-12T20:55:33.592331486Z" level=info msg="StopContainer for \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\" with timeout 2 (s)"
Nov 12 20:55:33.593586 containerd[1465]: time="2024-11-12T20:55:33.592650058Z" level=info msg="Stop container \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\" with signal terminated"
Nov 12 20:55:33.594695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589-rootfs.mount: Deactivated successfully.
Nov 12 20:55:33.602285 systemd-networkd[1405]: lxc_health: Link DOWN
Nov 12 20:55:33.602299 systemd-networkd[1405]: lxc_health: Lost carrier
Nov 12 20:55:33.607282 containerd[1465]: time="2024-11-12T20:55:33.607208631Z" level=info msg="shim disconnected" id=c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589 namespace=k8s.io
Nov 12 20:55:33.607282 containerd[1465]: time="2024-11-12T20:55:33.607263663Z" level=warning msg="cleaning up after shim disconnected" id=c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589 namespace=k8s.io
Nov 12 20:55:33.607282 containerd[1465]: time="2024-11-12T20:55:33.607271738Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:33.628302 containerd[1465]: time="2024-11-12T20:55:33.628235139Z" level=info msg="StopContainer for \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\" returns successfully"
Nov 12 20:55:33.633562 systemd[1]: cri-containerd-3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb.scope: Deactivated successfully.
Nov 12 20:55:33.633938 systemd[1]: cri-containerd-3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb.scope: Consumed 7.565s CPU time.
Nov 12 20:55:33.634402 containerd[1465]: time="2024-11-12T20:55:33.634355107Z" level=info msg="StopPodSandbox for \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\""
Nov 12 20:55:33.634497 containerd[1465]: time="2024-11-12T20:55:33.634410400Z" level=info msg="Container to stop \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:33.637149 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747-shm.mount: Deactivated successfully.
Nov 12 20:55:33.645820 systemd[1]: cri-containerd-e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747.scope: Deactivated successfully.
Nov 12 20:55:33.660184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb-rootfs.mount: Deactivated successfully.
Nov 12 20:55:33.668805 containerd[1465]: time="2024-11-12T20:55:33.668716173Z" level=info msg="shim disconnected" id=3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb namespace=k8s.io
Nov 12 20:55:33.668805 containerd[1465]: time="2024-11-12T20:55:33.668790372Z" level=warning msg="cleaning up after shim disconnected" id=3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb namespace=k8s.io
Nov 12 20:55:33.668805 containerd[1465]: time="2024-11-12T20:55:33.668802044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:33.676593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747-rootfs.mount: Deactivated successfully.
Nov 12 20:55:33.708966 containerd[1465]: time="2024-11-12T20:55:33.708877756Z" level=info msg="shim disconnected" id=e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747 namespace=k8s.io
Nov 12 20:55:33.708966 containerd[1465]: time="2024-11-12T20:55:33.708936996Z" level=warning msg="cleaning up after shim disconnected" id=e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747 namespace=k8s.io
Nov 12 20:55:33.708966 containerd[1465]: time="2024-11-12T20:55:33.708948217Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:33.724516 containerd[1465]: time="2024-11-12T20:55:33.724401131Z" level=info msg="StopContainer for \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\" returns successfully"
Nov 12 20:55:33.725249 containerd[1465]: time="2024-11-12T20:55:33.725201728Z" level=info msg="StopPodSandbox for \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\""
Nov 12 20:55:33.725249 containerd[1465]: time="2024-11-12T20:55:33.725250378Z" level=info msg="Container to stop \"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:33.725249 containerd[1465]: time="2024-11-12T20:55:33.725264254Z" level=info msg="Container to stop \"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:33.725249 containerd[1465]: time="2024-11-12T20:55:33.725273682Z" level=info msg="Container to stop \"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:33.725249 containerd[1465]: time="2024-11-12T20:55:33.725283370Z" level=info msg="Container to stop \"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:33.725732 containerd[1465]: time="2024-11-12T20:55:33.725293578Z" level=info msg="Container to stop \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:33.734129 systemd[1]: cri-containerd-bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171.scope: Deactivated successfully.
Nov 12 20:55:33.753374 containerd[1465]: time="2024-11-12T20:55:33.753072058Z" level=info msg="TearDown network for sandbox \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\" successfully"
Nov 12 20:55:33.753374 containerd[1465]: time="2024-11-12T20:55:33.753125638Z" level=info msg="StopPodSandbox for \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\" returns successfully"
Nov 12 20:55:33.766766 containerd[1465]: time="2024-11-12T20:55:33.766453004Z" level=info msg="shim disconnected" id=bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171 namespace=k8s.io
Nov 12 20:55:33.766766 containerd[1465]: time="2024-11-12T20:55:33.766546007Z" level=warning msg="cleaning up after shim disconnected" id=bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171 namespace=k8s.io
Nov 12 20:55:33.766766 containerd[1465]: time="2024-11-12T20:55:33.766555615Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:33.785057 containerd[1465]: time="2024-11-12T20:55:33.784849376Z" level=info msg="TearDown network for sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" successfully"
Nov 12 20:55:33.785057 containerd[1465]: time="2024-11-12T20:55:33.784891163Z" level=info msg="StopPodSandbox for \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" returns successfully"
Nov 12 20:55:33.807018 kubelet[2527]: I1112 20:55:33.805866 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-lib-modules\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807018 kubelet[2527]: I1112 20:55:33.805923 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cilium-cgroup\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807018 kubelet[2527]: I1112 20:55:33.805947 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-bpf-maps\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807018 kubelet[2527]: I1112 20:55:33.805978 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bfa9b3c-019f-40bc-88f8-32f7332c08fe-cilium-config-path\") pod \"7bfa9b3c-019f-40bc-88f8-32f7332c08fe\" (UID: \"7bfa9b3c-019f-40bc-88f8-32f7332c08fe\") "
Nov 12 20:55:33.807018 kubelet[2527]: I1112 20:55:33.806005 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7v56\" (UniqueName: \"kubernetes.io/projected/7bfa9b3c-019f-40bc-88f8-32f7332c08fe-kube-api-access-q7v56\") pod \"7bfa9b3c-019f-40bc-88f8-32f7332c08fe\" (UID: \"7bfa9b3c-019f-40bc-88f8-32f7332c08fe\") "
Nov 12 20:55:33.807018 kubelet[2527]: I1112 20:55:33.806027 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93bd25fa-a172-4014-badf-9010674439c3-cilium-config-path\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807641 kubelet[2527]: I1112 20:55:33.806049 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xt2m\" (UniqueName: \"kubernetes.io/projected/93bd25fa-a172-4014-badf-9010674439c3-kube-api-access-9xt2m\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807641 kubelet[2527]: I1112 20:55:33.806069 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-etc-cni-netd\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807641 kubelet[2527]: I1112 20:55:33.806087 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-host-proc-sys-net\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807641 kubelet[2527]: I1112 20:55:33.806106 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cilium-run\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807641 kubelet[2527]: I1112 20:55:33.806129 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93bd25fa-a172-4014-badf-9010674439c3-hubble-tls\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807641 kubelet[2527]: I1112 20:55:33.806151 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-xtables-lock\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807784 kubelet[2527]: I1112 20:55:33.806171 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-hostproc\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807784 kubelet[2527]: I1112 20:55:33.806200 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-host-proc-sys-kernel\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807784 kubelet[2527]: I1112 20:55:33.806222 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93bd25fa-a172-4014-badf-9010674439c3-clustermesh-secrets\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807784 kubelet[2527]: I1112 20:55:33.806245 2527 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cni-path\") pod \"93bd25fa-a172-4014-badf-9010674439c3\" (UID: \"93bd25fa-a172-4014-badf-9010674439c3\") "
Nov 12 20:55:33.807784 kubelet[2527]: I1112 20:55:33.806116 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:55:33.807784 kubelet[2527]: I1112 20:55:33.806147 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:55:33.807958 kubelet[2527]: I1112 20:55:33.806161 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:55:33.807958 kubelet[2527]: I1112 20:55:33.806203 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:55:33.807958 kubelet[2527]: I1112 20:55:33.806565 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cni-path" (OuterVolumeSpecName: "cni-path") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:55:33.807958 kubelet[2527]: I1112 20:55:33.806645 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:55:33.807958 kubelet[2527]: I1112 20:55:33.806676 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-hostproc" (OuterVolumeSpecName: "hostproc") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:55:33.808117 kubelet[2527]: I1112 20:55:33.806693 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:55:33.808117 kubelet[2527]: I1112 20:55:33.807166 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:55:33.809843 kubelet[2527]: I1112 20:55:33.809766 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:55:33.812063 kubelet[2527]: I1112 20:55:33.811094 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93bd25fa-a172-4014-badf-9010674439c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 12 20:55:33.812063 kubelet[2527]: I1112 20:55:33.811998 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93bd25fa-a172-4014-badf-9010674439c3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 12 20:55:33.814030 kubelet[2527]: I1112 20:55:33.813988 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93bd25fa-a172-4014-badf-9010674439c3-kube-api-access-9xt2m" (OuterVolumeSpecName: "kube-api-access-9xt2m") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "kube-api-access-9xt2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 12 20:55:33.814339 kubelet[2527]: I1112 20:55:33.814301 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bfa9b3c-019f-40bc-88f8-32f7332c08fe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7bfa9b3c-019f-40bc-88f8-32f7332c08fe" (UID: "7bfa9b3c-019f-40bc-88f8-32f7332c08fe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 12 20:55:33.814619 kubelet[2527]: I1112 20:55:33.814583 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93bd25fa-a172-4014-badf-9010674439c3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "93bd25fa-a172-4014-badf-9010674439c3" (UID: "93bd25fa-a172-4014-badf-9010674439c3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 12 20:55:33.814926 kubelet[2527]: I1112 20:55:33.814891 2527 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bfa9b3c-019f-40bc-88f8-32f7332c08fe-kube-api-access-q7v56" (OuterVolumeSpecName: "kube-api-access-q7v56") pod "7bfa9b3c-019f-40bc-88f8-32f7332c08fe" (UID: "7bfa9b3c-019f-40bc-88f8-32f7332c08fe"). InnerVolumeSpecName "kube-api-access-q7v56". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 12 20:55:33.906454 kubelet[2527]: I1112 20:55:33.906395 2527 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Nov 12 20:55:33.906454 kubelet[2527]: I1112 20:55:33.906438 2527 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93bd25fa-a172-4014-badf-9010674439c3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Nov 12 20:55:33.906454 kubelet[2527]: I1112 20:55:33.906448 2527 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cni-path\") on node \"localhost\" DevicePath \"\""
Nov 12 20:55:33.906454 kubelet[2527]: I1112 20:55:33.906456 2527 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bfa9b3c-019f-40bc-88f8-32f7332c08fe-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 12 20:55:33.906454 kubelet[2527]: I1112 20:55:33.906465 2527 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-lib-modules\") on node \"localhost\" DevicePath \"\""
Nov 12 20:55:33.906454 kubelet[2527]: I1112 20:55:33.906473 2527 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Nov 12 20:55:33.906766 kubelet[2527]: I1112 20:55:33.906500 2527 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-bpf-maps\") on node \"localhost\" DevicePath \"\""
Nov 12 20:55:33.906766 kubelet[2527]: I1112
20:55:33.906508 2527 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-q7v56\" (UniqueName: \"kubernetes.io/projected/7bfa9b3c-019f-40bc-88f8-32f7332c08fe-kube-api-access-q7v56\") on node \"localhost\" DevicePath \"\"" Nov 12 20:55:33.906766 kubelet[2527]: I1112 20:55:33.906516 2527 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93bd25fa-a172-4014-badf-9010674439c3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 20:55:33.906766 kubelet[2527]: I1112 20:55:33.906524 2527 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9xt2m\" (UniqueName: \"kubernetes.io/projected/93bd25fa-a172-4014-badf-9010674439c3-kube-api-access-9xt2m\") on node \"localhost\" DevicePath \"\"" Nov 12 20:55:33.906766 kubelet[2527]: I1112 20:55:33.906531 2527 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 20:55:33.906766 kubelet[2527]: I1112 20:55:33.906538 2527 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 20:55:33.906766 kubelet[2527]: I1112 20:55:33.906548 2527 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 20:55:33.906766 kubelet[2527]: I1112 20:55:33.906555 2527 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93bd25fa-a172-4014-badf-9010674439c3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 12 20:55:33.906967 kubelet[2527]: I1112 20:55:33.906563 2527 reconciler_common.go:288] "Volume detached for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 20:55:33.906967 kubelet[2527]: I1112 20:55:33.906570 2527 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93bd25fa-a172-4014-badf-9010674439c3-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 12 20:55:34.177622 kubelet[2527]: I1112 20:55:34.177464 2527 scope.go:117] "RemoveContainer" containerID="c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589" Nov 12 20:55:34.180797 containerd[1465]: time="2024-11-12T20:55:34.180735240Z" level=info msg="RemoveContainer for \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\"" Nov 12 20:55:34.184631 systemd[1]: Removed slice kubepods-besteffort-pod7bfa9b3c_019f_40bc_88f8_32f7332c08fe.slice - libcontainer container kubepods-besteffort-pod7bfa9b3c_019f_40bc_88f8_32f7332c08fe.slice. Nov 12 20:55:34.327975 systemd[1]: Removed slice kubepods-burstable-pod93bd25fa_a172_4014_badf_9010674439c3.slice - libcontainer container kubepods-burstable-pod93bd25fa_a172_4014_badf_9010674439c3.slice. Nov 12 20:55:34.328094 systemd[1]: kubepods-burstable-pod93bd25fa_a172_4014_badf_9010674439c3.slice: Consumed 7.673s CPU time. 
Nov 12 20:55:34.491509 containerd[1465]: time="2024-11-12T20:55:34.491415689Z" level=info msg="RemoveContainer for \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\" returns successfully"
Nov 12 20:55:34.491832 kubelet[2527]: I1112 20:55:34.491790 2527 scope.go:117] "RemoveContainer" containerID="c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589"
Nov 12 20:55:34.494858 containerd[1465]: time="2024-11-12T20:55:34.494809796Z" level=error msg="ContainerStatus for \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\": not found"
Nov 12 20:55:34.502692 kubelet[2527]: E1112 20:55:34.502640 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\": not found" containerID="c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589"
Nov 12 20:55:34.502756 kubelet[2527]: I1112 20:55:34.502684 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589"} err="failed to get container status \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\": rpc error: code = NotFound desc = an error occurred when try to find container \"c147d6f1e009c83a0903ff529ffe269746663f19a328bd13e003e13087779589\": not found"
Nov 12 20:55:34.502803 kubelet[2527]: I1112 20:55:34.502758 2527 scope.go:117] "RemoveContainer" containerID="3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb"
Nov 12 20:55:34.503662 containerd[1465]: time="2024-11-12T20:55:34.503636873Z" level=info msg="RemoveContainer for \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\""
Nov 12 20:55:34.550202 containerd[1465]: time="2024-11-12T20:55:34.550129877Z" level=info msg="RemoveContainer for \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\" returns successfully"
Nov 12 20:55:34.550494 kubelet[2527]: I1112 20:55:34.550443 2527 scope.go:117] "RemoveContainer" containerID="87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8"
Nov 12 20:55:34.551573 containerd[1465]: time="2024-11-12T20:55:34.551537622Z" level=info msg="RemoveContainer for \"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8\""
Nov 12 20:55:34.559185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171-rootfs.mount: Deactivated successfully.
Nov 12 20:55:34.559357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171-shm.mount: Deactivated successfully.
Nov 12 20:55:34.559511 systemd[1]: var-lib-kubelet-pods-7bfa9b3c\x2d019f\x2d40bc\x2d88f8\x2d32f7332c08fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq7v56.mount: Deactivated successfully.
Nov 12 20:55:34.559632 systemd[1]: var-lib-kubelet-pods-93bd25fa\x2da172\x2d4014\x2dbadf\x2d9010674439c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9xt2m.mount: Deactivated successfully.
Nov 12 20:55:34.559748 systemd[1]: var-lib-kubelet-pods-93bd25fa\x2da172\x2d4014\x2dbadf\x2d9010674439c3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Nov 12 20:55:34.559864 systemd[1]: var-lib-kubelet-pods-93bd25fa\x2da172\x2d4014\x2dbadf\x2d9010674439c3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 12 20:55:34.565432 containerd[1465]: time="2024-11-12T20:55:34.565364820Z" level=info msg="RemoveContainer for \"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8\" returns successfully"
Nov 12 20:55:34.566000 kubelet[2527]: I1112 20:55:34.565802 2527 scope.go:117] "RemoveContainer" containerID="68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f"
Nov 12 20:55:34.567037 containerd[1465]: time="2024-11-12T20:55:34.566980191Z" level=info msg="RemoveContainer for \"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f\""
Nov 12 20:55:34.576379 containerd[1465]: time="2024-11-12T20:55:34.576310073Z" level=info msg="RemoveContainer for \"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f\" returns successfully"
Nov 12 20:55:34.576849 kubelet[2527]: I1112 20:55:34.576717 2527 scope.go:117] "RemoveContainer" containerID="072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa"
Nov 12 20:55:34.578258 containerd[1465]: time="2024-11-12T20:55:34.578218940Z" level=info msg="RemoveContainer for \"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa\""
Nov 12 20:55:34.586184 containerd[1465]: time="2024-11-12T20:55:34.586123785Z" level=info msg="RemoveContainer for \"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa\" returns successfully"
Nov 12 20:55:34.586576 kubelet[2527]: I1112 20:55:34.586463 2527 scope.go:117] "RemoveContainer" containerID="9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d"
Nov 12 20:55:34.588285 containerd[1465]: time="2024-11-12T20:55:34.588246158Z" level=info msg="RemoveContainer for \"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d\""
Nov 12 20:55:34.594147 containerd[1465]: time="2024-11-12T20:55:34.594091086Z" level=info msg="RemoveContainer for \"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d\" returns successfully"
Nov 12 20:55:34.594532 kubelet[2527]: I1112 20:55:34.594351 2527 scope.go:117] "RemoveContainer" containerID="3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb"
Nov 12 20:55:34.594692 containerd[1465]: time="2024-11-12T20:55:34.594642030Z" level=error msg="ContainerStatus for \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\": not found"
Nov 12 20:55:34.594855 kubelet[2527]: E1112 20:55:34.594798 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\": not found" containerID="3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb"
Nov 12 20:55:34.594855 kubelet[2527]: I1112 20:55:34.594832 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb"} err="failed to get container status \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\": rpc error: code = NotFound desc = an error occurred when try to find container \"3993971f2322cda4060226e36ad0fbb1baad2e16432c3b47d36fb5e9d8dfcbeb\": not found"
Nov 12 20:55:34.594855 kubelet[2527]: I1112 20:55:34.594855 2527 scope.go:117] "RemoveContainer" containerID="87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8"
Nov 12 20:55:34.595130 containerd[1465]: time="2024-11-12T20:55:34.595081216Z" level=error msg="ContainerStatus for \"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8\": not found"
Nov 12 20:55:34.595311 kubelet[2527]: E1112 20:55:34.595273 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8\": not found" containerID="87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8"
Nov 12 20:55:34.595358 kubelet[2527]: I1112 20:55:34.595327 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8"} err="failed to get container status \"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8\": rpc error: code = NotFound desc = an error occurred when try to find container \"87f25210a1dc1a04c142d1b04a55618ccd863e7c7bed363fb304367946277ab8\": not found"
Nov 12 20:55:34.595406 kubelet[2527]: I1112 20:55:34.595362 2527 scope.go:117] "RemoveContainer" containerID="68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f"
Nov 12 20:55:34.595691 containerd[1465]: time="2024-11-12T20:55:34.595649792Z" level=error msg="ContainerStatus for \"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f\": not found"
Nov 12 20:55:34.595804 kubelet[2527]: E1112 20:55:34.595779 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f\": not found" containerID="68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f"
Nov 12 20:55:34.595845 kubelet[2527]: I1112 20:55:34.595805 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f"} err="failed to get container status \"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f\": rpc error: code = NotFound desc = an error occurred when try to find container \"68944318ab6388a7b89c365db1d8239ab82a3ca93ff6080c31acb4a74819c47f\": not found"
Nov 12 20:55:34.595845 kubelet[2527]: I1112 20:55:34.595825 2527 scope.go:117] "RemoveContainer" containerID="072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa"
Nov 12 20:55:34.596024 containerd[1465]: time="2024-11-12T20:55:34.595988331Z" level=error msg="ContainerStatus for \"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa\": not found"
Nov 12 20:55:34.596107 kubelet[2527]: E1112 20:55:34.596087 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa\": not found" containerID="072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa"
Nov 12 20:55:34.596152 kubelet[2527]: I1112 20:55:34.596105 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa"} err="failed to get container status \"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa\": rpc error: code = NotFound desc = an error occurred when try to find container \"072f04b8b05f329a3e2bb518a540a93e74611d280989a2dc0910b12455f63afa\": not found"
Nov 12 20:55:34.596152 kubelet[2527]: I1112 20:55:34.596120 2527 scope.go:117] "RemoveContainer" containerID="9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d"
Nov 12 20:55:34.596320 containerd[1465]: time="2024-11-12T20:55:34.596284031Z" level=error msg="ContainerStatus for \"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d\": not found"
Nov 12 20:55:34.596433 kubelet[2527]: E1112 20:55:34.596410 2527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d\": not found" containerID="9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d"
Nov 12 20:55:34.596510 kubelet[2527]: I1112 20:55:34.596431 2527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d"} err="failed to get container status \"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bfcd32fc788b3caa0f71533264bdbcc7fa5a0e2981f5c6d14a5abae9b5ac56d\": not found"
Nov 12 20:55:35.441094 kubelet[2527]: I1112 20:55:35.441035 2527 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bfa9b3c-019f-40bc-88f8-32f7332c08fe" path="/var/lib/kubelet/pods/7bfa9b3c-019f-40bc-88f8-32f7332c08fe/volumes"
Nov 12 20:55:35.441863 kubelet[2527]: I1112 20:55:35.441837 2527 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93bd25fa-a172-4014-badf-9010674439c3" path="/var/lib/kubelet/pods/93bd25fa-a172-4014-badf-9010674439c3/volumes"
Nov 12 20:55:35.519654 sshd[4214]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:35.532061 systemd[1]: sshd@26-10.0.0.134:22-10.0.0.1:33212.service: Deactivated successfully.
Nov 12 20:55:35.534441 systemd[1]: session-27.scope: Deactivated successfully.
Nov 12 20:55:35.536415 systemd-logind[1451]: Session 27 logged out. Waiting for processes to exit.
Nov 12 20:55:35.547075 systemd[1]: Started sshd@27-10.0.0.134:22-10.0.0.1:33276.service - OpenSSH per-connection server daemon (10.0.0.1:33276).
Nov 12 20:55:35.548775 systemd-logind[1451]: Removed session 27.
Nov 12 20:55:35.587751 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 33276 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:35.589902 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:35.594880 systemd-logind[1451]: New session 28 of user core.
Nov 12 20:55:35.605679 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 12 20:55:36.448668 sshd[4377]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:36.466174 systemd[1]: sshd@27-10.0.0.134:22-10.0.0.1:33276.service: Deactivated successfully.
Nov 12 20:55:36.469899 systemd[1]: session-28.scope: Deactivated successfully.
Nov 12 20:55:36.472956 systemd-logind[1451]: Session 28 logged out. Waiting for processes to exit.
Nov 12 20:55:36.481229 kubelet[2527]: E1112 20:55:36.481171 2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93bd25fa-a172-4014-badf-9010674439c3" containerName="cilium-agent"
Nov 12 20:55:36.481229 kubelet[2527]: E1112 20:55:36.481206 2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7bfa9b3c-019f-40bc-88f8-32f7332c08fe" containerName="cilium-operator"
Nov 12 20:55:36.481229 kubelet[2527]: E1112 20:55:36.481214 2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93bd25fa-a172-4014-badf-9010674439c3" containerName="mount-cgroup"
Nov 12 20:55:36.481229 kubelet[2527]: E1112 20:55:36.481220 2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93bd25fa-a172-4014-badf-9010674439c3" containerName="apply-sysctl-overwrites"
Nov 12 20:55:36.481229 kubelet[2527]: E1112 20:55:36.481227 2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93bd25fa-a172-4014-badf-9010674439c3" containerName="mount-bpf-fs"
Nov 12 20:55:36.481229 kubelet[2527]: E1112 20:55:36.481232 2527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93bd25fa-a172-4014-badf-9010674439c3" containerName="clean-cilium-state"
Nov 12 20:55:36.483713 kubelet[2527]: I1112 20:55:36.481253 2527 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bfa9b3c-019f-40bc-88f8-32f7332c08fe" containerName="cilium-operator"
Nov 12 20:55:36.483713 kubelet[2527]: I1112 20:55:36.481260 2527 memory_manager.go:354] "RemoveStaleState removing state" podUID="93bd25fa-a172-4014-badf-9010674439c3" containerName="cilium-agent"
Nov 12 20:55:36.481275 systemd[1]: Started sshd@28-10.0.0.134:22-10.0.0.1:41552.service - OpenSSH per-connection server daemon (10.0.0.1:41552).
Nov 12 20:55:36.485100 systemd-logind[1451]: Removed session 28.
Nov 12 20:55:36.497229 systemd[1]: Created slice kubepods-burstable-pod8ef025a0_b6a3_4f83_ac88_20a02d28733b.slice - libcontainer container kubepods-burstable-pod8ef025a0_b6a3_4f83_ac88_20a02d28733b.slice.
Nov 12 20:55:36.525376 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 41552 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:36.527559 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:36.534496 systemd-logind[1451]: New session 29 of user core.
Nov 12 20:55:36.542673 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 12 20:55:36.598665 sshd[4391]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:36.614374 systemd[1]: sshd@28-10.0.0.134:22-10.0.0.1:41552.service: Deactivated successfully.
Nov 12 20:55:36.616912 systemd[1]: session-29.scope: Deactivated successfully.
Nov 12 20:55:36.619551 systemd-logind[1451]: Session 29 logged out. Waiting for processes to exit.
Nov 12 20:55:36.620632 kubelet[2527]: I1112 20:55:36.620595 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ef025a0-b6a3-4f83-ac88-20a02d28733b-bpf-maps\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.620708 kubelet[2527]: I1112 20:55:36.620640 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ef025a0-b6a3-4f83-ac88-20a02d28733b-host-proc-sys-kernel\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.620708 kubelet[2527]: I1112 20:55:36.620663 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8ef025a0-b6a3-4f83-ac88-20a02d28733b-cilium-ipsec-secrets\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.620708 kubelet[2527]: I1112 20:55:36.620682 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ef025a0-b6a3-4f83-ac88-20a02d28733b-etc-cni-netd\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.620708 kubelet[2527]: I1112 20:55:36.620701 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jggvm\" (UniqueName: \"kubernetes.io/projected/8ef025a0-b6a3-4f83-ac88-20a02d28733b-kube-api-access-jggvm\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.620851 kubelet[2527]: I1112 20:55:36.620797 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ef025a0-b6a3-4f83-ac88-20a02d28733b-xtables-lock\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.620851 kubelet[2527]: I1112 20:55:36.620844 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ef025a0-b6a3-4f83-ac88-20a02d28733b-cilium-cgroup\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.620924 kubelet[2527]: I1112 20:55:36.620861 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ef025a0-b6a3-4f83-ac88-20a02d28733b-clustermesh-secrets\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.620924 kubelet[2527]: I1112 20:55:36.620899 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ef025a0-b6a3-4f83-ac88-20a02d28733b-host-proc-sys-net\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.620991 kubelet[2527]: I1112 20:55:36.620937 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ef025a0-b6a3-4f83-ac88-20a02d28733b-cni-path\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.620991 kubelet[2527]: I1112 20:55:36.620955 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ef025a0-b6a3-4f83-ac88-20a02d28733b-cilium-config-path\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.621069 kubelet[2527]: I1112 20:55:36.620993 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ef025a0-b6a3-4f83-ac88-20a02d28733b-hubble-tls\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.621069 kubelet[2527]: I1112 20:55:36.621038 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ef025a0-b6a3-4f83-ac88-20a02d28733b-hostproc\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.621069 kubelet[2527]: I1112 20:55:36.621060 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ef025a0-b6a3-4f83-ac88-20a02d28733b-lib-modules\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.621165 kubelet[2527]: I1112 20:55:36.621103 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ef025a0-b6a3-4f83-ac88-20a02d28733b-cilium-run\") pod \"cilium-x8xnj\" (UID: \"8ef025a0-b6a3-4f83-ac88-20a02d28733b\") " pod="kube-system/cilium-x8xnj"
Nov 12 20:55:36.625951 systemd[1]: Started sshd@29-10.0.0.134:22-10.0.0.1:41564.service - OpenSSH per-connection server daemon (10.0.0.1:41564).
Nov 12 20:55:36.627030 systemd-logind[1451]: Removed session 29.
Nov 12 20:55:36.665000 sshd[4402]: Accepted publickey for core from 10.0.0.1 port 41564 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:36.666752 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:36.671903 systemd-logind[1451]: New session 30 of user core.
Nov 12 20:55:36.683787 systemd[1]: Started session-30.scope - Session 30 of User core.
Nov 12 20:55:36.803150 kubelet[2527]: E1112 20:55:36.803090 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:36.804942 containerd[1465]: time="2024-11-12T20:55:36.804858890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8xnj,Uid:8ef025a0-b6a3-4f83-ac88-20a02d28733b,Namespace:kube-system,Attempt:0,}"
Nov 12 20:55:36.832117 containerd[1465]: time="2024-11-12T20:55:36.831941494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:55:36.832117 containerd[1465]: time="2024-11-12T20:55:36.832057440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:55:36.832117 containerd[1465]: time="2024-11-12T20:55:36.832078850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:55:36.833055 containerd[1465]: time="2024-11-12T20:55:36.832969544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:55:36.856868 systemd[1]: Started cri-containerd-7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339.scope - libcontainer container 7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339.
Nov 12 20:55:36.886042 containerd[1465]: time="2024-11-12T20:55:36.885979898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8xnj,Uid:8ef025a0-b6a3-4f83-ac88-20a02d28733b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339\""
Nov 12 20:55:36.886936 kubelet[2527]: E1112 20:55:36.886872 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:36.889935 containerd[1465]: time="2024-11-12T20:55:36.889756345Z" level=info msg="CreateContainer within sandbox \"7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 12 20:55:36.940350 containerd[1465]: time="2024-11-12T20:55:36.940238521Z" level=info msg="CreateContainer within sandbox \"7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"554f41595e6594403cc4b46fd199b3d4ea424fab770119157fb8b4d66bc0a00e\""
Nov 12 20:55:36.941264 containerd[1465]: time="2024-11-12T20:55:36.941122884Z" level=info msg="StartContainer for \"554f41595e6594403cc4b46fd199b3d4ea424fab770119157fb8b4d66bc0a00e\""
Nov 12 20:55:36.976855 systemd[1]: Started cri-containerd-554f41595e6594403cc4b46fd199b3d4ea424fab770119157fb8b4d66bc0a00e.scope - libcontainer container 554f41595e6594403cc4b46fd199b3d4ea424fab770119157fb8b4d66bc0a00e.
Nov 12 20:55:37.051570 systemd[1]: cri-containerd-554f41595e6594403cc4b46fd199b3d4ea424fab770119157fb8b4d66bc0a00e.scope: Deactivated successfully.
Nov 12 20:55:37.055039 containerd[1465]: time="2024-11-12T20:55:37.054790411Z" level=info msg="StartContainer for \"554f41595e6594403cc4b46fd199b3d4ea424fab770119157fb8b4d66bc0a00e\" returns successfully"
Nov 12 20:55:37.332360 kubelet[2527]: E1112 20:55:37.332052 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:37.655729 containerd[1465]: time="2024-11-12T20:55:37.655536485Z" level=info msg="shim disconnected" id=554f41595e6594403cc4b46fd199b3d4ea424fab770119157fb8b4d66bc0a00e namespace=k8s.io
Nov 12 20:55:37.655729 containerd[1465]: time="2024-11-12T20:55:37.655605783Z" level=warning msg="cleaning up after shim disconnected" id=554f41595e6594403cc4b46fd199b3d4ea424fab770119157fb8b4d66bc0a00e namespace=k8s.io
Nov 12 20:55:37.655729 containerd[1465]: time="2024-11-12T20:55:37.655614199Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:38.335612 kubelet[2527]: E1112 20:55:38.335560 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:38.339748 containerd[1465]: time="2024-11-12T20:55:38.339689876Z" level=info msg="CreateContainer within sandbox \"7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 12 20:55:38.491309 kubelet[2527]: E1112 20:55:38.491254 2527 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 12 20:55:38.645167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202961528.mount: Deactivated successfully.
Nov 12 20:55:38.849208 containerd[1465]: time="2024-11-12T20:55:38.849137912Z" level=info msg="CreateContainer within sandbox \"7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5542cef80db39f88e387c80b5884bd587c96956a5997d605d80fc13894264a70\""
Nov 12 20:55:38.849794 containerd[1465]: time="2024-11-12T20:55:38.849765599Z" level=info msg="StartContainer for \"5542cef80db39f88e387c80b5884bd587c96956a5997d605d80fc13894264a70\""
Nov 12 20:55:38.885737 systemd[1]: Started cri-containerd-5542cef80db39f88e387c80b5884bd587c96956a5997d605d80fc13894264a70.scope - libcontainer container 5542cef80db39f88e387c80b5884bd587c96956a5997d605d80fc13894264a70.
Nov 12 20:55:38.921446 systemd[1]: cri-containerd-5542cef80db39f88e387c80b5884bd587c96956a5997d605d80fc13894264a70.scope: Deactivated successfully.
Nov 12 20:55:39.000382 containerd[1465]: time="2024-11-12T20:55:39.000022996Z" level=info msg="StartContainer for \"5542cef80db39f88e387c80b5884bd587c96956a5997d605d80fc13894264a70\" returns successfully"
Nov 12 20:55:39.020262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5542cef80db39f88e387c80b5884bd587c96956a5997d605d80fc13894264a70-rootfs.mount: Deactivated successfully.
Nov 12 20:55:39.126808 containerd[1465]: time="2024-11-12T20:55:39.126707974Z" level=info msg="shim disconnected" id=5542cef80db39f88e387c80b5884bd587c96956a5997d605d80fc13894264a70 namespace=k8s.io
Nov 12 20:55:39.126808 containerd[1465]: time="2024-11-12T20:55:39.126765882Z" level=warning msg="cleaning up after shim disconnected" id=5542cef80db39f88e387c80b5884bd587c96956a5997d605d80fc13894264a70 namespace=k8s.io
Nov 12 20:55:39.126808 containerd[1465]: time="2024-11-12T20:55:39.126774067Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:39.351502 kubelet[2527]: E1112 20:55:39.350026 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:39.352546 containerd[1465]: time="2024-11-12T20:55:39.352492810Z" level=info msg="CreateContainer within sandbox \"7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 12 20:55:39.465958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160156473.mount: Deactivated successfully.
Nov 12 20:55:39.470095 containerd[1465]: time="2024-11-12T20:55:39.470015669Z" level=info msg="CreateContainer within sandbox \"7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"81818d84715f4fef939467ae67b1eb0d70836ad51688d4441f11eabe3cf45df7\""
Nov 12 20:55:39.470869 containerd[1465]: time="2024-11-12T20:55:39.470656821Z" level=info msg="StartContainer for \"81818d84715f4fef939467ae67b1eb0d70836ad51688d4441f11eabe3cf45df7\""
Nov 12 20:55:39.504681 systemd[1]: Started cri-containerd-81818d84715f4fef939467ae67b1eb0d70836ad51688d4441f11eabe3cf45df7.scope - libcontainer container 81818d84715f4fef939467ae67b1eb0d70836ad51688d4441f11eabe3cf45df7.
Nov 12 20:55:39.537208 containerd[1465]: time="2024-11-12T20:55:39.537021899Z" level=info msg="StartContainer for \"81818d84715f4fef939467ae67b1eb0d70836ad51688d4441f11eabe3cf45df7\" returns successfully"
Nov 12 20:55:39.540321 systemd[1]: cri-containerd-81818d84715f4fef939467ae67b1eb0d70836ad51688d4441f11eabe3cf45df7.scope: Deactivated successfully.
Nov 12 20:55:39.737921 containerd[1465]: time="2024-11-12T20:55:39.737846288Z" level=info msg="shim disconnected" id=81818d84715f4fef939467ae67b1eb0d70836ad51688d4441f11eabe3cf45df7 namespace=k8s.io
Nov 12 20:55:39.737921 containerd[1465]: time="2024-11-12T20:55:39.737908052Z" level=warning msg="cleaning up after shim disconnected" id=81818d84715f4fef939467ae67b1eb0d70836ad51688d4441f11eabe3cf45df7 namespace=k8s.io
Nov 12 20:55:39.737921 containerd[1465]: time="2024-11-12T20:55:39.737919744Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:40.355370 kubelet[2527]: E1112 20:55:40.354501 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:40.358285 containerd[1465]: time="2024-11-12T20:55:40.358223500Z" level=info msg="CreateContainer within sandbox \"7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 12 20:55:40.377564 containerd[1465]: time="2024-11-12T20:55:40.377512437Z" level=info msg="CreateContainer within sandbox \"7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2c58e9480302143159a7c1f9da16e83dc28008a7ed4b356c7e99184939933e91\""
Nov 12 20:55:40.378212 containerd[1465]: time="2024-11-12T20:55:40.378184957Z" level=info msg="StartContainer for \"2c58e9480302143159a7c1f9da16e83dc28008a7ed4b356c7e99184939933e91\""
Nov 12 20:55:40.411758 systemd[1]: Started cri-containerd-2c58e9480302143159a7c1f9da16e83dc28008a7ed4b356c7e99184939933e91.scope - libcontainer container 2c58e9480302143159a7c1f9da16e83dc28008a7ed4b356c7e99184939933e91.
Nov 12 20:55:40.441277 systemd[1]: cri-containerd-2c58e9480302143159a7c1f9da16e83dc28008a7ed4b356c7e99184939933e91.scope: Deactivated successfully.
Nov 12 20:55:40.500903 containerd[1465]: time="2024-11-12T20:55:40.500802008Z" level=info msg="StartContainer for \"2c58e9480302143159a7c1f9da16e83dc28008a7ed4b356c7e99184939933e91\" returns successfully"
Nov 12 20:55:40.531781 containerd[1465]: time="2024-11-12T20:55:40.531699206Z" level=info msg="shim disconnected" id=2c58e9480302143159a7c1f9da16e83dc28008a7ed4b356c7e99184939933e91 namespace=k8s.io
Nov 12 20:55:40.531781 containerd[1465]: time="2024-11-12T20:55:40.531772152Z" level=warning msg="cleaning up after shim disconnected" id=2c58e9480302143159a7c1f9da16e83dc28008a7ed4b356c7e99184939933e91 namespace=k8s.io
Nov 12 20:55:40.531781 containerd[1465]: time="2024-11-12T20:55:40.531784153Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:40.862956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c58e9480302143159a7c1f9da16e83dc28008a7ed4b356c7e99184939933e91-rootfs.mount: Deactivated successfully.
Nov 12 20:55:41.358144 kubelet[2527]: E1112 20:55:41.358106 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:41.360443 containerd[1465]: time="2024-11-12T20:55:41.360402652Z" level=info msg="CreateContainer within sandbox \"7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 12 20:55:41.377649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26335971.mount: Deactivated successfully.
Nov 12 20:55:41.381287 containerd[1465]: time="2024-11-12T20:55:41.381239901Z" level=info msg="CreateContainer within sandbox \"7e00d5637b3b77d8906b5dec3dc15ebe2eeaf3c80c46f141636123eda87aa339\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"910f2cfabe7ccca51ee3da7953933cff0c456f728d562af21207f24c3cf6e482\""
Nov 12 20:55:41.381856 containerd[1465]: time="2024-11-12T20:55:41.381813818Z" level=info msg="StartContainer for \"910f2cfabe7ccca51ee3da7953933cff0c456f728d562af21207f24c3cf6e482\""
Nov 12 20:55:41.424626 systemd[1]: Started cri-containerd-910f2cfabe7ccca51ee3da7953933cff0c456f728d562af21207f24c3cf6e482.scope - libcontainer container 910f2cfabe7ccca51ee3da7953933cff0c456f728d562af21207f24c3cf6e482.
Nov 12 20:55:41.456102 containerd[1465]: time="2024-11-12T20:55:41.456037461Z" level=info msg="StartContainer for \"910f2cfabe7ccca51ee3da7953933cff0c456f728d562af21207f24c3cf6e482\" returns successfully"
Nov 12 20:55:41.862196 systemd[1]: run-containerd-runc-k8s.io-910f2cfabe7ccca51ee3da7953933cff0c456f728d562af21207f24c3cf6e482-runc.BrKnmz.mount: Deactivated successfully.
Nov 12 20:55:41.906521 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 12 20:55:42.363256 kubelet[2527]: E1112 20:55:42.363220 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:42.379163 kubelet[2527]: I1112 20:55:42.379083 2527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x8xnj" podStartSLOduration=6.379063256 podStartE2EDuration="6.379063256s" podCreationTimestamp="2024-11-12 20:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:42.378959493 +0000 UTC m=+109.042922217" watchObservedRunningTime="2024-11-12 20:55:42.379063256 +0000 UTC m=+109.043025979"
Nov 12 20:55:43.364827 kubelet[2527]: E1112 20:55:43.364785 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:45.482867 systemd-networkd[1405]: lxc_health: Link UP
Nov 12 20:55:45.491792 systemd-networkd[1405]: lxc_health: Gained carrier
Nov 12 20:55:46.804110 kubelet[2527]: E1112 20:55:46.804066 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:47.309716 systemd-networkd[1405]: lxc_health: Gained IPv6LL
Nov 12 20:55:47.373555 kubelet[2527]: E1112 20:55:47.373516 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:48.375557 kubelet[2527]: E1112 20:55:48.375510 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:51.438199 kubelet[2527]: E1112 20:55:51.438123 2527 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:52.330847 sshd[4402]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:52.335402 systemd[1]: sshd@29-10.0.0.134:22-10.0.0.1:41564.service: Deactivated successfully.
Nov 12 20:55:52.337874 systemd[1]: session-30.scope: Deactivated successfully.
Nov 12 20:55:52.338554 systemd-logind[1451]: Session 30 logged out. Waiting for processes to exit.
Nov 12 20:55:52.339640 systemd-logind[1451]: Removed session 30.
Nov 12 20:55:53.440098 containerd[1465]: time="2024-11-12T20:55:53.440044935Z" level=info msg="StopPodSandbox for \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\""
Nov 12 20:55:53.440576 containerd[1465]: time="2024-11-12T20:55:53.440147696Z" level=info msg="TearDown network for sandbox \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\" successfully"
Nov 12 20:55:53.440576 containerd[1465]: time="2024-11-12T20:55:53.440159849Z" level=info msg="StopPodSandbox for \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\" returns successfully"
Nov 12 20:55:53.440750 containerd[1465]: time="2024-11-12T20:55:53.440715894Z" level=info msg="RemovePodSandbox for \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\""
Nov 12 20:55:53.440780 containerd[1465]: time="2024-11-12T20:55:53.440749336Z" level=info msg="Forcibly stopping sandbox \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\""
Nov 12 20:55:53.440866 containerd[1465]: time="2024-11-12T20:55:53.440847939Z" level=info msg="TearDown network for sandbox \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\" successfully"
Nov 12 20:55:53.902252 containerd[1465]: time="2024-11-12T20:55:53.902158143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:55:53.902252 containerd[1465]: time="2024-11-12T20:55:53.902262709Z" level=info msg="RemovePodSandbox \"e33b3dd4959568d8da449b1f8d61920edd4c12398e215bda98b7b0fa3ecad747\" returns successfully"
Nov 12 20:55:53.903013 containerd[1465]: time="2024-11-12T20:55:53.902847877Z" level=info msg="StopPodSandbox for \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\""
Nov 12 20:55:53.903013 containerd[1465]: time="2024-11-12T20:55:53.902945429Z" level=info msg="TearDown network for sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" successfully"
Nov 12 20:55:53.903013 containerd[1465]: time="2024-11-12T20:55:53.902957311Z" level=info msg="StopPodSandbox for \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" returns successfully"
Nov 12 20:55:53.903467 containerd[1465]: time="2024-11-12T20:55:53.903327690Z" level=info msg="RemovePodSandbox for \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\""
Nov 12 20:55:53.903467 containerd[1465]: time="2024-11-12T20:55:53.903392571Z" level=info msg="Forcibly stopping sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\""
Nov 12 20:55:53.903745 containerd[1465]: time="2024-11-12T20:55:53.903528283Z" level=info msg="TearDown network for sandbox \"bec1f4a6cc42f260a25c72a160b89c5cf9375740b0dd82c2708deaa0c1b27171\" successfully"